• bufordt@sh.itjust.works · 1 year ago

    It’s similar in IT. Almost no one recommends regular password changes anymore, but we won’t pass our audit if we don’t require password changes every 90 days.

      • bufordt@sh.itjust.works · 1 year ago

        When we first switched to JD Edwards, it still sent passwords in plain text, and our Oracle partner set up our WebLogic instances over HTTP instead of HTTPS.

        I had to prove I could steal passwords with nothing more than local admin on a workstation before they made encrypting the traffic a priority.

    • InfiniWheel@lemmy.one · 1 year ago

      A very non-techy relative works at a company that requires a password change every month. At this point his passwords are extremely easy to guess: basically 123aBc+ and variations of it.

      Yeah, no clue how that practice caught on.

      • ddh@lemmy.sdf.org · 1 year ago (edited)

        Our IT department won’t allow password managers. Their current stance on what we should do instead is “Uh, we’re working on it.” So everyone at work uses weak passwords and writes them down in Notepad. headdesk

      • Corkyskog@sh.itjust.works · 1 year ago

        I never understood why this caught on; you even see it recommended for personal accounts, which is just stupid. The only reason it existed in the first place was concern about shoulder surfers.

  • AlexRogansBeta@kbin.social · 1 year ago

    I feel this in my bones as an anthropologist when it comes to semi-structured interviews, which frankly have very little to do with anthropological inquiry but have nonetheless become a rote methodology.

  • Hellsadvocate@kbin.social · 1 year ago

    It makes me wonder whether we could create AIs that behave close enough to humans by adding neurological-style baseline noise to LLM training, then throwing them into simulations to see whether social-science methods still hold up. I’d be curious how true to life something like that would be.

    A while ago, some researchers designed a game in which ChatGPT-driven characters were told to act and live like humans. It was interesting to watch. https://www.iflscience.com/stanford-scientists-put-chatgpt-into-video-game-characters-and-its-incredible-68434