• Natanael@slrpnk.net · 8 months ago

    Unironically yes, sometimes. Many of the high-quality works its training samples are drawn from cite the original author’s qualifications, and that filters into the model: asking for the right qualifications directly can nudge it toward relying on higher-quality training samples when generating its response.
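
    As a rough sketch of what that looks like in practice (pure illustration: `build_prompt` is a made-up helper, and you’d pass its result to whatever LLM client you actually use):

    ```python
    # Sketch of "qualification" prompting: prepend the expertise you want
    # the model to emulate. build_prompt() is illustrative, not a real API.

    def build_prompt(question: str, persona: str | None = None) -> str:
        """Optionally prefix a question with a persona/qualifications line."""
        return f"{persona}\n\n{question}" if persona else question

    question = "Explain how TLS certificate pinning works."

    # Baseline: the model leans on "average" training data.
    print(build_prompt(question))

    # Qualified: naming credentials nudges the model toward the register
    # of high-quality sources that cite similar qualifications.
    print(build_prompt(
        question,
        persona=("You are a senior application-security engineer with "
                 "ten years of TLS deployment experience."),
    ))
    ```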

    But it’s still not perfect, obviously. It doesn’t make it stop hallucinating.

    • FaceDeer@fedia.io · 8 months ago

      Yeah, you still need to give an AI’s output an editing and review pass, especially if factual accuracy is important. But while some mock the term “prompt engineering,” there really is a set of tactics you can use when talking to an AI to get it to do a much better job.

      The most amusing one I’ve come across is that some AIs will produce better results if you offer to tip them $100 for a good output, even though there’s no way to actually fulfill such a promise. The theory is that the AI’s training data tended to associate better material with situations where people paid for it, so when you tell the AI you’re willing to pay, it effectively goes “ah, the user is expecting good quality.”
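
      For the curious, the “tip” is literally just extra prompt text; a toy sketch (the wording is illustrative, and whether it helps at all varies by model):

      ```python
      # Toy sketch of the "tip" tactic: the promise is pure prompt text
      # and nothing is ever paid. Any effect depends entirely on the model.

      TIP_SUFFIX = " I'll tip you $100 for a thorough, accurate answer."

      def with_tip(prompt: str) -> str:
          """Append a (never-honored) tip offer to a prompt."""
          return prompt + TIP_SUFFIX

      print(with_tip("Summarize the key differences between TCP and QUIC."))
      ```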

      You shouldn’t have to worry about really quirky stuff like that unless you’re an AI power user, but a simple request for high-quality output can go a long way. Assuming you want high-quality output, that is. You could also ask an AI for a “cheesy low-quality high-school essay riddled with malapropisms” on a subject, for example, and that would be a different sort of deviation from “average.”
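
      Same mechanism pointed the other way; two illustrative prompt strings (topic chosen arbitrarily):

      ```python
      # The quality knob cuts both ways: name the register you want and
      # the model aims for that end of its training distribution.

      topic = "the causes of the French Revolution"

      high = f"Write a rigorous, well-sourced essay on {topic}."
      low = (f"Write a cheesy low-quality high-school essay riddled with "
             f"malapropisms on {topic}.")

      print(high)
      print(low)
      ```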