Feel like we’ve got a lot of tech-savvy people here, so it seems like a good place to ask. Basically, as a dumb guy that reads the news, it seems like everyone that lost their mind (and savings) on crypto just pivoted to AI. On top of that you’ve got all these people invested in AI companies running around with flashlights under their chins like “bro this is so scary how good we made this thing”. Seems like bullshit.

I’ve seen people generating bits of programming with it, which seems useful, but idk man. Coming from CNC, I don’t think I’d just send it with some ChatGPT code. Is it all hype? Is there something actually useful under there?

    • tara@lemmy.blahaj.zone · 1 year ago

      So typically there are 4 main competing interpretations of what AI is:

      1. Acting like a human
      2. Thinking like a human
      3. Acting rationally
      4. Thinking rationally

      These are from Russell and Norvig’s “Artificial Intelligence: A Modern Approach”.

      Alan Turing’s “Turing Test” checks whether a given agent is artificially intelligent according to definition #1. The test involves a human conversing with the agent via text messages and deciding whether the agent is human or not. Large language models, a form of machine learning, can produce chatbot agents which pass this test: a suitably prompted instance of GPT-4 texting with an assessor, for example. The assessor also occasionally interacts with real humans, so they can’t simply assume every correspondent is a machine.

      By this point, I think that machine learning in the form of an LLM can achieve artificial intelligence according to definition #1, but that isn’t what most non-tech non-academic people mean by AI.

      The mainstream definition of AI is closer to what we would call Artificial General Intelligence (AGI): an agent that meets one or more of those criteria across many scenarios and situations it has never encountered before.

      Many would argue that LLMs like GPT-4 do not meet the criteria for AGI because they are not general enough: they can’t, for example, learn to play an Atari game or pick up an entirely unseen language to fluency.

      This is the difference between an LLM and a fictional AGI like GLaDOS or Skynet.

      Additionally, there are forms of machine learning, like k-means clustering, whose only function is to identify related groups within a dataset. I would assert these are not AI, although a weak argument could be made that they are thinking “rationally” enough to meet definition #4.
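
      To make that concrete, here’s a minimal sketch of what k-means actually does (assuming scikit-learn; the 2-D points are made up for illustration). It just partitions points into groups by distance; there’s no reasoning involved.

          # Minimal k-means sketch: groups points by distance, nothing more.
          # Assumes scikit-learn is installed; the data points are invented.
          import numpy as np
          from sklearn.cluster import KMeans

          points = np.array([
              [1.0, 1.2], [0.8, 1.1], [1.1, 0.9],   # blob near (1, 1)
              [8.0, 8.2], [7.9, 8.1], [8.3, 7.8],   # blob near (8, 8)
          ])

          kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

          print(kmeans.labels_)           # which group each point fell into, e.g. [0 0 0 1 1 1]
          print(kmeans.cluster_centers_)  # the two group centres it found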

      Then there are forms of AI which are not machine learning, such as heuristic agents, where the reasoning is hard-coded by humans: the chess engine Stockfish, for example, or the AI found in most video games.
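
      For contrast, a hard-coded heuristic agent can be as simple as a handful of rules a human wrote down. Here’s a toy sketch (the guard behaviour and thresholds are invented for illustration) of the kind of rule-based “AI” you’d find in a simple video game:

          # Toy hard-coded "game AI": no learning, just rules a human wrote down.
          # The guard behaviour and thresholds are invented for illustration.

          def guard_action(distance_to_player: float, own_health: float) -> str:
              """Pick an action for an enemy guard using fixed, human-written rules."""
              if own_health < 0.25:
                  return "flee"      # badly hurt: run away
              if distance_to_player < 2.0:
                  return "attack"    # player is in melee range
              if distance_to_player < 10.0:
                  return "chase"     # player is visible: close the gap
              return "patrol"        # nothing interesting nearby

          # A healthy guard with the player 5 units away decides to chase.
          print(guard_action(distance_to_player=5.0, own_health=0.9))  # -> chase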

      Ultimately, “AI” can describe machine learning if it’s understood as anything that meets one or more of those definitions. But since most people say AI when they mean AGI, I think “machine learning” is a better term: less undeserved hype, less marketing disinformation, and generally clearer about what is actually being talked about.