• FaceDeer@fedia.io · 2 days ago (+10/−3)

    The term “artificial intelligence” has been in use since the 1950s and it encompasses a wide range of fields in computer science. Machine learning is most definitely included under that umbrella.

    Why do you think an AI can’t double-check things and fix them when it notices problems? It’s a fairly straightforward process.

      • barsoap@lemm.ee · 23 hours ago (+1/−1)

        What are you trying to argue, that humans aren’t Turing-complete? Which would be an insane self-own. That we can decide the undecidable? That would prove you don’t know what you’re talking about, it’s called undecidable for a reason. Deciding an undecidable problem makes as much sense as a barber who shaves everyone who doesn’t shave themselves.

        Aside from that, why would you assume that checking results would, in general, involve solving the halting problem?
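
        To make the “it’s called undecidable for a reason” point concrete, here’s a sketch (in Python, with names of my own choosing, not any real library) of Turing’s diagonal argument: any candidate halting-checker can be fed a program built to do the opposite of whatever the checker predicts.

```python
# Sketch of the diagonal argument. `make_paradox` and the candidate
# checkers below are illustrative, not a real API.

def make_paradox(halts):
    """Given any claimed halting-checker, build a program it gets wrong."""
    def paradox():
        if halts(paradox):   # checker says "paradox halts"...
            while True:      # ...so loop forever, proving it wrong
                pass
        return None          # checker says "loops forever": halt at once
    return paradox

# A checker that always answers "loops forever" is refuted immediately:
q = make_paradox(lambda f: False)
q()  # returns instantly, contradicting the checker's verdict

# A checker that always answers "halts" is refuted too: running
# make_paradox(lambda f: True)() would loop forever.
```

        The same construction defeats every checker, however clever — that is the information-theoretical sense of “undecidable” being argued about in this thread.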

        • dustyData@lemmy.world · 19 hours ago (+1)

          It has nothing to do with whether humans are Turing complete or not. No Turing machine is capable of solving an undecidable problem. But humans can solve undecidables. Machines cannot solve the problem the way a human would. So, no, humans are not machines.

          This by definition limits the autonomy a machine can achieve. A human can predict when a task will cause a logic halt and prepare or adapt accordingly; a machine can’t, unless a programmer intentionally limits it so that it stops being Turing complete and accounts for the undecidables beforehand (thus, with the help of a human). This is why machines suck at unpredictable or ambiguous tasks that humans fulfill effortlessly on the daily.

          This is why a machine that adapts to the real world is so hard to make. This is why autonomous cars can only drive in pristine weather, on detailed, premapped, high-maintenance roads, with a vast array of sensors. This is why robot factories are extremely controlled and regulated environments. This is why you have to rescue your Roomba regularly. Operating on the biggest undecidable there is (i.e. the future parameters of operation) is the biggest as-yet-unsolved technological problem (next to sensor integration for world parametrization and modeling). Machine learning is a step toward it, on a road several thousand miles long that is yet to be traversed.

          • barsoap@lemm.ee · 17 hours ago, edited (+1/−1)

            But humans can solve undecidables.

            No, we can’t. Or, more precisely: there is no version of your assertion that would be compatible with cause and effect, or with physics as we understand it.

            Don’t blame me, I didn’t do it. The universe just is that way.

            • dustyData@lemmy.world · 17 hours ago (+1)

              Yet we live in a world where millions of humans assert their will over undecidables every day. Because we can make irrational decisions, logic be damned. Explain that one.

              • barsoap@lemm.ee · 17 hours ago (+1/−1)

                That’s not deciding anything in the information-theoretical sense. We rely a lot on approximations and heuristics for day-to-day functioning.

                You can’t decide the halting problem by saying “I’ll have a glance at it and go with whatever I think after thinking about it for half a second”. That’s not deciding the problem, that’s giving up on it, and computers are perfectly capable of doing that.

      • FaceDeer@fedia.io · 1 day ago (+3/−3)

        The halting problem is an abstract mathematical issue; in actual real-world scenarios it’s trivial to handle cases where you don’t know how long a process will run. Just add a check that watches for the process running too long and breaks into some kind of handler when that happens.
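
        A minimal sketch of that watchdog pattern in Python (function names are my own, not any particular library’s): run the possibly-nonterminating work in a child process and bail out into a handler when a time budget is exceeded, instead of trying to “decide” whether it halts.

```python
# Hedged sketch: run a possibly-nonterminating snippet in a child
# interpreter and give up after a time budget.
import subprocess
import sys

def run_with_budget(snippet: str, timeout_s: float) -> str:
    """Execute a Python snippet in a subprocess; kill it if it overruns."""
    try:
        subprocess.run([sys.executable, "-c", snippet], timeout=timeout_s)
        return "finished"
    except subprocess.TimeoutExpired:
        # The child is killed; control breaks into this handler.
        return "timed out"

print(run_with_budget("while True: pass", 0.5))  # the classic non-halter
print(run_with_budget("print('done')", 5.0))
```

        This doesn’t solve the halting problem — it sidesteps it, which is usually all a real application needs.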

        I’m a professional programmer; I deal with this kind of thing all the time. I’ve literally written applications using LLMs that do this.