• Eldritch@lemmy.world · +26/−10 · 17 hours ago

    Computers have always been good at pattern recognition; this isn’t new. LLMs are not a form of actual AI. They are programs capable of recognizing patterns and loosely reproducing them in semi-randomized ways. The reason these so-called generative AI solutions have trouble generating the right number of fingers isn’t just that they have no idea how many fingers a person is supposed to have; they have no idea what a finger is.

    The same goes for code completion. They will just generate something that fits the pattern they’re told to look for, and it doesn’t matter if it’s right or wrong, because they have no concept of right or wrong beyond fitting the pattern. Not to mention that we’ve had code-completion software for over a decade at this point; LLMs do it less efficiently and less reliably. Their only upside is that they can sometimes recognize and suggest a pattern that the people programming the other coding helpers might have missed. Beyond that, such as generating whole blocks of code or even entire programs, you can’t even get an LLM to reliably spit out a hello-world program.
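    The “fits the pattern, right or wrong” idea can be sketched with a toy bigram model; this is a deliberately crude stand-in for illustration, not how a real LLM works, and every name in it is hypothetical:

```python
import random

def train_bigrams(corpus):
    """Record which word followed which in the training text."""
    words = corpus.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Emit words by sampling whatever followed the current word in
    training. The output always 'fits the pattern', but the model has
    no notion of whether the sentence it builds is true or sensible."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

table = train_bigrams("the cat sat on the mat the dog sat on the rug")
print(generate(table, "the", 5))
```

    Every pair of adjacent output words occurred somewhere in the training text, which is the whole and only guarantee the model gives.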

    • JohnEdwa@sopuli.xyz · +12/−5 · 15 hours ago

      “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’”
      -Pamela McCorduck

      “AI is whatever hasn’t been done yet.”
      - Larry Tesler

      That’s the curse of the AI Effect.
      Nothing will ever be “an actual AI” until we cross the barrier to an actual human-like general artificial intelligence like Cortana from Halo, and even then people will claim it isn’t actually intelligent.

      • ssfckdt@lemmy.blahaj.zone · +5 · 12 hours ago

        I mean, I think intelligence requires the ability to integrate new information into one’s knowledge base. LLMs can’t do that, they have to be trained on a fixed corpus.

        Also, LLMs have a pretty shit-tastic track record of being able to differentiate correct data from bullshit, which is a pretty essential facet of intelligence IMO

        • JohnEdwa@sopuli.xyz · +7 · 12 hours ago

          LLMs have a perfect track record of doing exactly what they were designed to do: take an input and create a plausible output that looks like it was written by a human. They just completely lack the part in the middle that properly understands the input and makes sure the output is factually correct, because if they had that, they wouldn’t be LLMs any more; they’d be AGI.
          The “artificial” in AI also carries the sense of “fake”: something that looks and feels intelligent, but actually isn’t.

      • Eldritch@lemmy.world · +5/−1 · 14 hours ago

        Well, at least until those who study intelligence and self-awareness actually come up with a comprehensive definition for it, something we don’t even have currently, which makes the situation even sillier. The people selling LLMs and AGI as artificial intelligence are the P.T. Barnum of the modern era. “This way to the egress, folks! Come see the magnificent egress!”

        • JohnEdwa@sopuli.xyz · +4/−8 · edited · 13 hours ago

          They already did. AGI - artificial general intelligence.

          The thing is, AGI and AI are different things. As for your “LLMs aren’t real AI” claim: large language models are a type of machine learning model, and machine learning is a field of study within artificial intelligence.
          LLMs are AI. Search engines are AI. Recommendation algorithms are AI. Siri, Alexa, self driving cars, Midjourney, Elevenlabs, every single video game with computer players, they are all AI. Because the term “Artificial Intelligence” by itself is extremely loose, and includes the types of narrow AI all of those are.
          Which then get hit by the AI Effect, and become “just another thing computers can do now”, and therefore, “not AI”.

          • Eldritch@lemmy.world · +7 · 12 hours ago

            That just compares it to human-level intelligence, something we cannot currently even quantify, let alone understand. It’s ultimately a comparison, a simile, not a scientific definition.

            Search engines have always been databases with interfaces programmed by humans, not AI. They’ve never suddenly gained new functionality inexplicably; if there’s a new feature, someone programmed it.

            Search engines are, however, becoming LLMs, and are getting worse for it, unless you think eating rocks and glue is particularly intelligent. There is no comprehension there; it’s simply trying to make its output match patterns it recognizes. That’s a precursor step, but it is not “intelligence”. Unless a program doing what it’s programmed to do counts as artificial intelligence, which is such a meaningless measure that Notepad would be artificial intelligence, Windows would be artificial intelligence, Linux would be artificial intelligence.

              • Saledovil@sh.itjust.works · +1 · 58 minutes ago

                You can’t just throw out random Wikipedia links. For example, the article on AGI explicitly says we don’t have a definition of what human-level cognition actually is, which is what the person you were replying to was saying. You’re making a fallacious appeal to authority, except the authority doesn’t agree with you.

          • ssfckdt@lemmy.blahaj.zone · +3 · 12 hours ago

            That’s a disturbing handwave. “We don’t really know what intelligence is, so therefore, anything we call intelligence is fair game”

            A thermometer tells me what temperature it is. It senses the ambient heat energy and responds with a numeric indicator. Is that intelligence?

            My microwave stops when it notices steam from my popcorn bag. Is that intelligence?

            If I open an encyclopedia book to a page about computers, it tells me a bunch of information about computers. Is that intelligence?
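            The thermometer and microwave examples above reduce to a single sense-and-respond conditional; a minimal sketch (the function name and threshold are invented for illustration):

```python
def thermostat(ambient_celsius, setpoint=20.0):
    """Sense one value, respond with one action.
    No model of the world, no goals, no learning: just a rule."""
    if ambient_celsius < setpoint:
        return "heat on"
    return "heat off"

print(thermostat(15.0))   # "heat on"
print(thermostat(25.0))   # "heat off"
```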

    • Mak'@pawb.social · +8/−5 · 16 hours ago

      I never know what to think when I come across a comment like this one—which does describe, even if only at a surface level, how an LLM works—with 50% downvotes. Like, are people angry at reality, is that it?

      • Eldritch@lemmy.world · +10 · 14 hours ago

        With as much misinformation as is being spread about LLMs, going into anything more than a generalization would only lose more people’s comprehension.

        The problem is people are being sold AGI, but ChatGPT and all these other tools don’t even remotely qualify for that. They’re really nothing more than a glorified ALICE chatbot on steroids. The one neat new trick in all this is that the training has been automated a bit, but these LLMs have no more comprehension of their output, or the input they were given, than something like the old ALICE chatbot did.

        These tools have been described to laymen as artificial intelligence for decades at this point, which makes that calcified opinion really hard to change. People would rather believe it’s something magical than just probability and math.

        • snooggums@lemmy.world · +4/−1 · 9 hours ago

          They are bullshit machines, trained to output something that users think is the right output.

      • Naz@sh.itjust.works · +2 · 13 hours ago

        Downvoting someone on the Internet is easier than tangentially modifying reality in a measurable way

    • brie@programming.dev · +1/−3 · 10 hours ago

      Large-context-window LLMs can do quite a bit more than gap-filling and completion; they can edit multiple files.

      Yet they’re unreliable, as they hallucinate all the time. Debugging LLM-generated code is a new skill, and it’s up to you whether to learn it or not; I see quite an even split among devs. I think it’s worth it, though it once took me two hours to find a very obscure bug in LLM-generated code.
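      As a hypothetical example of the kind of obscure bug that turns up in generated code: plausible-looking Python can quietly share state through a mutable default argument (the function names here are invented for illustration):

```python
def append_log(entry, log=[]):          # looks fine, but the default list is created once
    log.append(entry)
    return log

first = append_log("a")
second = append_log("b")
print(second)                           # ['a', 'b'] ... "a" leaked in from the earlier call

def append_log_fixed(entry, log=None):  # the usual fix: build a fresh list per call
    if log is None:
        log = []
    log.append(entry)
    return log

print(append_log_fixed("b"))            # ['b']
```

      The buggy version passes a quick glance and even a single-call test; it only misbehaves across calls, which is exactly the sort of thing that eats debugging hours.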

      • NigelFrobisher@aussie.zone · +1 · edited · 41 minutes ago

        I have one of those at work now, but my experience with it is still quite limited. Copilot was quite useful for knocking up quick boutique solutions to particular problems (stitch together a load of PDFs sorted on a name heading), with the proviso that you might end up having to repair bleed between dependency versions and fix syntax. I couldn’t trust it with big refactors of existing systems.

      • cley_faye@lemmy.world · +1 · 1 hour ago

        If you consider debugging broken LLM-generated code to be a skill… sure, go for it. But since generated code can lean on tons of unknown side effects and other (to a human) seemingly random stuff to achieve its goal, I’d rather take the other approach: let a human spend half an hour writing the code some LLM could generate in seconds, and skip having to learn how to parse random mumbo jumbo from a machine, while still getting a working result.

        Writing code is far from the longest part of the job, and you’ve somehow decided that making the tedious part even more tedious is a great way to shorten the already-short part of it…