• Aceticon@lemmy.world

    The thing is, in general computing it was humans who figured out how to build support for complex abstractions up from support for the simplest concepts. To be AGI, this would have to not just support the simple concepts but figure out and build support for complex abstractions by itself.

    Training a neural network to do a simple task (such as addition) isn’t all that hard (I get the impression that the “breakthrough” here is that they got an LLM - which is a very specific kind of NN, for language - to do it); getting it to build support for complex abstractions from support for simpler concepts by itself is something else altogether.
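
    For illustration, a minimal sketch of the textbook toy case (nothing to do with whatever OpenAI actually did): training a single linear unit to learn addition from examples:

        import numpy as np

        # Toy case: train a single linear unit y = w1*a + w2*b on addition examples.
        rng = np.random.default_rng(0)
        X = rng.uniform(-10, 10, size=(1000, 2))   # input pairs (a, b)
        y = X.sum(axis=1)                          # targets a + b

        w = rng.normal(size=2)                     # random initial weights
        lr = 0.01
        for _ in range(500):
            pred = X @ w
            grad = 2 * X.T @ (pred - y) / len(X)   # gradient of mean squared error
            w -= lr * grad

        print(w)                         # converges to roughly [1.0, 1.0]
        print(np.array([3.0, 4.0]) @ w)  # roughly 7.0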

    • ChrisLicht@lemm.ee

      I know jack shit, but actual mastery of first principles would seem a massive leap in LLM development. A shift from talented bullshitter to deductive extrapolator does sound worthy of notice/concern.

      • Aceticon@lemmy.world

        The simplest way to get an LLM to “do” maths is to have it translate human-language tokens about maths into a standard set of maths tokens, pass those to a perfectly normal library that does the maths, and then translate the results back into human-language tokens. Easy-peasy, the LLM “does maths” - except it doesn’t: it’s just integrated with something else (coded by a human) that does the maths, and it only serves as a translation layer.
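
        For the avoidance of doubt, that pattern looks roughly like this - a sketch, where the two translation functions are hypothetical stand-ins for what the LLM would be trained to do:

            import ast
            import operator

            # Hand-coded maths engine: a tiny, safe arithmetic evaluator.
            # All the actual maths lives here, written by a human.
            OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
                   ast.Mult: operator.mul, ast.Div: operator.truediv}

            def evaluate(expr: str) -> float:
                def walk(node):
                    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                        return OPS[type(node.op)](walk(node.left), walk(node.right))
                    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                        return node.value
                    raise ValueError("unsupported expression")
                return float(walk(ast.parse(expr, mode="eval").body))

            def llm_to_math_tokens(question: str) -> str:
                # Hypothetical stand-in for the LLM's learned translation,
                # hard-coded here just so the demo runs.
                words = {"two": "2", "three": "3", "plus": "+", "minus": "-"}
                return " ".join(words[w] for w in question.lower().rstrip("?").split()
                                if w in words)

            def math_to_llm_tokens(result: float) -> str:
                # Hypothetical stand-in for translating the result back to language.
                return f"The answer is {result}."

            def answer(question: str) -> str:
                expr = llm_to_math_tokens(question)   # "What is two plus three?" -> "2 + 3"
                result = evaluate(expr)               # ordinary hand-coded code does the maths
                return math_to_llm_tokens(result)

            print(answer("What is two plus three?"))  # The answer is 5.0.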

        Further, the actual implementation of the LLM itself is already doing maths. For example, a single neuron can add 2 numbers by having 2 inputs, each with a weight of 1, and a single output - that’s exactly how the simplest of neurons already calculates an output from its inputs in a standard neural network implementation. It can do simple maths because the very implementation is already doing maths: the “ability” to do maths is supported by the programming language in which the LLM was coded, so the LLM would be doing maths with as much cognition as a human does food digestion.
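
        Concretely - a minimal sketch of that single neuron, with the weights set by hand rather than trained:

            import numpy as np

            def neuron(inputs, weights, bias=0.0):
                # A standard artificial neuron with a linear (identity) activation:
                # output = sum(input_i * weight_i) + bias
                return np.dot(inputs, weights) + bias

            # Two inputs, both with weight 1: the ordinary weighted sum IS addition.
            print(neuron([2.0, 3.0], [1.0, 1.0]))  # 5.0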

        Given the amount of bullshit in the AI domain, I would be very, very wary of presuming this breakthrough is anywhere near an entirely independent, self-assembled (as in, trained rather than coded) maths engine.

        • ChrisLicht@lemm.ee

          This sounds very knowledgeable. If the reporting is to be believed, why do you think the OpenAI folks might be so impressed by the Q* model’s skills in simple arithmetic?

    • serialandmilk@lemmy.ml

      The thing is, in general computing it was humans who figured out how to build support for complex abstractions up from support for the simplest concepts. To be AGI, this would have to not just support the simple concepts but figure out and build support for complex abstractions by itself.

      Absolutely

      “breakthrough” here is that they got an LLM - which is a very specific kind of NN, for language - to do it

      To some degree this is how humans are able to go about creating abstractions. Intelligence isn’t 1:1 with language, but language is part of the puzzle. Communicating your mathematical concepts and abstractions in a way that can be replicated and confirmed with a rigorous proof/scientific method requires language.

      Speech and writing are touch at a distance. Speech moves the air to eventually touch nerve endings in the ear and brain. Similarly, yet very differently, writing stores ideas (symbols, emotions, images, words, etc.) as an abstraction on/in some type of storage media (ink on paper, stone etching stone, laser cutting words into metal, a stick in the mud…) to reflect just the right wavelengths of light into the sensors in your retina, focused by your lenses, “touching” you from a distance as well.

      Having two-plus “language” models be capable of using an abstraction to work through mathematical ideas is absolutely the big deal…

      • Aceticon@lemmy.world

        Don’t take this badly, but you’re both overcomplicating (by totally unnecessarily “decorating” your post with wholly irrelevant details on the transmission and reception of specific forms of human communication) and oversimplifying (by going for some pretty irrelevant details and getting some of them wrong).

        Also, there’s just one language model. The means by which the language was transmitted and turned into data (sound, images, direct ASCII data, whatever) are something entirely outside the scope of the language model.

        You have a really, really confused idea of how all of this works, and not just the computing stuff.

        Worse, even putting aside all of that “wtf” stuff about language transmission processes in your post, even getting an LLM to do maths from language might not be a genuine breakthrough: they might have done this “maths support” by cheating. For example, the NN could simply recognize maths-related language and transform maths-related language tokens into standard maths tokens that a perfectly normal algorithmic engine (i.e. hand-coded by humans) uses to calculate the result, with the answer then translated back into human-language tokens. That wouldn’t be the “AI” part doing or understanding the concept of maths in any way whatsoever - just the AI translating tokens between formats while an algorithmic piece of software designed by a person does the actual maths using hardcoded algorithms. Somebody integrating a maths-calculating program into an LLM isn’t AI, it’s just normal coding.

        Also, the basis of the actual implementation of an LLM is basic maths, and it’s stupidly simple to get, for example, a neuron in a neural network to add 2 numbers.