• 0 Posts
  • 5 Comments
Joined 1 year ago
Cake day: June 17th, 2023


  • Not quite ELI5 but I’ll try “basic understanding of calculus” level.

    In very broad terms, the model learns complex relationships between words (or tokens, to be specific; explained below) as probabilistic scores. At its simplest, this could mean the likelihood of one word appearing next to another in the massive amounts of text the model was trained with: the words “apple” and “pie” are often found together, so they might have a high-ish score of 0.7, while the words “apple” and “chair” might have a lower score of just 0.2. Recent GPT models consist of several billion of these scores, known as the weights. Once their values have been established by feeding lots of text through the model’s training process, they are all that’s needed to generate more text.
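    As a toy illustration of what “scores between words” could look like, here’s a tiny sketch (the words and numbers are just the made-up examples from above; a real GPT does not store a simple lookup table like this, as the end of this comment touches on):

    ```python
    # Toy illustration only: pretend the model is a table of pairwise
    # "affinity" scores between words. A real GPT's billions of weights
    # encode far more complicated patterns than two-word pairs.
    pair_scores = {
        ("apple", "pie"): 0.7,    # words that often appear together
        ("apple", "chair"): 0.2,  # words that rarely do
    }

    def affinity(word_a: str, word_b: str) -> float:
        """Return the made-up co-occurrence score for a pair of words."""
        return pair_scores.get((word_a, word_b), 0.0)

    print(affinity("apple", "pie"))    # 0.7
    print(affinity("apple", "chair"))  # 0.2
    ```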

    Without getting into the math too much, this is how a GPT model then uses these numbers to come up with words:

    • The input prompt is first chopped up into tokens, each of which is assigned a number. For example, the OpenAI tokenizer translates “Hello world!” into the numbers [15496, 995, 0]. You can think of this as the A=1, B=2, C=3… cipher we all learnt as kids, except that numbers are also assigned to common words, syllables and punctuation. (A runnable tokenizer example follows this list.)
    • These numbers are inserted into a massive system of equations, where they are multiplied with the model’s billions of weights in a specific manner. This calculation results in a probability score from 0 to 1 for each token known to the model, representing how likely that token is to appear next in sequences that look similar to your input.
    • One of the highest-scoring tokens is then chosen semi-randomly as the model’s output, to provide some variance.
    • This cycle is repeated over and over, generating the text one token at a time. (A rough sketch of the whole loop also follows below.)
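    To make the first bullet concrete, here’s a minimal sketch using OpenAI’s open-source tiktoken library with its GPT-2-era encoding, which is the one that produces the numbers quoted above (newer encodings assign different IDs):

    ```python
    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("gpt2")  # older OpenAI byte-pair encoding

    tokens = enc.encode("Hello world!")
    print(tokens)               # [15496, 995, 0]
    print(enc.decode([15496]))  # "Hello"
    print(enc.decode([995]))    # " world" (note the leading space)
    print(enc.decode([0]))      # "!"
    ```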
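    And here’s a rough sketch of the generate-one-token-at-a-time loop from the last three bullets. The `model` function is just a stand-in that returns random scores so the snippet runs on its own; in a real GPT that’s where the billions of weights get multiplied with the input. The rest (turning scores into probabilities and picking the next token semi-randomly) mirrors how decoding typically works:

    ```python
    import numpy as np

    VOCAB_SIZE = 50257  # size of the GPT-2-era vocabulary, as an example

    def model(token_ids: list[int]) -> np.ndarray:
        """Stand-in for the real network: returns one raw score per token
        in the vocabulary. A real GPT computes these from its weights."""
        rng = np.random.default_rng(seed=len(token_ids))
        return rng.normal(size=VOCAB_SIZE)

    def softmax(scores: np.ndarray) -> np.ndarray:
        """Turn raw scores into probabilities between 0 and 1 that sum to 1."""
        e = np.exp(scores - scores.max())
        return e / e.sum()

    def generate(prompt_tokens: list[int], n_new: int, temperature: float = 0.8) -> list[int]:
        tokens = list(prompt_tokens)
        for _ in range(n_new):
            scores = model(tokens)                 # score every known token
            probs = softmax(scores / temperature)  # lower temperature = less random
            next_token = int(np.random.choice(VOCAB_SIZE, p=probs))  # semi-random pick
            tokens.append(next_token)              # feed it back in and repeat
        return tokens

    print(generate([15496, 995, 0], n_new=5))  # prompt tokens plus five new ones
    ```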

    In reality we’re not quite sure what the weights represent to the model exactly, but this is the gist of it. All we know is that they signify the importance (or lack of importance) that the model places on some pattern that was present in the training data. Some of these patterns could be just simple two-word pairs, but many are probably much more complicated. Lots of researchers are currently trying to get a better idea of how these numbers actually affect the model’s output.


  • Taival@suppo.fi to Disc Golf@lemmy.world · Favorite Finds ’23

    I’ve really taken to the Axiom Proxy for a neutral throwing putter. I used to throw P2s, but these are ever so slightly more shallow, allowing for cleaner releases with my small hands, I feel. I’ve also been testing an Envy in the bag for the past week and really liking it so far, for similar reasons.


  • I’m currently maintaining a multi-million-line VB.NET code base, the foundations of which were hastily laid down by young and inexperienced devs realizing a business opportunity in the early 2000s. There are lots of these out there in the enterprise world from what I hear, and I think this is where the language gets its reputation from. Sure, at its best it’s just C# with words in place of curly braces, but that’s only the case with well-disciplined programmers (and even then, why not just use C#?). Option Strict is, well, just an option, and even the infamous On Error Resume Next is still usable in VB.NET to this day afaik. A lot more room for shooting yourself (or the next person reading your code) in the foot if you don’t know what to look out for.