  • It’s not its biological origins that make the brain hard to understand - it’s the complexity. For example, we understand how the heart works pretty well.

    While LLMs are nowhere near as complex as a brain, they’re complex enough to be extremely difficult to understand.

    But then comes the question: if they’re so difficult to understand, how did people make them in the first place?

    The way they did it actually bears some similarities to evolution. They created an “empty” model - a large neural network that wasn’t yet doing anything useful or meaningful. But its behavior depended on billions of parameters, and if you tweak a parameter, the behavior changes slightly.

    Then they expended an enormous amount of computing power tweaking parameters, each tweak slightly improving the model’s ability to model language. While doing this, they didn’t know what each number meant. They didn’t know how or why each tweak was improving the model - just that it was.

    Unlike evolution, each tweak isn’t random. There’s an algorithm called back-propagation that can tell you how to tweak the neural network so it predicts some known data slightly better. But unfortunately it tells you nothing about why that tweak is good, or what each parameter change means. That’s why we don’t understand how LLMs work.
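
    To make that concrete, here’s a minimal sketch of such a tweak loop in PyTorch - a toy model and made-up data, nothing like a real LLM’s training setup, but the same principle:

        import torch

        # An "empty" model: its parameters start out as meaningless random numbers.
        model = torch.nn.Linear(8, 1)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = torch.nn.MSELoss()

        # Toy stand-in for the "known data" the model should learn to predict.
        inputs = torch.randn(64, 8)
        targets = torch.randn(64, 1)

        for step in range(1000):
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()   # back-propagation: computes how to tweak each parameter
            optimizer.step()  # apply the tweaks; nothing tells us *why* they help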

    One final clarification: it’s not a complete black box. We do have some understanding of how LLMs work, mostly at a high level - kind of like we have some basic understanding of how a brain works. We understand LLMs much better than brains, of course.


  • It’s not that nobody took the time to understand. Researchers have been trying to “un-blackbox” neural networks pretty much for as long as they’ve been around. It’s just an extremely complex problem.

    Logistic regression (which is like a neural network with just one node) is pretty well understood - but even then, it can sometimes learn pretty unintuitive coefficients, and it can be tricky to understand why.
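
    A toy illustration of that, assuming scikit-learn: two nearly identical features, and the learned weight gets split between them in a way that’s hard to read off the individual coefficients.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        x0 = rng.normal(size=500)
        x1 = x0 + rng.normal(scale=0.05, size=500)  # almost a copy of x0
        X = np.column_stack([x0, x1])
        y = (x0 > 0).astype(int)  # the label depends only on x0

        clf = LogisticRegression().fit(X, y)
        # The model predicts fine, but the weight is shared across both
        # correlated columns, so neither coefficient means what you'd expect.
        print(clf.coef_, clf.intercept_)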

    With LLMs - which are enormous by comparison - understanding how they work in detail simply isn’t a tractable problem.



  • Frankly, I think someone should actually do that. Except maybe use an open-source model instead of ChatGPT.

    The fact is, in a federated setting all this data will be accessible. For example, if lemmy tried to hide who made each vote and only federated totals, my malicious instance could report 1M upvotes for my post.
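
    A hypothetical sketch of the difference (made-up structures, not lemmy’s actual federation format):

        # Strategy A: federate individual votes - a receiving instance can
        # audit each one (does the actor exist? did they vote twice?).
        votes = [
            {"actor": "alice@example.social", "post": 42, "score": +1},
            {"actor": "bob@other.instance",   "post": 42, "score": +1},
        ]
        total = sum(v["score"] for v in votes)  # recomputed locally, per vote

        # Strategy B: federate only a total - there is nothing to audit,
        # so a malicious instance can simply claim whatever it likes.
        claimed = {"post": 42, "upvotes": 1_000_000}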

    When lemmy tries to hide this data, all it does is instill a false sense of privacy in users. IMHO the best thing is to make all this de facto public data officially public, so everyone knows and can act accordingly.

    As for privacy, I’d say the best thing to do is keep your account anonymous.




  • I, personally, want things to be decentralized. I want to have 100+ technology communities that are all relevant. But for that to be practical, there needs to be a simple mechanism for people to follow the topic “technology” and get the content of all 100+ communities merged together (and then perhaps manually block the ones with bad moderation). Unless we have such a mechanism, we’ll end up with one main big technology community, and all the others will be secondary.
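
    As a rough sketch of what I mean - the community names and the fetch function are made up for illustration, not any real API:

        # Following the *topic* means following a set of communities at once.
        followed = {
            "technology@lemmy.world",
            "technology@lemmy.ml",
            "tech@beehaw.org",
        }
        blocked = {"tech@beehaw.org"}  # manually drop badly moderated ones

        def merged_feed(fetch_posts):
            """Merge the feeds of all followed-but-not-blocked communities."""
            posts = []
            for community in followed - blocked:
                posts.extend(fetch_posts(community))  # posts are dicts here
            return sorted(posts, key=lambda p: p["published"], reverse=True)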


  • I’m hoping for two features: let communities “follow” other communities, so one community’s content also shows up in the other; and let me group communities together in my personal feed, if they don’t want to follow each other for some reason. For now, I stay mostly on the home page, which aggregates everything - but I’d much prefer to browse by topic and still have some aggregation.