- cross-posted to:
- [email protected]
Translation: “We told everyone we could turn glorified autocomplete into artificial general intelligence and then they gave us a bunch of money for that, so now we actually have to try to deliver something and we’ve got no idea how.”
How about giving billions to those folks simulating the brains of small worms and fruit flies, so we can have a very slow “brain in a bottle” that will be equally useless?
You know what? Sure, fuck it, why not? I don’t even have a problem with OpenAI getting billions of dollars to do R&D on LLMs. They might actually turn out to have some practical applications, maybe.
My problem is that OpenAI basically stopped doing real R&D the moment ChatGPT became a product, because now all their money goes into their ridiculous backend server costs and putting increasingly silly layers of lipstick on a pig so that they can get one more round of investment funding.
AI is a really important area of technology to study, and I’m all in favour of giving money to the people actually studying it. But that sure as shit ain’t Sam Altman and his band of carnival barkers.
I mean this respectfully. The character Everett True is known as someone who tells the truth even when it’s not popular.
Being compared to Everett True is the greatest compliment I have ever been given, and an honour of which I am in no way worthy.
Carnival barkers 🤣
Predictable outcome for anyone not wallowing in wishful belief.
We’ve known this for a while. LLMs are a dead end: lots of companies have tried throwing more data at them, but it’s becoming clear that the differences between one model and the next are getting too small to notice, and none of that fixes the major underlying issue, which is that chat models keep spreading BS because they can’t differentiate between right and wrong.
And the thing is, the LLM architecture was already a huge breakthrough in the field. Now these companies are basically trying to come up with another one by (and that’s just my guess) throwing tons of cash at the problem and hoping for the best. I think that’s like trying to invent a building material that outperforms reinforced concrete in every respect: just because the first one was discovered by some guy doesn’t mean multi-billion-dollar companies can force something better into existence with all the money in the world.
So an infant technology is showing a glimmer of maturation?
Yeah, well Alibaba nearly matches (and sometimes beats) GPT-4 with a comparatively microscopic model you can run on a desktop. And they released a whole series of them. For free! With a tiny fraction of the GPUs any of the American trainers have.
Bigger is not better, but OpenAI has also just lost their creative edge, and all Altman’s talk about scaling up training with trillions of dollars is a massive con.
o1 is kind of a joke; CoT and reflection strategies have been known for a while. You can do it for free yourself, to an extent, and some models have tried to finetune this in: https://github.com/codelion/optillm
But one sad thing OpenAI has seemingly accomplished is to “salt” the open LLM space. There’s way less hacky experimentation going on than there used to be, which makes me sad, as many of that scene’s “old” innovations still run circles around OpenAI.
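The “do it for free yourself” bit above is real: a basic chain-of-thought plus reflection loop is just prompting. Here is a minimal sketch of that pattern, assuming a hypothetical `complete(prompt)` function that wraps whatever backend you run locally (llama.cpp, an OpenAI-compatible server, etc.) — the stub below just echoes its input so the sketch is self-contained.

```python
# Minimal chain-of-thought + reflection loop. `complete` is a stand-in
# for a real model call; swap in your own local LLM backend.

def complete(prompt: str) -> str:
    """Stub model call: echoes the prompt. Replace with a real backend."""
    return "draft answer to: " + prompt


def answer_with_reflection(question: str, rounds: int = 2) -> str:
    # Step 1 (CoT): ask the model to reason step by step before answering.
    draft = complete(f"Think step by step, then answer:\n{question}")
    # Step 2 (reflection): feed the draft back and ask the model to
    # critique its own answer and produce a corrected version.
    for _ in range(rounds):
        draft = complete(
            f"Question:\n{question}\n\n"
            f"Draft answer:\n{draft}\n\n"
            "Critique the draft for mistakes, then give a corrected answer."
        )
    return draft
```

Tools like optillm (linked above) package up variations on exactly this kind of inference-time strategy.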
… “Alibaba (LLM)” … is it this one?
Qwen2.5: A Party of Foundation Models!
https://qwenlm.github.io/blog/qwen2.5/

BTW, as I wrote that post, Qwen 32B Coder came out.
Now a single 3090 can beat GPT-4o, and do it way faster! In coding, specifically.
Great news 😁🥂, someone should make a new post on this!
Yep.
32B fits on a “consumer” 3090, and I use it every day.
72B will fit neatly on 2025 APUs, though we may have an even better update by then.
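For anyone wondering why a 32B model fits on a 24 GB 3090 at all, here is a rough back-of-envelope estimate. The `overhead` factor is a crude assumption for KV cache and activations, not a precise figure; real usage depends on quantization scheme and context length.

```python
def model_vram_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB: parameter count (in billions) times
    bits-per-weight / 8 bytes, plus ~20% assumed overhead for the
    KV cache and activations."""
    return params_b * bits / 8 * overhead


# A 32B model quantized to ~4 bits per weight: roughly 19 GB,
# which squeezes inside a 3090's 24 GB.
print(round(model_vram_gb(32, 4), 1))   # ≈ 19.2

# The same model at fp16 would need roughly 77 GB — hopeless on one card.
print(round(model_vram_gb(32, 16), 1))  # ≈ 76.8
```

The same arithmetic explains the 72B remark: at ~4 bits that is around 43 GB, which is out of reach for a single 24 GB GPU but plausible on a unified-memory APU.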
I’ve been using local LLMs for a while, but Qwen 2.5, specifically 32B and up, really feels like an inflection point to me.