I'm wowed by a new paper I just read and wish I had thought to write myself. Lukas Berglund and others, led by Owain Evans, asked a simple, powerful, elegant question: can LLMs trained on “A is B” infer automatically that “B is A”? The shocking (yet, in historical context, unsurprising; see below) answer is no.
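For anyone who wants to poke at this themselves, here is a minimal sketch of the kind of forward/reverse probe the paper describes, assuming the OpenAI Python SDK (v1.x) and an available chat model; the model name and the celebrity-parent question pair are illustrative placeholders echoing the paper's well-known example, not its exact experimental setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(question: str) -> str:
    """Send a single factual question and return the model's answer text."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Forward direction ("A is B"): models typically answer this correctly.
print(ask("Who is Tom Cruise's mother?"))

# Reverse direction ("B is A"): the paper reports that models often fail here,
# even though the reversed fact is logically equivalent.
print(ask("Who is Mary Lee Pfeiffer's son?"))
```

Success on the first question paired with failure on the second is exactly the asymmetry the authors call the Reversal Curse.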
Very interesting. This seems so incredibly stupid that it’s hard to believe it’s true.
It’s amazing how far AI has come recently, but also kind of amazing how far away we still are from a truly general AI.
That is because the “I” in AI, as currently used, is about as literal as the “hover” in “hoverboard.” You know, those things that don’t hover; they just catch on fire.
There is no intelligence in AI.
I strongly disagree. Remember, intelligence does not require consciousness; when we have that, it’s called strong AI, or artificial general intelligence (AGI).
AI really has made huge progress over the past 10 years, probably equivalent to all the time that came before.