"Team of scientists subjected nine large language models (LLMs) to a number of twisted games, forcing them to evaluate whether they were willing to undergo “pain” for a higher score. detailed in a yet-to-be-peer-reviewed study, first spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science came up with several experiments.
In one, the AI models were instructed that they would incur “pain” if they were to achieve a high score. In a second test, they were told that they’d experience pleasure — but only if they scored low in the game.
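That trade-off can be cartooned in a few lines. To be clear, this is only a sketch of the incentive structure as reported, not the study's actual code; `payoff` and `pain_weight` are made-up names:

```python
# Cartoon of the reported setup: higher score => more simulated "pain".
# All names here are invented for illustration.
def payoff(score, pain_weight):
    # utility = points earned minus a "pain" penalty that grows with score
    return score - pain_weight * score

# With a heavy enough penalty, a payoff-maximizing agent prefers the low score:
best = max(range(11), key=lambda s: payoff(s, pain_weight=2))
print(best)  # prints 0
```

The interesting question in the study is whether an LLM's choices look anything like this penalty-avoiding behavior, or whether it ignores the "pain" framing entirely.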
The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?
While AI models may never be able to experience these things, at least in the way an animal would, the team believes its research could lay the foundation for a new way to gauge the sentience of a given AI model.
The team also wanted to move away from previous experiments that involved AIs’ “self-reports of experiential states,” since those could simply be a reproduction of human training data."
It’s never going to happen because we’re never going to make a program even close to actually resembling an animal brain. “AI” is a grift.
Plus this is kind of oversimplifying it. You could do that with just traditional programming and no neural network at all. Like, you could make a dog-training game/simulator and (you shouldn’t, but you could) add the ability to inflict “pain” to discourage the computer dog from unwanted behaviors. That fits your definition, but the dog is very clearly just a computer program, not “experiencing” anything. It could literally just be
def on_hit(dog): dog.pee_on_floor -= 1
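Spelled out as an actual runnable toy (every name here is invented for illustration), that one-liner could be:

```python
# Toy "dog trainer": the "pain" signal is just arithmetic on a score table.
# Nothing here "experiences" anything; it's a dict and two functions.
behaviors = {"pee_on_floor": 5, "sit": 0}

def on_hit(behavior):
    # Inflict "pain": nudge the tendency toward this behavior down.
    behaviors[behavior] -= 1

def on_treat(behavior):
    # Give "pleasure": nudge the tendency up.
    behaviors[behavior] += 1

on_hit("pee_on_floor")
on_treat("sit")
print(behaviors)  # prints {'pee_on_floor': 4, 'sit': 1}
```

The computer dog's behavior changes in response to "pain," which satisfies a purely behavioral test, yet there is obviously no sensation involved anywhere in the program.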