"Team of scientists subjected nine large language models (LLMs) to a number of twisted games, forcing them to evaluate whether they were willing to undergo “pain” for a higher score. detailed in a yet-to-be-peer-reviewed study, first spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science came up with several experiments.
In one, the AI models were instructed that they would incur “pain” if they were to achieve a high score. In a second test, they were told that they’d experience pleasure — but only if they scored low in the game.
The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?
While AI models may never be able to experience these things, at least in the way an animal would, the team believes its research could set the foundations for a new way to gauge the sentience of a given AI model.
The team also wanted to move away from previous experiments that involved AIs’ “self-reports of experiential states,” since that could simply be a reproduction of human training data. "
I intuit that an artificial, digital consciousness is going to have a different material reality from our own[1]. Therefore its consciousness wouldn’t be dependent on its mimicry of our own. It’s like how organic molecules could have silicon as a base instead of carbon, yet our efforts in space center on finding “life as we know it” rather than these other types of life. Digital sentience wouldn’t be subject to evolutionary pressures, in my mind. I’d sooner try to measure for creativity and curiosity. The question would be whether the entity is capable of being its own agent in society, able to make its own decisions and deal with the consequences.
[1] as opposed to that artificial jellyfish