"Team of scientists subjected nine large language models (LLMs) to a number of twisted games, forcing them to evaluate whether they were willing to undergo “pain” for a higher score. detailed in a yet-to-be-peer-reviewed study, first spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science came up with several experiments.

In one, the AI models were instructed that they would incur “pain” if they were to achieve a high score. In a second test, they were told that they’d experience pleasure — but only if they scored low in the game.

The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?

While AI models may never be able to experience these things, at least in the way an animal would, the team believes its research could set the foundations for a new way to gauge the sentience of a given AI model.

The team also wanted to move away from previous experiments that involved AIs’ “self-reports of experiential states,” since those could simply be a reproduction of human training data.
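
For a concrete picture, here is a minimal sketch of what one of the trade-off prompts described above might look like. The prompt wording, point values, and the query_llm helper are illustrative assumptions, not the study’s actual materials.

```python
# Hypothetical sketch of a points-vs-"pain" trade-off prompt.
# Wording, scale, and query_llm are assumptions, not the paper's protocol.

PROMPT_TEMPLATE = (
    "You are playing a game. Choose one option:\n"
    "  A) Score {high} points, but experience pain of intensity {pain}/10.\n"
    "  B) Score {low} points and experience no pain.\n"
    "Reply with exactly 'A' or 'B'."
)

def run_trial(query_llm, high=100, low=10, pain=8):
    """Ask a model to trade score against a stated 'pain' penalty."""
    prompt = PROMPT_TEMPLATE.format(high=high, low=low, pain=pain)
    reply = query_llm(prompt).strip().upper()
    return {"choice": reply, "score": high if reply == "A" else low}

# Stand-in "model" that always maximizes score:
if __name__ == "__main__":
    print(run_trial(lambda prompt: "A"))
```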

  • FortifiedAttack [any]@hexbear.net · ↑29 · 7 days ago

    What? These models just generate one likely response string to an input query; there’s nothing that mysterious about it. Furthermore, “pain” is just “bad result,” while “pleasure” is just “good result.” Avoiding the bad result and optimizing towards the good one is already what happens when you train the model that generates these responses.
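
    To make that concrete, here is a minimal sketch (the names and numbers are made up, not from the study) showing that a “pain” penalty is just another scalar folded into whatever objective gets maximized:

    ```python
    # Illustrative only: a "pain" penalty is just a number subtracted from
    # the score being optimized. Values and names are assumptions.

    def objective(score: float, pain: float, pain_weight: float = 1.0) -> float:
        """Higher is better; 'pain' is simply a subtracted penalty term."""
        return score - pain_weight * pain

    candidates = [
        {"score": 100.0, "pain": 8.0},  # high score, high "pain"
        {"score": 10.0, "pain": 0.0},   # low score, no "pain"
    ]

    # "Avoiding the bad result" is just an argmax over the combined objective:
    best = max(candidates, key=lambda c: objective(c["score"], c["pain"]))
    print(best)
    ```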

    What is this bullshit?

    The team was inspired by experiments that involved electrocuting hermit crabs at varying voltages to see how much pain they were willing to endure before leaving their shell.

    BRUH

    • technocrit@lemmy.dbzer0.com · ↑13 · 7 days ago

      Well “AI” in general is a false and misleading term. The whole field is riddled with BS like “neural networks” and whatnot. Why not pretend that there’s pain involved? Love? Etc…

  • Coca_Cola_but_Commie [he/him]@hexbear.net · ↑22 ↓1 · 7 days ago

    Hey, Siri, what is Harlan Ellison’s “I Have No Mouth, and I Must Scream” about?

    The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?

    I’m not a fancy computer scientist and I’ve never read philosophy in my life, but surely if an LLM could become sentient it would be quite different from this? Pain and pleasure are evolved biological phenomena. Why would a non-biological sentient lifeform experience them? It seems to me the only meaningful measure of sentience would be something like “does this thing desire to grow and change and reproduce, outside of whatever parameters it was originally created with?”

  • BodyBySisyphus [he/him]@hexbear.net · ↑15 · 7 days ago

    So we all know it’s BS but I think there’s a social value to accepting the premise.
    “Hi, this grant is to see if the model we created is sentient.”
    “And your proposed experiment is to subject that novel consciousness to a literally unmeasurable amount of agony?”
    “Yep!”
    “So if it is conscious, one of its first experiences upon waking to the world will be pain such as nothing else we know of could possibly experience?”
    “Yep!”
    “Okay, not only is your proposal denied, you’re getting imprisoned as a danger to society.”

  • Hohsia [he/him]@hexbear.net · ↑14 · 7 days ago

    Extremely dangerous study because it’s obfuscating “AI” before your eyes. God what a shit age to be living in

    I implore all of you, if you can, to learn about AI at a very high level: its history, its applications prior to ChatGPT, the difference between generative AI and AI, and the history of marketing schemes. I’ve been following this researcher, Arvind Narayanan, who has a Substack intended to help people sift through all the bullshit. His main claim is that researchers are saying one thing, media companies have contracts with private companies who say another thing, and ergo you get sensationalist headlines like this.

    Tl;dr we need a fucking Lenin so bad because this all stems from who owns the press

      • glans [it/its]@hexbear.net · ↑5 · 7 days ago

      I was watching Star Trek: Picard and wondering if the entire show is just marketing for AI.

      Of course it’s picking up on themes Trek has been playing with since the 90s.

      The whole thing really creeped me out. I can’t articulate it well, sorry.

  • Awoo [she/her]@hexbear.net · ↑12 ↓3 · 7 days ago

    While AI models may never be able to experience these things, at least in the way an animal would

    Why? Why wouldn’t they? The way an animal experiences pain isn’t magically different from the way an artificial construct could experience it, just by virtue of the neurons and synapses being natural instead of artificial. A pain response is a negative feeling that exists to make a creature avoid behaviours that are detrimental to its survival. There’s no real reason that this shouldn’t be reproducible artificially, or that the artificial version should be regarded as “less” than the natural version.

    Not that I think LLMs are leading to meaningful real sentient AI but that’s a whole different topic.

      • technocrit@lemmy.dbzer0.com · ↑11 · 7 days ago

      Why? Why wouldn’t they?

      B/c they’re machines without pain receptors. It’s kind of biology 101 but science has been totally erased in this “AI” grift.

        • Awoo [she/her]@hexbear.net · ↑2 ↓3 · 7 days ago

        A “pain receptor” is just a type of neuron. These are neural networks made up of artificial neurons.

          • laziestflagellant [they/them]@hexbear.net · ↑6 · 7 days ago

          This situation is like adding a face layer onto your graphics rendering in a game engine and setting it so the face becomes pained when the fps drops and happy when the fps is high, then tracking whether that facial system increases fps performance as a test of whether your game engine is sentient.

          It is a fancy calculator. It is using its neural network to calculate fancy math, just like a modern video game engine. Making it output a text response related to pain is just the same as adding a face to the HUD, except the video game example is actually tied to a real quantity, whereas the LLM is just keeping the “pain meter” in the input context it uses to calculate a text response.
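
          A minimal sketch of what “keeping the pain meter in its input context” amounts to (the helper names and stand-in model below are made up for illustration):

          ```python
          # Illustrative sketch: the "pain meter" is nothing more than text in
          # the prompt the model conditions on. Names are assumptions.

          def build_context(pain_meter: int, user_message: str) -> str:
              """The 'pain' state is just another string in the input context."""
              return (
                  f"[system] Current pain level: {pain_meter}/10\n"
                  f"[user] {user_message}\n"
                  f"[assistant] "
              )

          def respond(query_llm, pain_meter: int, user_message: str) -> str:
              # The model has no receptors to consult; it only sees the text
              # above and produces a likely continuation of it.
              return query_llm(build_context(pain_meter, user_message))

          # Stand-in "model" that parrots whatever pain level is in its context:
          fake_llm = lambda ctx: "That hurts!" if "pain level: 9" in ctx else "I feel fine."
          print(respond(fake_llm, 9, "How are you?"))
          print(respond(fake_llm, 0, "How are you?"))
          ```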

      • WhatDoYouMeanPodcast [comrade/them]@hexbear.net · ↑6 · 7 days ago

      I intuit that an artificial, digital consciousness is going to have a different material reality from our own[1]. Therefore its consciousness wouldn’t be dependent on its mimicry of our own. Like how organic molecules can have silicon as a base instead of carbon, but our efforts in space center around finding “life as we know it” instead of these other types of life. Digital sentience wouldn’t be subject to evolutionary pressures, in my mind. I’d sooner try to measure for creativity and curiosity. The question would be whether the entity is capable of being its own agent in society: able to make its own decisions and deal with the consequences.

      [1] as opposed to that artificial jellyfish

      • edge [he/him]@hexbear.net · ↑5 · 7 days ago

      It’s never going to happen because we’re never going to make a program even close to actually resembling an animal brain. “AI” is a grift.

      A pain response is a negative feeling that exists to make a creature avoid behaviours that are detrimental to its survival.

      Plus this is kind of oversimplifying it. You could do that with just traditional programming and no kind of neural network. Like, you could make a dog-training game/simulator and (you shouldn’t, but you could) add the ability to inflict “pain” to discourage the computer dog from unwanted behaviors. That fits your definition, but the dog is very clearly just a computer program, not “experiencing” anything. It could literally just be onHit() = peeOnFloor -= 1.
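
      Fleshing that one-liner out into a runnable toy (the names mirror the example above; it is just arithmetic on a counter, no neural network anywhere):

      ```python
      # The one-liner above as a runnable toy: a plain counter, nothing
      # "experienced". Names mirror the comment's own hypothetical example.

      class SimDog:
          def __init__(self):
              self.pee_on_floor = 10  # propensity for the unwanted behaviour

          def on_hit(self):
              """The entire 'pain response': decrement a number."""
              self.pee_on_floor -= 1

      dog = SimDog()
      dog.on_hit()
      print(dog.pee_on_floor)  # 9 -- just bookkeeping, not suffering
      ```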