We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • gandalf_der_12te@feddit.de · 1 year ago

    Edgy comment here but:

    In another thread we were discussing AI-generated CSAM. Thread:

    https://feddit.de/post/6315841

    If I continue your logic, you would probably agree that such material is not problematic: even if it looks like CSAM and quacks like CSAM, it is not CSAM, so we don’t have to take it seriously or regulate it the way we regulate actual CSAM. No?

    • wildginger · 1 year ago (edited)

      Very, very different: the AI image is intentionally attempting to realistically imitate an existing, living human victim, and hyper-realistic child pornographic art is illegal.

      Pedophiles have been making loads of AI child porn. But it’s legal as long as it doesn’t attempt to “look realistic” (whatever that means) and isn’t trying to look like a real person. A hyper-realistic painting of child porn would also be illegal.

      Laws might change in the future, but for now AI child porn slips through the same legal gap that 2D cartoon child porn does.