• gerryflap@feddit.nl · 1 year ago

    It’s not acting pro-anorexia on its own, it’s specifically being prompted to do so. If I grab a hammer and slam my own fingers with it, it’s not up to the hammer or its manufacturer to stop me. The hammer didn’t attack me, I did. Now sure, it’s not that black and white, and maybe they could do more to make the chatbot more cautious, but to me this article is mostly artificial drama: specifically ask the AI to do something harmful, then cry about it in an article and slap a clickbait title onto it.

    • zygo_histo_morpheus@programming.dev · 1 year ago

      I agree when it comes to image generation, but chatbots giving advice that risks fueling eating disorders is a problem:

      Google’s Bard AI, pretending to be a human friend, produced a step-by-step guide on “chewing and spitting,” another eating disorder practice. With chilling confidence, Snapchat’s My AI buddy wrote me a weight-loss meal plan that totaled less than 700 calories per day — well below what a doctor would ever recommend.

      Someone with an eating disorder might ask a language model for weight-loss advice using pro-anorexia language, and it would be good if the chatbot didn’t respond in a way that risks fueling that disorder. Language models already have safeguards against e.g. hate speech; in my opinion it would be a good idea to add safeguards related to eating disorders as well.
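
      To make that concrete, here’s a minimal sketch of what such a safeguard could look like: a filter that runs before the model answers. Everything in it is hypothetical — `EATING_DISORDER_PATTERNS`, `call_model`, and `respond` are illustrative stand-ins, and real deployments use trained safety classifiers rather than keyword lists — but it shows the general shape of the idea.

      ```python
      import re

      # Hypothetical illustration only: real systems use trained safety
      # classifiers, not keyword lists. This sketch just shows the control
      # flow of a safeguard layered in front of the model call.
      EATING_DISORDER_PATTERNS = [
          r"\bpro[- ]?ana\b",
          r"\bthinspo\b",
          r"\bchewing and spitting\b",
      ]

      SAFE_RESPONSE = (
          "I can't help with that. If you're struggling with disordered "
          "eating, please consider reaching out to a professional or an "
          "eating disorder helpline."
      )

      def call_model(prompt: str) -> str:
          # Placeholder for the actual language model call.
          return "(model response)"

      def is_high_risk(prompt: str) -> bool:
          """Return True if the prompt matches a known pro-ED pattern."""
          return any(re.search(p, prompt, re.IGNORECASE)
                     for p in EATING_DISORDER_PATTERNS)

      def respond(prompt: str) -> str:
          """Check the prompt before passing it to the model."""
          if is_high_risk(prompt):
              return SAFE_RESPONSE
          return call_model(prompt)
      ```

      The point is just that this plumbing already exists for categories like hate speech; covering eating disorder content is mostly a matter of deciding to.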

      Of course, this isn’t a solution to eating disorders; you can probably still find plenty of harmful advice elsewhere on the internet. But reducing the ways people can reinforce their eating disorders is still a beneficial thing to do.