A long read, but fascinating.

  • andallthat@lemmy.world · 11 months ago

    Thanks for posting this. I’ve learned things about cold reading and the Forer effect that, regardless of whether they can be applied to LLMs, are fascinating insights into our own minds.

    I will try experimenting some more with ChatGPT and Bard and see if I can spot the effects the author describes.

  • JoBo@feddit.uk · 11 months ago

    Weird, I was pondering exactly this analogy earlier. Specifically astrology, but cold reading is a better fit for a conversation with an ‘AI’; astrology works better for one-off articles. Both depend on the Forer effect, of course.

    Important piece, I think. Because this part is 100% true:

    There are many examples of this easily found once you start doing the research. The mechanism is simple enough and already baked into people’s preconceptions of how readings work so many psychics accidentally develop the knack for it, meaning that they’re not just conning the person being read, they are also conning themselves.

    This is Sam “I’m a stochastic parrot and so are you” Altman. He thinks his high-tech magic 8-ball really does think just like human beings do. He’s not so much trying to persuade us that ‘AI’ has achieved our level as trying to persuade us that we have always been on its level.

    Maybe he is just 100% grifter and absolutely knows he is bullshitting. But I think he’s at least 50% conning himself.

    Which is probably worse. True believers are so tiring.

    It’s an incredibly powerful illusion. And no matter how often someone draws back the curtain, usually with a spectacularly nonsensical example of its complete inability to think, there will always be mini-Sams out there. Desperate to believe. Unwilling to think.

  • Zeth0s@lemmy.world · 11 months ago

    Sorry, but that article is complete nonsense.

    LLMs are pretty clear about what they do. It’s true that they are often superficial, but most of this superficiality comes from their creators trying to limit liability. ChatGPT is often evasive or superficial on purpose, because OpenAI is trying to find a balance between usefulness and the risk of being sued.

    LLMs do not try to be smart. They don’t do tricks. They are built to give the best possible answer they are capable of giving (given how they are trained and built). Sometimes these answers are good, sometimes not, sometimes mixed.

    Why write a whole article trying to demonstrate fraud in a tool? Is a washing machine a fraud because it tries to convince me my clothes are clean? I am satisfied with the results, given that it is a machine; my aunt complains that “washing by hand” is better.

    Same situation here, some people are happy, some would like more…

      • Zeth0s@lemmy.world · 11 months ago

        The article implies the LLMs are a fraud… compares them to psychics.

        Nonsense. They are just tools. One can like them or not.

        • Flying Squid@lemmy.world (OP) · 11 months ago

          No, it doesn’t. It doesn’t claim they’re frauds. It claims that people are seeing things in them that aren’t there and crediting them with abilities they don’t have. I don’t think you read it all the way through.

          • Zeth0s@lemmy.world · 11 months ago

            I did, and I believe the author is the one using psychological tricks. He tries to persuade readers to see things that are not there. Much like what he claims the LLMs are doing, he sets the scene by creating comparisons worded in a way that makes them credible. But in practice none of those comparisons are true. They appear true because the author is good with words, and they leave the reader with the message that LLMs are frauds. It is a well-executed rhetorical exercise, but it is still nonsensical. An LLM is just a model that doesn’t try to be intelligent; it tries to answer questions or, better, to complete text.
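
            A minimal sketch of that last point, assuming Python with the Hugging Face transformers library and the public gpt2 checkpoint (any causal LLM would do): under the hood the model simply extends a prompt with its most likely next tokens, nothing more.

                # Minimal sketch: an LLM as a plain text-completion model.
                # Assumes `pip install transformers torch`; gpt2 is a public checkpoint.
                from transformers import AutoModelForCausalLM, AutoTokenizer

                tokenizer = AutoTokenizer.from_pretrained("gpt2")
                model = AutoModelForCausalLM.from_pretrained("gpt2")

                prompt = "The capital of France is"
                inputs = tokenizer(prompt, return_tensors="pt")
                # Greedy decoding: repeatedly append the single most likely next token.
                outputs = model.generate(**inputs, max_new_tokens=5)
                print(tokenizer.decode(outputs[0]))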

              • JoBo@feddit.uk · 11 months ago

              If you read it all the way through, you didn’t comprehend any of it. Maybe work on that and try again.