• superkret@feddit.org · 5 minutes ago

    It’s an article about a reddit post with a screenshot as the only proof.
    I’ve faked AI answer screenshots like that in less than 2 minutes for a meme.

  • Vibi@lemmy.blahaj.zone · 2 hours ago

    “It could be that Gemini was unsettled by the user’s research about elder abuse, or simply tired of doing its homework.”

    That’s… not how these work. Even if they were capable of feeling unsettled, that’s kind of a huge leap from a true-or-false question.

  • meyotch@slrpnk.net · 2 hours ago

    I suspect it may be due to a similar habit I have when chatting with a corporate AI. I will intentionally salt my inputs with random profanity or non sequitur info, partly for lulz, but also to poison those pieces of shit’s training data.

    • catloaf@lemm.ee · 2 hours ago

      I don’t think they add user input to their training data like that.

      • kitnaht@lemmy.world · 1 hour ago

        They don’t. The models are trained on sanitized data and don’t permanently “learn” from your chats. They have a large context window to pull from (reaching 200k ‘tokens’ in some instances), but lots of people misunderstand how this stuff works on a fundamental level.
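
        To make the “context window, not memory” point concrete, here is a minimal sketch (not any particular vendor’s API; the tiktoken tokenizer and the hypothetical build_prompt helper are just illustrative): the client resends the whole conversation every turn and trims it to a fixed token budget, because the model itself retains nothing between calls.

        ```python
        # Illustrative only: shows why an LLM "forgets". There is no learning,
        # just a rolling window of tokens the client sends with every request.
        import tiktoken  # OpenAI's open-source tokenizer library

        enc = tiktoken.get_encoding("cl100k_base")
        CONTEXT_WINDOW = 200_000  # token budget; varies by model

        def build_prompt(history: list[str], new_message: str) -> str:
            """Concatenate past turns plus the new one, dropping the oldest
            turns once the total token count exceeds the context window."""
            turns = history + [new_message]
            while sum(len(enc.encode(t)) for t in turns) > CONTEXT_WINDOW and len(turns) > 1:
                turns.pop(0)  # dropped turns are simply gone; nothing was "learned"
            return "\n".join(turns)
        ```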

  • Gointhefridge@lemm.ee · 3 hours ago

    I’m still really struggling to see an actual formidable use case for AI outside of computation and aiding in scientific research. Stop being lazy and write stuff. Why are we trying to give up everything that makes us human by offloading it to a machine?

    • superkret@feddit.org · 3 minutes ago

      It’s good for speech-to-text, translation, and as a starting point for a “tip-of-my-tongue” search, where the search term is the thing you’re actually missing.
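
      The speech-to-text piece, for instance, is close to a one-liner these days. A minimal sketch, assuming the Hugging Face `transformers` library and the openly hosted Whisper checkpoint; the audio file name is made up:

      ```python
      # Minimal example: transcribe an audio file with an open speech model.
      from transformers import pipeline

      asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
      print(asr("voice_memo.mp3")["text"])  # hypothetical local recording
      ```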

    • deegeese@sopuli.xyz · 3 hours ago

      AI summaries of larger bodies of text work pretty well so long as the source text itself is not slop.
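
      As a rough sketch of that summarization use case (assuming the Hugging Face `transformers` library; the checkpoint, file name, and length limits are illustrative, and a genuinely long document would need to be chunked first):

      ```python
      # Minimal example: one-call abstractive summarization of a text file.
      from transformers import pipeline

      summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

      article = open("long_report.txt").read()[:4000]  # hypothetical input, crudely truncated
      summary = summarizer(article, max_length=130, min_length=30, do_sample=False)
      print(summary[0]["summary_text"])
      ```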

      Predictive text entry is a handy time saver so long as a human stays in the driver’s seat.

      Neither of these justifies current levels of hype.

      • kitnaht@lemmy.world · 1 hour ago (edited)

        Go look at the models available on huggingface.

        There are applications in visual question answering, video-to-text, depth estimation, 3D reconstruction from a single photo, object detection, visual classification, language-to-language translation, realistic text-to-speech, reinforcement learning for robotics, and weather forecasting, and those are just the surface-level models.
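
        Many of those tasks are a few lines with the `transformers` pipeline API. A sketch; the DETR checkpoint is one publicly hosted object-detection model, not a recommendation, and the image file name is made up:

        ```python
        # Minimal example: off-the-shelf object detection with a hosted model.
        from transformers import pipeline

        detector = pipeline("object-detection", model="facebook/detr-resnet-50")
        for hit in detector("street_scene.jpg"):   # hypothetical local image
            print(hit["label"], round(hit["score"], 2), hit["box"])
        ```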

        It absolutely justifies current levels of hype, because the research being done now will put millions of people out of jobs and will be much cheaper than paying humans to do the same work.

        The people saying it’s hype are the same people who said the internet was a fad. Did we have a bubble of bullshit? Absolutely. But there is valid reason for the hype, and we will filter out the useless stuff eventually. It’s already changed entire industries practically overnight.

        • Mbourgon everywhere@lemmy.world · 9 minutes ago

          I think he’s talking about LLMs, which… yeah. AI and LLMs get lumped together (which makes sense, but the classification makes a huge difference here).

        • chrash0@lemmy.world · 14 minutes ago

          the reactionary opinions are almost hilarious. they’re like “ha, this AI is so dumb it can’t even do complex systems analysis! what a waste of time” when 5 years ago text generation was laughably unusable and AI-generated images were all dog noses and birds.

    • CubitOom@infosec.pub · 2 hours ago

      It can be really good for text-to-speech and speech-to-text applications for disabled people or people with learning disabilities.

      However, it gets really funny and weird when it tries to read advanced mathematical formulas.

      I have also heard decent arguments for translation, although in most cases it would still be better to learn the language or use a professional translator.

    • five82@lemmy.world · 2 hours ago

      The relentless pursuit of capitalism and reduced labor costs. I still don’t think anyone knows how effective it’s going to be at this point. But companies are investing billions to find out.

    • bloup@lemmy.sdf.org · 2 hours ago (edited)

      I don’t use it for writing directly, but I do like to use it for worldbuilding. Because a general concept can be explored in so many different ways, it’s nice to be able to just hand one to an LLM and ask it to consider all the ways such an idea could play out. It also kind of doubles as a test, because I usually have some idea of what I’d like, and if the model comes up with something similar on its own, that makes me feel it would easily resonate with people. Additionally, it will often come up with things I hadn’t considered that are totally worth exploring. But I do agree that the only, as you say, “formidable” use case for this stuff at the moment is as a research assistant for serious intellectual pursuits.

    • chakan2@lemmy.world · 2 hours ago

      “I’m still really struggling to see an actual formidable use case”

      It’s an excellent replacement for middle management blather. Content that has no backing in data or science but needs to sound important.

  • Ceedoestrees@lemmy.world · 27 minutes ago

    The war with AI didn’t start with a gunshot, a bomb, or a blow; it started with a Reddit comment.