I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”

This article, in contrast, quotes the folks building the next generation of AI, who are saying the same thing.

  • makyo@lemmy.world · 2 months ago

    I feel like people are using those terms pretty well interchangeably lately anyway

    • Greg Clarke@lemmy.ca · 2 months ago

      People who don’t understand those terms are using them interchangeably

      • Buffalox@lemmy.world · 2 months ago

        LLM is the technology; a chatbot is an implementation of it. So yes, a chatbot as it’s talked about here is an LLM. Obviously chatbots don’t have to be LLM-based, but those that aren’t are irrelevant here.

        • Greg Clarke@lemmy.ca · 2 months ago

          No, a chatbot as it’s talked about here is not an LLM. This article discusses the limits of LLM training data and infers that chatbots cannot scale as a result. There are many other techniques that can be used to keep improving chatbots.

          • Buffalox@lemmy.world · 2 months ago

            The chatbot is a front end to an LLM; you are being needlessly pedantic. What the chatbot serves you is the result of LLM queries.

            • Greg Clarke@lemmy.ca · 2 months ago

              That may have been true for the early LLM chatbots, but not anymore. ChatGPT, for instance, now writes code to answer logical questions. The o1 models have background token usage because each response is actually the result of multiple background LLM responses.
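The multi-call pattern described in that last comment can be sketched as follows. This is a minimal illustration of the idea, not ChatGPT's or o1's actual pipeline; `fake_llm` is a hypothetical, deterministic stand-in for a real model API, and the draft/critique/rewrite stages are assumed for the example.

```python
def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM completion call."""
    return f"response to [{prompt}]"

def chatbot_reply(user_message: str) -> str:
    """One user-visible reply assembled from several hidden LLM queries."""
    draft = fake_llm(f"Draft an answer: {user_message}")
    critique = fake_llm(f"List flaws in this draft: {draft}")
    final = fake_llm(f"Rewrite the draft {draft} fixing: {critique}")
    return final  # only this last string ever reaches the user

print(chatbot_reply("Why is the sky blue?"))
```

The point of the sketch: the tokens billed per reply (three calls here) exceed the tokens the user sees, which is why the chatbot product cannot be equated with a single LLM invocation.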