• Nakoichi [they/them]@hexbear.netM
    1 year ago

    how much information a Brian or model can keep active

    Yeah fuck Brian.

    would likely be both more ethical and more effective than human leadership.

    Here’s where you’re wrong, and I have something you should listen to that explains why.

    https://soundcloud.com/thismachinekillspod/281-the-smoking-gun-of-techno-capitalism-ft-meredith-whittaker

    Here’s the specific article on why this is a utopian pipe dream and why the reality under capitalism is much different and much scarier.

    https://logicmag.io/supa-dupa-skies/origin-stories-plantations-computers-and-industrial-control/

    • conditional_soup@lemm.ee
      1 year ago

      Good response, and thanks for bringing receipts. I’d love to read this a little later. Imo, though, large language models and generative AI in particular represent the capacity to make the means of production free and open source. True, freely available models that you could run on a gaming computer don’t yet hold a candle to ChatGPT, but I suspect that will change as the emphasis in AI research pivots toward making models more efficient. It’s also true that if a general AI is developed, it’s not going to be FOSS, though that’s honestly not the worst idea.

      With respect to your article on Babbage, I’d like to point out that much of the leadership in AI right now has been leading with the idea that any AI must follow the three Hs: Honest, Harmless, and Helpful. I think it’s more than just hype, because they’re currently burning a lot of cash hiring teams whose whole job is to make sure we get the alignment of a potential superintelligence right (that is, constraining it with ethical values rather than letting it become a paperclip maximizer). To be quite frank, there are a lot of MBAs out there who could stand to pick up those three Hs.

      • combat_brandonism [they/them]@hexbear.net
        1 year ago

        Imo, though, large language models and generative AI in particular represent the capacity to make the means of production free and open source.

        I remember left-sympathetic cryptobros saying the same thing about cryptocurrencies for the last decade.

        • conditional_soup@lemm.ee
          1 year ago

          I really never saw the value proposition with crypto, besides it being digital cash.

          A key difference is that generative AI actually can and already does produce value as a means of production. Tons of folks use ChatGPT to save hours on their workflows; I’m a programmer and it’s probably saved me days of work, and I’m far from an edge case. Imo, the most telling thing is that a lot of the major AI companies are begging Congress to pull the ladder up behind them so that you can only develop AI if your market cap is at least this high; I think some of them are worried that decentralized, FOSS AIs will seriously erode their value propositions, and I think their suspicions are correct.