I personally don’t trust Snowden. Look at the difference between how he was treated and how Assange was treated: one guy gets to rot in prison, the other gets movies made about him by Hollywood. Snowden is very much controlled opposition in my mind.

But that being said, I think he’s being truthful about OpenAI. Which sucks, because I and many others love using ChatGPT.

It’s possible to self-host these things, though. I read an article about it here:

https://blog.lytix.co/posts/self-hosting-llama-3

But probably not worth the money for most people.

  • soulfirethewolf@lemdro.id · 13 days ago

    Unfortunately, it’s not so easy or useful if you don’t have a powerful machine to host it with.

    • ichbinjasokreativ@lemmy.world · 13 days ago

      Depends on the model. Dolphin-mistral is only about 4 GB and runs on any somewhat modern CPU with reasonable performance. Larger models should of course be run on higher-end GPUs at least, but even in hybrid mode (GPU+CPU), models like dolphin-mixtral (26 GB) run just fine. For reference, I have a 5800X and a 6900 XT, with ollama installed in a distrobox container.
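
      If you want to poke at a locally hosted model from code, ollama also exposes an HTTP API on localhost. Here’s a minimal sketch, assuming ollama is serving on its default port (11434) and you’ve already pulled dolphin-mistral; the function name is just for illustration:

      ```python
      # Minimal sketch: query a locally hosted model through ollama's HTTP API.
      # Assumes ollama is serving on its default port (11434) and that the
      # dolphin-mistral model has already been pulled.
      import requests

      def ask_local_model(prompt: str, model: str = "dolphin-mistral") -> str:
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={"model": model, "prompt": prompt, "stream": False},
              timeout=120,
          )
          resp.raise_for_status()
          # With stream=False, the full completion comes back in the "response" field.
          return resp.json()["response"]

      if __name__ == "__main__":
          print(ask_local_model("Why might someone want to self-host an LLM?"))
      ```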