I understand the benefits of cutting down sugar, but why not just binge on cake and ice cream?
Sounds like you don’t understand the benefits of running things locally, and specifically LLMs and other kinds of AI models.
So what are the benefits with respect to local LLMs in the context I described?
If you’re doing it locally, more sensitive queries become OK, because that data never leaves your computer…
Even when you’re not sending data that you consider sensitive, it’s still helping train their models (and you’re paying for it!).
Also, what’s not sensitive to one person might be extremely sensitive to another.
And something you run locally can, by definition, be used with no Internet connection (like writing code on a plane or in a train tunnel).
For me as a consultant, it means I can generally use an assistant without worrying about the LLM provider’s privacy policies, or about client policies on AI and third parties in general.
For me as an individual, it means I can query away at the model without worrying that every word I send will be used to build a profile of who I am, one that can later be exploited by ad companies and other adversaries.