- cross-posted to:
- [email protected]
Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.
Why it matters: The move comes as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”
This implies I ever had trust in them, which I didn’t. I’m sure others would agree.
The fact that some people are surprised by this finding really shows the disconnect between the tech community and the rest of the population.
And it's getting worse. I'm working on learning to write. I had never really used it for much… I heard other people going to it for literal plot points, which… no. Fuck you. But I had been feeding it sentences where I was iffy on the grammar. Literally just last night I asked ChatGPT something, and it completely ignored the part I WAS questioning and fed me absolute horse shit about another part of the paragraph. I honestly can't remember what, but even a first grader would be like 'that doesn't sound right…'
Up till then it had at least been useful for something that basic. Now it's not even good for that.
Try LanguageTool. Free, has browser plugins, actually made for checking grammar.
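LanguageTool also exposes a public HTTP API, so you can script checks instead of going through a chatbot. A minimal sketch (the `/v2/check` endpoint and its `text`/`language` fields are from LanguageTool's published API; the helper names and example sentence are my own):

```python
# Hypothetical sketch: query LanguageTool's public grammar-check API.
# Endpoint and form fields follow LanguageTool's documented /v2/check
# interface; function names here are made up for illustration.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.languagetool.org/v2/check"

def build_payload(text: str, language: str = "en-US") -> bytes:
    """URL-encode the form fields the /v2/check endpoint expects."""
    return urllib.parse.urlencode({"text": text, "language": language}).encode()

def check_grammar(text: str, language: str = "en-US") -> list:
    """Return the human-readable message for each issue found."""
    req = urllib.request.Request(API_URL, data=build_payload(text, language))
    with urllib.request.urlopen(req, timeout=10) as resp:
        matches = json.load(resp).get("matches", [])
    return [m["message"] for m in matches]

if __name__ == "__main__":
    # Live network call: needs internet access and is rate-limited
    # on the free public server.
    for issue in check_grammar("She go to school every days."):
        print(issue)
```

The free public server is rate-limited, but LanguageTool can also be self-hosted, in which case you'd just point `API_URL` at your own instance.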
This speaks to the kneejerk "shove everything through an AI" reflex instead of doing proper research, which, thanks to hallucination, is probably worse than just grabbing the first search result. No offence intended to @EdibleFriend, just observing that humans do so love to abdicate responsibility when given a chance…
I recently heard a story about a teacher who had their class get ChatGPT to write their essays, then had them fact-check those essays and come back with the results. Turns out, even when it cited sources, it was wrong something like 45% of the time, and it often made up things that weren't in the sources it was citing or that had no relevance to the source at all.
I guess those who just have to be on the bleeding edge of tech trust AI to some degree.
Never trusted it myself; I've lived through enough bubbles to recognize one forming, and AI is a bubble.