. . . NVIDIA in-house counsel Nikki Pope: In a panel on “governing” AI risk, she cited internal research that showed consumers trusted brands less when they used AI.
This gels with research published last December that found only around 25 percent of customers trust decisions made by AI over those made by people. One might think an executive with access to this data might not want to admit to using a product that would make people trust them less.
25% is a surprisingly large number considering that the current technology can't do the same things a human can. In my experience, current “AI” is mostly useful for very specific tasks with very narrow guidelines.
What’s interesting is the research showing that when humans don’t know the output was generated by an AI, they prefer it and trust it more than output from actual humans.
Indeed.