When ChatGPT launched two years ago, it thrust generative artificial intelligence (AI) into the media limelight. While many in business saw opportunities for increased efficiency, others voiced serious concerns.

Most Americans worry that AI will take their jobs. A recent Gallup poll found that nearly seven in 10 believe this will happen over the long term. What’s less well known is that people in the business of using AI are worried, too. The problem they see is multifaceted but ultimately boils down to one thing: trust.

“The whole theme is around trust,” says Raj Venkatesan, professor of business administration at the University of Virginia’s Darden School of Business. As a marketing expert, he, like his colleagues, sees consumer adoption as critical for this new technology to take off. For that to happen, trust is essential.

“Just because the product is good doesn’t mean it sells,” he says.

Venkatesan highlights that ChatGPT hasn’t yet been fully embraced by the mass market. “You saw a lot of usage when ChatGPT came out,” he says. The first wave of adopters were innovators who were tech-savvy and comfortable taking risks. “But the percent of the world using it is still within the far smaller, educated world.” In other words, it hasn’t taken off across the board.

Part of the hesitation is due to a lack of trust.


Omar Garriott, executive director at the Batten Institute for Entrepreneurship, Innovation and Technology, says trust in tech and AI has dropped. Eight years ago, nine out of 10 people trusted the technology business; that figure has since fallen to 76%, according to data he cites from the Edelman Trust Barometer. Trust in AI is even lower, at just 50%.

Garriott says the tech sector as a whole, including the AI subsector, needs to be proactive about regaining trust. Doing so is not a nice-to-have, he says; it is essential.

Another troubling issue arises when people take every mistake by one AI system as proof that all AI is bad, says Luca Cian, professor of marketing at UVA Darden. His research found that when AI systems make mistakes while providing government services, citizens are more likely to lose trust in other AI systems, even those run by different agencies. This “algorithmic transference” effect doesn’t occur to the same degree with human errors.

"When people learn about an AI system making mistakes in one government agency, they become more hesitant to interact with AI systems in other agencies because they believe the same mistakes will occur again," he says.

It’s worth mentioning that the public presumption in this case is false. “AI used by different states is very different,” Cian says. In that example, trust disappears because of a lack of understanding among the public.

Another area where AI is being embraced but also causing problems is the news business. Many publishers are now using AI to help draft stories, as it’s cheaper, Cian says. 

“What we found is that people find news written by AI to be less accurate,” he says. Both real and fake news were viewed as less credible when people thought AI wrote it. 

“When most articles are seen as less accurate, then you lose trust in the newspaper as well,” he says.

There are also strategic considerations about when to advertise the use of AI to consumers and when not to. Cian points out that consumers sometimes demand a human touch in recommendations for certain goods and services, particularly items that are “hedonic,” or associated with the senses. When a movie streaming service suggests something to watch, it will likely not mention AI; instead, the framing is more along the lines of “people similar to you liked these movies.”

For products that are solely utilitarian, AI can work well. “For utilitarian items like automobile lubricant, AI is perfect,” Cian says. “The same applies to utilitarian, or in other words, healthy food; but for a cake, you might want to avoid mentioning AI.”

The rapid pace of change in AI regulation, along with consumers’ limited understanding of the technology, makes it nearly impossible to keep up, Garriott says. That’s at least part of the reason he, Venkatesan and others will discuss AI trust and ethics at the upcoming conference on leadership in business, data and intelligence, Dec. 6 at The Forum Hotel on Darden School Grounds, organized by the LaCross Institute for Ethical Artificial Intelligence in Business.


Darden Professor Luca Cian co-authored “News from Generative Artificial Intelligence is Believed Less” with Chiara Longoni, Andrey Fradkin and Gordon Pennycook. He also co-authored “Algorithmic Transference: People Overgeneralize Failures of AI in the Government” with Chiara Longoni and Ellie Kyung, and “Artificial Intelligence in Utilitarian vs. Hedonic Contexts: The ‘Word-of-Machine’ Effect” with Chiara Longoni.

Darden Professor Rajkumar Venkatesan co-authored “Influence of Privacy Regulation on Customer-Centric AI Acquisitions: Case of GDPR” with S. Arunachalam.

See also the white paper “Building Stakeholder Trust in Artificial Intelligence” (Spring 2024).