Earlier this month, Business Insider quietly dropped a bombshell. In a memo to staff, the editor-in-chief told journalists they can now use AI to draft stories — possibly without telling readers.

The policy makes BI one of the first major U.S. newsrooms to formally endorse such expansive use of AI, according to Status, which reported the news. “BI’s rules also stop short of offering transparency to readers on the matter,” it said.

What makes this news striking is that some other outlets explicitly require disclosure when AI is used for certain tasks, such as generating an article summary or translating an English article into Spanish.

Using AI is an efficiency win — but at what cost?

In 2022, working with colleagues Andrey Fradkin and Gordon Pennycook, we uncovered a striking finding: When news organizations said AI helped them write their stories, readers lost trust, even when the material was accurate.

Since then, ChatGPT has grown to 700 million weekly users, and newsrooms all over the world are using AI technologies. Our finding is even more relevant today.

But don’t take our word for it. A recent survey led by the nonprofit Trusting News found that 94% of news readers want to know when AI is used in reporting. At the same time, according to the 2025 Edelman Trust Barometer, only 32% of Americans trust AI. Taken together, these findings make it extremely difficult for news organizations to retain their credibility while relying on AI to meet the demands of publishing content around the clock.

The transparency that was meant to build trust instead signals to readers that the work lacks real human judgment.

The 2022 Finding: Disclosure Hurts Trust

In our paper, “News from Generative Artificial Intelligence Is Believed Less,” we explored the question of trust in news written by generative AI.

When it comes to AI-written news, people fall into two camps. Some think machines, free of bias and emotion, might actually be more accurate. Others believe that without empathy and moral judgment, AI simply can’t be trusted. Add in the fear of humans being replaced, and skepticism grows.

We tested these competing theories in two experiments with more than 4,000 participants, designed to measure perceptions of news accuracy and trust in human versus AI reporting.

We found that simply labeling a story as “AI-generated” made people trust it 7 to 14 percentage points less, whether the news was true or false. Readers consistently regarded AI reporters as less trustworthy than human journalists, probably because they believed AI lacked the empathy, moral judgment and context that are the bedrock of good journalism.

And yet this kind of generative AI may be more pervasive than readers realize. Even the news you read every day might have been written by neural networks; most likely, a good proportion of it already is.

Our findings are confirmed in a recently published paper by Oliver Schilke and Martin Reimann. They found that individuals who disclose their AI usage are consistently trusted less than those who don't.

As AI grows more common, this transparency “tax” appears to be holding firm, not weakening.

The ChatGPT Revolution

A lot has changed since we published our paper in 2022. Two months after its launch, ChatGPT had 100 million monthly active users, making it the fastest-growing consumer app in history. By comparison, TikTok took nine months to reach the same number of users.

Today, the use of generative AI tools is ubiquitous. A 2024 working paper from the Federal Reserve Bank of St. Louis found that about 40% of Americans between the ages of 18 and 64 now use generative AI.

The rapid adoption has massive implications for everyone — especially journalism.

According to the web analytics firm Similarweb, ChatGPT news queries surged 212% between January 2024 and May 2025. Over the same period, traditional search traffic to news sites fell by hundreds of millions of visits a month.

Not surprisingly, major news organizations have a complex relationship with AI.

AI chatbots often mention Reuters, the Financial Times, Time, Axios, Forbes and the Associated Press. This creates a cyclical dynamic in which major outlets both use AI tools and have their content consumed by them.

The Disclosure Dilemma

Researchers call what newsrooms are going through today a “disclosure dilemma.”

Surveys show that people don’t just want to know whether AI was used; they also want to know why and how. The Trusting News survey found that 87% of respondents want to know why reporters employed AI, and 92% want assurance that humans checked any material generated by AI.

But as our research shows, openness about AI use often backfires. The negative effect persists no matter how the disclosure is framed, whether it is voluntary or mandatory, and even when readers already suspect AI is involved.

Editors and publishers are now caught between the public’s demand for openness and the reality that disclosure makes people less trusting. Neither full disclosure nor secrecy offers a lasting solution.

Finding a Way Forward

We believe newsrooms shouldn’t give up on being open about AI use; instead, they should rethink how they talk about it. A few approaches look promising:

Context matters more than labels. Newsrooms shouldn’t just slap an “AI-generated” tag on stories; they should explain how AI supported the human reporting. Did algorithms analyze huge amounts of data? Did AI transcribe interviews? Did machines produce first drafts that reporters then substantially revised?

Human oversight is key. Readers trust AI more when humans clearly stay in charge.

Match task to tool. People trust AI more for day-to-day work, less so for creative tasks. For example, they may be fine with AI translating or analyzing data, but not when it’s used to pen opinion pieces or help with investigative journalism.

Earn trust in the use of AI. News organizations need to ensure, and demonstrate, that AI strengthens rather than weakens the core values of journalism. That means showing that the technology makes it possible to do more in-depth research, uncover more sources or check facts faster, and being transparent about how this improves reporting. Efficiency can be valuable, but only if it frees journalists to focus on work that requires human judgment, not if it reduces journalism to cheaper content.

ProPublica, for example, recently disclosed how it uses AI responsibly in its investigations: When its reporters prompted a large language model to help identify “woke” themes in a database of grants, AI helped them tell a story about science funding and U.S. Sen. Ted Cruz.

Why This Matters Beyond News

Our findings extend far beyond journalism.

The drop in trust we observed when AI authorship was disclosed likely has parallels in other contexts where organizations use AI to communicate — from customer service chatbots to corporate communications. Any business that speaks to the public through machines must grapple with the same challenge: how to maintain credibility when audiences doubt the source.

In 2023, Bill Gates wrote that after watching OpenAI’s GPT model answer questions on an AP Biology exam, “I knew I had just seen the most important advance in technology since the graphical user interface.”

How media handles this change will affect not only the future of newsrooms but also the health of democratic discourse.

In the age of generative AI, trust isn't just about being open — it's about being open about being human. The problem for newsrooms isn't whether or not to reveal their use of AI; it's how to show that journalism is still a fundamentally human activity that is guided by human values, judgment and accountability, even when algorithms are used.

As news organizations deal with this conundrum, one thing is clear: The traditional ways of being open aren't working anymore. In a time when machines can write like humans, it's up to newsrooms to show that people are still in charge of the machines.

Luca Cian is the author of “News from Generative Artificial Intelligence Is Believed Less,” published in the Proceedings of ACM FAccT 2022, with colleagues Chiara Longoni from Bocconi University in Milan, Italy; Andrey Fradkin from Boston University’s Questrom School of Business; and Gordon Pennycook from Cornell University.

About the Expert

Luca Cian

Killgallon Ohio Art Associate Professor of Business Administration

Cian’s area of marketing expertise encompasses consumer behavior and psychology, specifically as related to sensory marketing and social cognition.

His work has appeared in leading academic journals, including the Journal of Marketing Research, Journal of Consumer Psychology and Journal of Consumer Research, and has been discussed on NPR and in other mainstream channels including The Huffington Post, New York magazine, The Atlantic and Fast Company’s Co.Design.

Before coming to Darden, Cian was a postdoctoral scholar at the University of Michigan Ross School of Business, while also serving as a marketing consultant for the Italian Environmental Protection Agency and working at the Sensory Marketing Laboratory and at the Social Cognition Laboratory.

M.S., University of Trieste; Ph.D., University of Verona (visiting at University of Michigan); Postdoc, University of Michigan
