AI Content
March 8, 2024

Can AI-Generated Content be Trusted? Lessons from CNET's AI Debacle

CNET's AI-generated content controversy underscores the importance of factual accuracy and the ethical concerns that come with using AI. Brands need to be vigilant, establish clear AI policies, and maintain transparency with their audience.

. . .


Did you know that 78% of US adults see news articles written by AI as a step in the wrong direction, one that could lead to the spread of misinformation?

With a growing demand for factual accuracy and ethical use of content, it’s no surprise that AI content has been in the news for quite some time. Remember the copyright lawsuits that were initiated over AI-generated art? Or the much-talked-about Writers Guild strike in the US?

One case that might have slipped through the cracks is CNET, a leading tech media platform. The company came under extensive public and media scrutiny after factual inaccuracies were discovered in its AI-generated articles. It was a pivotal moment that led to the introduction of an elaborate AI policy and revisions to multiple stories, months after they were originally written by AI.

Let’s look at what transpired.

A global survey of newsroom executives in May 2023 revealed that 85% of them are worried about information inaccuracy and content quality when using AI-generated content (source)

Factual inaccuracies in CNET’s AI-generated content

It started in January 2023, when an online marketer tweeted that CNET Money was using “automation technology” to create its news articles. The story was picked up by the tech site Futurism, which reported that CNET had been “quietly publishing entire articles generated by AI.” It also pointed out “dumb errors” in these articles that a (human) expert would not have made.

For instance, an article on compound interest, titled “What is Compound Interest?,” mentioned that a principal deposit of $10,000 with 3% interest would earn you $10,300 in a year.

This is factually incorrect.

The earnings from interest would be $300, while $10,300 would be the total value of the principal plus interest. However, the way the article was worded, readers who do not understand compound interest could easily have come away with the wrong idea.
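To make the distinction concrete, here is a minimal Python sketch of the calculation (our own illustration, assuming annual compounding; this is not code from CNET's article or its correction):

# Illustrates the distinction the CNET article blurred:
# interest earned vs. total balance after one year.

principal = 10_000     # initial deposit in dollars
annual_rate = 0.03     # 3% interest, compounded annually (assumed)
years = 1

# Compound interest formula: A = P * (1 + r) ** n
total_balance = principal * (1 + annual_rate) ** years
interest_earned = total_balance - principal

print(f"Total balance after {years} year(s): ${total_balance:,.2f}")  # $10,300.00
print(f"Interest earned: ${interest_earned:,.2f}")                    # $300.00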

This factual error was just one of many found in the more than 70 AI-generated articles published by CNET Money between November 2022 and January 2023.

Other stories included titles like “Should You Break an Early CD for a Better Rate?” and “What is Zelle and How Does It Work?”

Various media headlines on CNET's AI-generated articles story

Public and media reaction to CNET’s AI-generated articles

While we believe there’s nothing wrong with adopting new technology and experimenting, the problem was that CNET tried to hide its use of AI.

Until the news broke, readers were unaware that they were reading AI-generated content, as the byline was attributed to “CNET Money Staff.” The use of AI was revealed only in the author's bio, which was not directly visible on the page.

Needless to say, there was public outcry when this news got out.

In addition to the reports of factual inaccuracies in the AI content, questions were raised about the ethical use of AI-generated content.

When an independent agency ran the articles through AI content detectors, it found that over 87% of CNET’s AI-generated content was detectable with a publicly available tool, and only 12.8% had zero detectable AI content.


Measures implemented by CNET in response to public criticism

Following the reports and tweets, CNET had to take measures to address the backlash against its AI-generated content:

  1. Clarification: CNET owned up and published an explanation saying they were experimenting with AI assistance.
  2. Updating past stories: They corrected more than half of the published articles with public updates and revisions. These included lengthy correction notes as well as updated bylines for transparency.
  3. Updating the AI policy: In June 2023, they published an updated AI policy that laid out guardrails on how they would use AI tools to create content.

For example, one of their tenets on AI usage states, “If and when we use generative AI to create content, that content will be sourced from our own data, our own previously published work, or carefully fact-checked by a CNET editor to ensure accuracy and appropriately cited sources.”

An example of a correction issued on one of CNET's AI-generated articles (source)

Lessons to be learned from CNET

With the rise of AI, over 90% of enterprises have already adopted at least one AI technology. This has sparked concern among over 71% of employees about job security and the risk of being replaced. This widespread apprehension highlights the need for companies to approach AI tool integration with sensitivity and foresight.

If you are planning to use AI tools to create content, here are three takeaways from the CNET controversy:

  1. Be vigilant during fact-checking: AI-generated content requires rigorous fact-checking processes to ensure accuracy. The technology is not infallible and may sometimes hallucinate incorrect information.
  2. Have an AI policy in place: Brands must be aware of the ethical implications of using AI for content creation. Guidelines need to be set for content creators, along with disclaimers for AI-assisted content.
  3. Public perception and trust: The controversy highlights the importance of maintaining trust. Brands should be transparent about their use of AI in content creation and be prepared to address public concerns or criticism effectively.

What are your thoughts on the use of AI in journalism?

Let us know.

Want to cut the clutter and get information directly in your mailbox?