CNET’s AI Plagiarism Debacle

Over the past two weeks, CNET has become the poster child of artificial intelligence (AI) gone wrong.

First, the website was caught publishing articles produced by an AI under the byline “CNET Money Staff,” as reported by Frank Landymore at Futurism.

There was no clear disclosure that the articles were written by an AI, and the practice was only discovered after some users clicked the byline itself. To make matters worse, the byline page didn’t make it completely clear either, saying that “This article was generated using automation technology” rather than plainly stating it was written by an AI.

All told, it appears CNET has published some 73 articles using their AI system.

Still, CNET brushed the criticism aside, saying that humans were editing and fact-checking those articles before publication. Unfortunately, that argument fell at the first hurdle when Jon Christian at Futurism published an article highlighting factual errors found in those articles.

However, even that wasn’t the bottom of the rabbit hole, as Christian published another article highlighting allegations of plagiarism committed by the bot.

The allegations themselves dealt with short passages, usually one or two sentences, that were similar to previously published work but had been rewritten by the bot. Still, it was more than clear that the bot was attempting to rewrite or alter existing passages rather than write original text.

This prompted CNET’s editor-in-chief, Connie Guglielmo, to speak about the issues. On the plagiarism, she placed the fault with the editors overseeing the works, alleging that, “In a handful of stories, our plagiarism checker tool either wasn’t properly used by the editor,” or that the tool missed the issue.

Regarding the factual errors, she said CNET has performed an audit of all articles published by the AI and found additional issues. They’ve added corrections to all pieces impacted.

She went on to say that, while they have “paused” the AI program for now, they intend to bring it back.

Doing Everything Wrong

CNET, for their part, have done pretty much nothing right in this space.

They did not properly disclose that they were using an AI to write stories, even falsely attributing them to “staff”. They claimed to be fact-checking and editing the stories, yet the stories were riddled with both inaccuracies and plagiarism.

Even now, there is much we don’t know. We don’t know which AI CNET used, we don’t know how that AI was trained (other than on the stories it plagiarized from) and we don’t know how CNET will change their disclosures when the program is restarted.

In their rush to experiment with AI, CNET skipped or shortchanged every step they could have taken to be transparent with readers and to ensure that the published articles were at least of good quality.

That, in turn, is the problem with AI in a journalism space. You can have an AI reporter, such as the Washington Post’s Heliograf, but if you want that reporter to do more than fill-in-the-blank articles, which is what Heliograf does, it requires humans to edit, fact-check, polish and add to its work.

However, if the goal is to have fewer humans working on a particular piece, AI is not the way to go. Yes, humans need fact-checking and editing too, but with a human, you at least know how the article was written. You can review notes, you can check their sources and lean somewhat on their expertise.

An AI is a black box. A prompt goes in, text comes out, and how it got from A to B is unknown even to the AI’s creators. As such, it needs an even higher level of editing and fact-checking than a human does. CNET, clearly, didn’t provide that.

Simply put, AI is not a simple shortcut, especially in a journalism setting. If you want to use AI, you need to be transparent about how you use it, find appropriate work for it to do and then do the work behind the scenes to make sure it doesn’t go off the rails.

None of that is easy, as CNET can now attest.

So, How Bad Are The Sins?

All this raises a question: How bad were the missteps by the AI?

The answer is that they were pretty bad, but not the worst.

Looking at the plagiarism allegations, the passages at issue are short and show signs of rewriting. It’s a case where the bot did a poor job of paraphrasing. Among human writers, it is easily one of the most common mistakes that students make.

To be clear, it is plagiarism. It is absolutely plagiarism. However, I see this exact kind of plagiarism in students all the time, and it often comes more from poor writing habits and skills than an intent to deceive. 

It’s also worth noting that this plagiarism doesn’t rise to the level of copyright infringement. At least not based solely on the examples given. That could change if there are other, deeper issues. But short, rewritten passages are not likely to sustain a copyright infringement claim on their own.

What’s far more damning is what this says about the CNET AI. This low-quality rewriting is something I would expect from an “automated paraphrasing tool” from six years ago or a scraping-and-spinning tool from 15 years ago. This doesn’t read like a modern AI generating new text; it reads like a low-quality article spinner trying to hide its sources.

Turning to the factual errors the bot made, these are mistakes that, while obvious to people in the field, would not be obvious to a new reporter or someone with less expertise.

However, I would expect an organization like CNET to have fact-checkers working behind new reporters to catch and correct such errors. CNET, from all appearances, put more faith in the AI than they would in a human reporter and did not apply the same rigor that I would expect.

In short, these errors and instances of plagiarism speak very ill not just of the AI CNET used, but of CNET itself. The AI failed to do its job, and CNET shares a portion of the blame for not adequately checking behind it.

However, the AI did not make the decision to hide its use. That issue is CNET’s alone to bear, and it is the one that reflects worst on CNET as an institution of journalism.

Bottom Line

If CNET had properly disclosed that it was using an AI reporter, a lot of this wouldn’t be as big of an issue. 

Yes, there would have been, and still would be, a backlash against the use of AI in this space, but the issues found in the AI’s writing wouldn’t have been as damaging.

As things sit now, CNET was not only coy about their use of an AI, but also allowed that AI to publish misinformation and plagiarism on their site.

It’s not a good look for CNET.

However, perhaps the worst misstep is that CNET has made it clear the program is only paused. Pushing forward after these issues makes it seem as if CNET is more committed to AI than to its own reputation.

While I don’t necessarily think a human reporter would or should be fired for these missteps, CNET’s own actions have tainted this program. What could have been an interesting experiment if done transparently has instead become a black mark for CNET.

CNET would be wise to distance themselves from that, not push forward with it. But that is clearly not the decision they have made. 
