Using Tortured Phrases to Spot Problematic Papers

A recent article in Nature highlighted an unusual approach to spotting questionable research papers: the use of “tortured phrases”.

The piece focuses on the work of Guillaume Cabanac, a computer scientist at the University of Toulouse, and his colleagues.

According to Cabanac, he couldn’t understand why researchers in several articles were using phrases like “Colossal Information” instead of “Big Data” or “Haze Figuring” instead of “Cloud Computing”, so he began searching for those kinds of expressions in various databases.

Cabanac went on to find some 860 papers that featured one or more of these tortured phrases, and discovered that 31 of those papers were published in a single journal: Microprocessors and Microsystems.

That journal has already become the subject of a separate investigation by its publisher, Elsevier. According to Elsevier, some 400 articles that appeared in six special issues have received an expression of concern noting that they bypassed portions of the editorial process due to a “configuration error”.

That list includes the papers identified by Cabanac and his team.

However, it’s important to note that tortured phrases like the ones identified by Cabanac do not necessarily mean that the research is plagiarized, fabricated or otherwise invalid. They very often point to issues with translation, not integrity.

That said, they are still a warning sign and, as any teacher or professor can tell you, they are often the first clue that something is amiss with a paper.

Tortured Phrases, Poor Translation and Fabrication

Defining what a tortured phrase is can be difficult. However, the easiest way to think about it is this: an uncommon and awkward way of saying something for which a common expression already exists.

Other examples include “Leftover Vitality” for “Remaining Energy” and “Irregular Esteem” for “Random Value”. Such phrases may be technically correct, but they are worded in a way that no fluent speaker of the language would ever use.
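To make the idea concrete, here is a minimal detection sketch in Python. It simply checks text against a small lookup table built from the examples in this article; the phrase list and function are illustrative assumptions, not Cabanac’s actual screening tool, which relies on far larger curated lists.

```python
import re

# Illustrative sample of tortured phrases mapped to the expected term.
# (Drawn from the examples in this article; real screening efforts
# use much larger, curated lists.)
TORTURED_PHRASES = {
    "colossal information": "big data",
    "haze figuring": "cloud computing",
    "leftover vitality": "remaining energy",
    "irregular esteem": "random value",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected phrase) pairs found in text."""
    lowered = text.lower()
    return [
        (tortured, expected)
        for tortured, expected in TORTURED_PHRASES.items()
        if re.search(r"\b" + re.escape(tortured) + r"\b", lowered)
    ]

abstract = "We apply haze figuring to mine colossal information at scale."
for tortured, expected in flag_tortured_phrases(abstract):
    print(f"Suspect phrase: '{tortured}' (expected: '{expected}')")
```

Real screening is harder than this, since no phrase list is ever complete, but even a crude matcher along these lines would likely have flagged many of the papers in question.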

These expressions can appear in a work for a variety of reasons. One common cause is simply poor translation. When authors translate their work into a language they’re less familiar with, they may not be aware of all of its nuances and can make these kinds of mistakes.

However, they can also be produced by more nefarious means. For one, they can be the result of “Article Spinning”, a process by which someone uses an automated tool to swap words for synonyms. This is commonly done to evade plagiarism detection tools and make a work appear original when, in reality, it’s a complete duplicate.
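To see why spinning produces tortured phrases, consider this deliberately naive spinner sketch (the synonym table is hypothetical). Because it substitutes word by word with no awareness of fixed technical terms, “big data” comes out as “colossal information”.

```python
# Hypothetical thesaurus used by a naive article spinner.
SYNONYMS = {
    "big": "colossal",
    "data": "information",
    "cloud": "haze",
    "computing": "figuring",
}

def spin(text: str) -> str:
    """Swap each word for a synonym, ignoring context entirely."""
    return " ".join(SYNONYMS.get(word, word) for word in text.lower().split())

print(spin("Big data and cloud computing"))
# Prints: colossal information and haze figuring
```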

Another is automatically generated text. Though artificial intelligence is improving every day, its ability to write natural-sounding language remains limited. Even high-end writing AIs, including GPT, can produce such phrases.

This was also part of Cabanac’s work, as he ran a selection of abstracts through a tool designed to identify GPT-produced text and eventually found some 500 “questionable” articles in that single journal.
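For those curious to try this kind of screening themselves, the sketch below uses the publicly available RoBERTa-based GPT-2 output detector on Hugging Face. That model family sits behind the web demos commonly used for such checks, but this is an assumption about comparable tooling, not a description of Cabanac’s exact setup, and the sample abstract is invented.

```python
from transformers import pipeline

# Load a public GPT-2 output detector; its labels indicate whether
# text looks human-written or machine-generated.
detector = pipeline("text-classification",
                    model="roberta-base-openai-detector")

# Invented sample text, written to resemble a spun abstract.
abstract = ("The leftover vitality of every hub is utilized to "
            "improve the lifetime of the remote sensor organize.")

result = detector(abstract)[0]
print(result["label"], round(result["score"], 2))
```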

What all this adds up to is the fact that tortured phrases aren’t a sure-fire sign of some kind of integrity violation. However, they are a cause for further investigation, ideally before publication.

None of this is a surprise to teachers and instructors, who have long used these kinds of phrases as a warning sign that something is off with a student paper, especially when the student is otherwise fluent in the language at issue.

So, whether they realize it or not, many have been relying on tortured phrases and other translational oddities to spot these issues for a very long time. In that regard, it’s an old skill that needs a new light shined on it.

Bottom Line

What Cabanac’s work shines a light on isn’t just a novel approach to spotting questionable scientific publications, but a serious problem in academic publishing. Tortured phrases can be difficult for automated tools to spot, but they should be easily recognized by human readers.

The question becomes a simple one: Why did none of the peer reviewers or editors involved raise concerns about these papers before publication? The answer is also simple: It was likely due to inadequate human evaluation of the work.

As powerful as automated tools are for detecting integrity issues in published work, they are just one part of the process. Humans are still required to evaluate the work and determine its merit. Removing or even reducing the human element harms all of academic publishing.

Cabanac’s work should be a wake-up call, not necessarily about tortured phrases or specific methods for detecting possible integrity violations, but about the process itself.
