Long before ChatGPT arrived on the hype cycle, many of us were considering what artificial intelligence (AI) technology could mean for scholarly publishing – how it might change processes developed over centuries, and how publishers should react.

In 2019, in an article for the European Medical Writers Association journal, I asked: will medical writers be replaced by robots? After some deliberation, the answer was no. However, the challenges posed by AI-powered technologies are now much more apparent, and the main challenge right now is simply keeping up with the pace of change.

For publishers, there is significantly more discussion and evaluation of the risks, both real and perceived. A major concern is the amplification of existing fraudulent practices, such as paper mills and fake papers. At the same time, publishers are developing more clarity on what we define as AI and what sits under its umbrella. I would break it down into four focal areas: big data processing, reasoning around data, problem-solving, and learning.

ChatGPT, courtesy of a $10 billion investment from Microsoft, is very famous, but it is just one of many large language models and generative text tools. It still requires human expertise and skilled use. A human will not be replaced by AI, but will be replaced by a person who uses, and is skilled in using, AI.

I believe publishers will benefit broadly in three ways from these AI tools. Firstly, AI has the potential to automate repetitive and tedious tasks – managing large submission volumes, increasing process efficiency, and streamlining peer review. Secondly, it can help direct authors to the journals whose aims and scope best match their work. Thirdly, it may move towards being able to reason about and assess the novelty of a scientific research study, whilst also checking for ethical compliance, copyright issues, and image duplication – all consistent challenges for academic publishers.

What steps should publishers take right now to address some of the concerns and questions that generative AI raises? Education is everything here. Nature, Science, and other publishers moved very quickly to update their authorship guidelines and ethical policies to address ChatGPT and related AI tools. We need to educate internal editorial teams on the use of these tools and on how authors may be using them. Finally, we are seeing broad participation in cross-publisher initiatives to define best practices and policies around the use of these AI technologies.

When considering AI, I occasionally think of The Hitchhiker’s Guide to the Galaxy. Writing in the late 1970s, author Douglas Adams imagined an all-knowing electronic book, and he pretty much got it right. At the same time, that prospect can sound scary. But as it states on the cover of The Hitchhiker’s Guide, I would simply say to all, “don’t panic!”

Author: Martin Delahunty

Martin Delahunty is Managing Director of Inspiring STEM, an independent publishing consultancy which provides strategy, business development, market research and training services to publishers, universities, pharmaceutical companies, societies, funders, and technology vendors.