Last spring, not long after OpenAI launched ChatGPT, an AI working group in the US House of Representatives obtained 40 licenses for the generative AI tool. ChatGPT and other available AI tools now conduct systematic reviews of scientific literature for government officials, searching millions of information sources. And the machines are expected to do much more in the years ahead.

Members of government and public policymakers around the world rely on science and science publishing when shaping regulation and legislation. Staying current on research is a formidable challenge for the public sector, especially as the volume of science publishing grows. Ethical concerns, of course, temper the enthusiasm over AI. Congressional staff, for example, must limit their ChatGPT use to research and evaluation, and they may input only non-sensitive data.


Early in his career, Dr. Christopher Tyler, of the Department of Science, Technology, Engineering, and Public Policy at University College London, was a science advisor in the House of Commons. Looking back from the perspective of 2024, he wishes ChatGPT had been available to help with his work.

“Oh, a thousand times yes. It would have been fantastic. I can’t tell you how long I used to spend doing things like scoping new inquiries for select committees, where I would have been able to just throw into ChatGPT a question,” says Dr. Tyler, who has written for Nature about the powerful potential of AI in developing science policy.

“We’ll probably find that these kinds of tools will speed up a lot of the donkey work component of science advice to enable people like me back in the day to spend more time face to face, more time crafting bespoke briefs for individuals, more time making sure that the evidence synthesis met the exact need of the policy questions that we’re being asked, rather than just scrambling for information the entire time.”

Rachel Martin, Elsevier’s global director of sustainability, served on a team that developed a proof-of-concept project testing the suitability of gen AI narratives for advisors and their clients in government. She tells me what readers think of machine-written policy documents.

“One of the biggest things was that everybody said it reads well. Nobody thought, ‘Oh my God, a machine has written this.’ Not at all,” Martin says.

“People said they wanted data. They wanted a clear number. And they wanted that to be citable. They wanted to be able to go to that document and to say, ‘OK, this study says that it’s this number.’ All these elements come into it, and you suddenly realize that this is a lot more complicated. It isn’t just a simple question, ‘Hey, ChatGPT, please write my Christmas menu.’ This is far more detailed and far more nuanced if it’s going to work and work at scale.”


Author: Christopher Kenneally

Christopher Kenneally hosts CCC's Velocity of Content podcast series, which debuted in 2006 and is the longest continuously running podcast covering the publishing industry. As CCC's Senior Director, Marketing, he is responsible for organizing and hosting programs that address the business needs of all stakeholders in publishing and research. His reporting has appeared in the New York Times, Boston Globe, Los Angeles Times, The Independent (London), WBUR-FM, NPR, and WGBH-TV.