If you’ve been following CCC’s Velocity of Content since its inception, you’ve likely come across a blog post (or several) from Jane Reed, Linguamatics’ head of life science strategy.

Linguamatics has been a long-time partner of CCC, so we virtually sat down with Jane to discuss how the company has evolved, how it’s meeting customers’ ever-changing needs, and the biggest industry shakeups it has seen since partnering with CCC nearly a decade ago.

Tell us an interesting fact about Linguamatics that people may not know.

JR: Did you know that one of Linguamatics’ founders met Queen Elizabeth?

Linguamatics received the Queen’s Award for Enterprise in 2014, an award for International Trade in recognition of outstanding growth in export sales across the USA and Europe for its Natural Language Processing [NLP] text analytics software platform. As part of the award, Phil Hastings, our Senior Director of Business Development, and David Milward, one of the founders and previous CTO, went to Buckingham Palace to receive the award from the late HM Queen Elizabeth.

Linguamatics has been serving the healthcare and pharma space for many years. In a recent blog post you talk about the “healthcare data revolution.” Can you give us a brief explainer of what’s happening in the industry – and how Linguamatics is responding?

JR: The last few years have witnessed the ongoing expansion of real-world healthcare data (RWD) sources, which has been accompanied by a rise in the recognition and approval of these data by global regulatory agencies, including the U.S. Food and Drug Administration, the European Medicines Agency, and many health technology assessment agencies. The use of real-world evidence (RWE) to support regulatory decision making, such as the approval of new indications for approved drugs, has been given added emphasis by the 21st Century Cures Act (2016).

Increasingly, rich RWD sources are being used with cutting-edge analytical technologies to deliver previously unattainable insights. These new data sources and tools hold out the possibility of improving drug discovery and development processes end-to-end, addressing unmet medical needs, reducing healthcare costs, and enhancing the quality of care. Many sources of real-world data contain large amounts of unstructured text (e.g. medical records, literature, congress reports, or even social media for patient-reported outcomes). IQVIA NLP from Linguamatics extracts the key facts, using relevant ontologies and focused queries, transforming real-world data into actionable intelligence for decision making.

How are we responding?

As the understanding of the value of RWD grows, we are seeing the need to cater to a wider ecosystem of users, including expert informaticians, data scientists, and business users such as scientists and clinicians. To serve them all, we have developed a broad suite of offerings that enable these users to gain the value we know NLP can bring from the unstructured text relevant to RWD and RWE. We are working with partners – both within IQVIA and outside – to provide the comprehensive solutions that customers need, integrated with our NLP. Linguamatics doesn’t need to do it all, but we do need to know who to work with for best practice and best customer value – and we do.

The Linguamatics I2E application delivers text mining capabilities across a broad range of application areas. Tell us a little about how I2E works.

JR: Linguamatics I2E has been our mainstay NLP product for over 20 years; more recently we have released new products, such as NLP Data Factory and the Human Assisted Review Tool, that are ideal for data scientists, expert informaticians, end-user scientists, and researchers. All these products share the NLP Core Platform to tackle any sort of unstructured text and transform it into structured, enriched information.

I2E itself is a powerful search tool that enables the creation of sophisticated queries that combine words, phrases, ML-predicted entities, terminologies, linguistic units, regular expressions, and much more – linked together with relations that extend Boolean logic (AND, OR, NOT, etc.). As well as search, I2E enables the user to “shape” their results with appropriate features extracted from source documents, and then export them to a variety of commonly used output formats, e.g. XML, JSON, TSV, HTML, or Docx.

Under the hood, the NLP Core Platform performs high-throughput processing of incoming documents that understands the fundamental elements that make up text: sentences, phrases, nouns and verbs, other parts of speech, the morphology of words, and more. Our NLP also leverages a range of ontologies covering the key concepts in life science and healthcare – such as drugs, genes, and diseases – with millions of terms and synonyms to enable effective search. It also recognizes chemical names and patterns of words in text, assigning them as dates, dosages, mutations, measurements, etc. And if that is not enough, NLP Core exposes an API to give customers even more fine-grained control over the NLP!
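To make the idea concrete, here is a toy sketch of the general pattern Jane describes – ontology-driven entity matching combined with an AND-style relation over sentences, exported as structured data. This is not I2E’s actual API; the ontology, names, and functions below are invented purely for illustration.

```python
import json
import re

# Toy "ontology": canonical concept -> synonyms.
# (Real life-science ontologies hold millions of terms and synonyms.)
ONTOLOGY = {
    "drug": {"aspirin": ["aspirin", "acetylsalicylic acid"],
             "metformin": ["metformin", "glucophage"]},
    "disease": {"diabetes": ["diabetes", "type 2 diabetes"],
                "headache": ["headache", "cephalalgia"]},
}

def find_entities(sentence, entity_type):
    """Return canonical concepts whose synonyms appear in the sentence."""
    text = sentence.lower()
    return [concept
            for concept, synonyms in ONTOLOGY[entity_type].items()
            if any(re.search(r"\b" + re.escape(s) + r"\b", text)
                   for s in synonyms)]

def extract_relations(document):
    """AND-style query: a drug and a disease co-occurring in one sentence."""
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", document):
        for drug in find_entities(sentence, "drug"):
            for disease in find_entities(sentence, "disease"):
                results.append({"drug": drug, "disease": disease,
                                "evidence": sentence.strip()})
    return results

doc = ("Glucophage was prescribed for type 2 diabetes. "
       "The patient also took aspirin for a headache.")
# Export the "shaped" results as JSON, one of several common output formats.
print(json.dumps(extract_relations(doc), indent=2))
```

Note how the synonym lookup maps the brand name “Glucophage” back to the canonical concept “metformin” – that normalization step is what turns free text into structured, queryable data.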

Natural language processing is undoubtedly useful for data scientists and teams heavily invested in R&D outcomes. What are some areas of a business where NLP can play a big role, but is often underutilized?

JR: IQVIA NLP was founded over 20 years ago [https://www.linguamatics.com/about-us], and in the early years much of the use focused on drug discovery, such as target identification and prioritization, systems biology, and biomarker discovery. Over time, our NLP technologies have evolved, and our customers have matured in their use of NLP. This has led to NLP playing a role across the whole lifecycle of a drug, from bench to bedside. This includes clinical trial analytics, preclinical toxicology and clinical safety, regulatory and medical affairs, health economics and outcomes research, brand and product teams, and within hospitals and health insurers, for better patient and population care.

With the focus in the last few years on digital transformation, NLP can play a big role in enabling the move from document-driven to data-driven decision support. For example, clinical, safety, and regulatory processes are all reliant on document-heavy workflows, which means organizations incur a significant manual burden to find, read, and act on the relevant information across these processes. NLP – along with other AI innovations such as natural language generation and structured content authoring, and end-to-end integration with internal data management systems – can streamline many of these processes. Human-in-the-loop review and curation brings 10x or greater improvements in efficiency while maintaining quality and auditability for these critical processes.

Linguamatics has been a longstanding partner of CCC. Over these years, what do you think has changed the most in terms of the combined value our partnership brings to customers?

JR: Linguamatics and CCC have been working together for nearly a decade to collaboratively develop valuable, scalable solutions for our clients through the synergy of award-winning NLP and world-class content provision. Linguamatics and CCC launched their first integrated offering – a seamless integration from CCC’s RightFind XML to Linguamatics I2E – in 2016, and we aim for continuous improvement in what we provide to customers. One significant change over this time is the acceptance of the need for technology to handle text in published literature. Although NLP has been studied for decades, when Linguamatics was founded it was not commonly used within pharma discovery and development processes, and it could be hard to convince people why they needed NLP to effectively handle unstructured text. That has changed hugely over the past few years; there is now a deep and widespread acceptance that AI technologies are needed to manage the ever-expanding velocity of content.

Interested in learning more? Check out: 


Author: Christine McCarty

Christine Wyman McCarty is product marketing director for corporate solutions at Copyright Clearance Center. Through over a decade of experience working with clients at R&D intensive companies, she has gained an understanding of the challenges they face in finding, accessing, and deriving insight from published content. She draws on this expertise to shape innovative product offerings that solve market problems. Christine has held a variety of positions at CCC including roles in software implementation and product management. Christine has a Masters in Library and Information Science from Simmons University and practiced librarianship for several years before finding her passion for helping companies digitalize their knowledge workflows with software.