
As touched on in the last post in this series, using deep search can help an organization reduce the manual effort needed to secure crucial data found in gaps, such as across multiple unconnected data sources. The solution can also help companies that need to gather data quickly to inform time-sensitive decisions.

Scaling Up Manual Curation with Less Effort

Even when sources of information are already available to an organization, deep search can be an effective solution for reducing the manual labor needed to find relevant data. In one example, a client of ours tracked company news as part of gathering competitive intelligence.

The sources of information for this client were easy to locate and included competitor websites. The existing process the client used to track competitive intelligence involved a single employee checking these various websites for relevant content, including press releases, investor news, company newsletters, etc. The employee then summarized the content manually.

When the client applied deep search to this need, it was able to significantly increase the number of websites tracked, as well as index and tag the content it found against an internal ontology so relevant information could be searched and delivered to different business users. All this tracking ran three times a day, helping ensure the organization had access to the most up-to-date information available.
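As an illustration of the tagging step, here is a minimal Python sketch that assumes the ontology can be flattened into a mapping of concepts to trigger terms; the concepts and terms shown are invented for illustration, not the client's actual ontology:

```python
# Illustrative flat ontology: concept -> trigger terms. A real internal
# ontology would be richer (synonyms, hierarchies, concept IDs).
ONTOLOGY = {
    "product-launch": ["launches", "unveils", "introduces"],
    "financial-results": ["earnings", "quarterly results", "revenue"],
    "leadership-change": ["appoints", "steps down", "new ceo"],
}

def tag_document(text: str) -> list[str]:
    """Return the ontology concepts whose trigger terms appear in the text."""
    lowered = text.lower()
    return [
        concept
        for concept, terms in ONTOLOGY.items()
        if any(term in lowered for term in terms)
    ]

print(tag_document("Acme unveils its new platform; quarterly results follow."))
# -> ['product-launch', 'financial-results']
```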

Further, the client was able to rate the importance of each item tracked using a trained model, a supervised machine-learning approach that requires an initial set of data that domain experts can review and classify with any labels they choose. In the context of this deep search project, documents in the initial dataset were labeled as “important” and “not important.” The organization then used this dataset to establish a baseline and train a model, which was applied to incoming data sourced via crawling, allowing newly found documents to be labeled automatically.
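The post does not say which model or features the client used; the following scikit-learn sketch shows one common approach, a linear classifier over hashed text features, with made-up seed documents and labels:

```python
# Sketch of training a baseline "important" / "not important" classifier
# on an expert-labeled seed set. The model choice and the seed data are
# assumptions for illustration only.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Seed documents labeled by domain experts (illustrative).
seed_docs = [
    "Competitor announces acquisition of key supplier",
    "Office holiday party photos posted",
    "Rival files patent for next-generation process",
    "Cafeteria menu updated for spring",
]
seed_labels = ["important", "not important", "important", "not important"]

# A stateless hashing vectorizer can featurize future documents without
# refitting, which matters for the feedback loop described below.
vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
model = SGDClassifier(random_state=0)
model.fit(vectorizer.transform(seed_docs), seed_labels)

# Label newly crawled documents automatically.
new_docs = ["Competitor opens new R&D facility in Boston"]
print(model.predict(vectorizer.transform(new_docs)))
```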

A mechanism was set up for newly crawled information, allowing users to evaluate the automatically applied rating and override it as necessary. This feedback loop enables the client’s machine-learning model to be updated on the fly and helps improve its accuracy over time.
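Continuing the sketch above, a user override can be folded back into the model with an incremental update; scikit-learn's partial_fit is one way to implement the on-the-fly update described here, though the client's actual mechanism is not specified:

```python
# Feedback loop sketch: when a user overrides an automatic rating, feed
# the corrected label back into the model incrementally.
corrected_docs = ["Competitor opens new R&D facility in Boston"]
corrected_labels = ["important"]  # the user overrode "not important"

model.partial_fit(
    vectorizer.transform(corrected_docs),
    corrected_labels,
    classes=["important", "not important"],
)
```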

Lastly, in this deep search example, the key information that was tracked and rated was pulled into a weekly report that was automatically emailed to the target audience, saving even more manual effort. Though the process discussed here was not complicated, it was still onerous for a single employee to take on. By employing a deep search solution, the client was able to scale the process and generate many more useful competitive intelligence insights.
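The report itself is simple to automate. A sketch using Python's standard library follows; the SMTP host, addresses, and item structure are all placeholders:

```python
# Sketch of assembling rated items into a weekly email digest.
import smtplib
from email.message import EmailMessage

items = [
    {"title": "Competitor announces acquisition", "rating": "important",
     "url": "https://example.com/news/1"},
]

body = "\n".join(f"[{i['rating']}] {i['title']} - {i['url']}" for i in items)

msg = EmailMessage()
msg["Subject"] = "Weekly competitive intelligence digest"
msg["From"] = "deep-search@example.com"
msg["To"] = "ci-team@example.com"
msg.set_content(body)

with smtplib.SMTP("smtp.example.com") as smtp:
    smtp.send_message(msg)
```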

Seizing on a Small Window of Opportunity

There are also instances where deep search can benefit an organization when limitations are imposed on data beyond the normal expectations, such as when the data is transient and available only for a limited time.

We have seen problems related to this type of limitation arise frequently, and often the solution requires access to new information the instant it is published; any delay results in missed opportunities for an organization. To illustrate this, consider a company seeking to extract value from pre-conference activity. This type of information is updated regularly, in some cases as frequently as hourly.

One benefit of this information is the ability to sift through conference exhibitors, find those of interest, and research their websites to make an organization’s attendance extremely targeted. Another is obtaining a sense of key conference themes by trawling through all the abstracts to isolate those of interest. The abstracts might even tell a story about where the market is heading, yet another benefit of possessing this information.

However, to utilize the most accurate, up-to-date version of pre-conference activity, an organization would need to dedicate employee time to frequently visiting the conference website, trawling the abstracts, trying to identify people, issues, and talks of relevance, and then packaging this information and sharing it internally. This is a time-consuming process prone to human error.

By using deep search, this process can be automated and made more accurate. A deep search solution can crawl the conference site as often as needed (where crawling is permitted), gather and automatically curate discovered content, and even run an ontology across the findings to focus on topics of interest or perform keyword extraction.
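To make this concrete, here is a minimal Python sketch of such a crawl, assuming the abstracts are plain HTML pages; the base URL, CSS selectors, and terms of interest are illustrative, and robots.txt is honored per the caveat above. The outbound links it collects feed the link-following described next:

```python
# Minimal sketch of a targeted conference-site crawl with keyword matching.
import urllib.robotparser
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BASE = "https://conference.example.com"  # illustrative
TERMS_OF_INTEREST = {"gene therapy", "mrna", "biomarkers"}

robots = urllib.robotparser.RobotFileParser(urljoin(BASE, "/robots.txt"))
robots.read()

def fetch(url: str) -> BeautifulSoup | None:
    """Fetch and parse a page, skipping anything robots.txt disallows."""
    if not robots.can_fetch("*", url):
        return None
    return BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

def crawl_abstracts(listing_path: str = "/abstracts"):
    listing = fetch(urljoin(BASE, listing_path))
    if listing is None:
        return
    for link in listing.select("a.abstract[href]"):  # assumed selector
        page_url = urljoin(BASE, link["href"])
        page = fetch(page_url)
        if page is None:
            continue
        text = page.get_text(" ", strip=True).lower()
        hits = {term for term in TERMS_OF_INTEREST if term in text}
        # Outbound links (e.g., company websites referenced in the
        # abstract) can be queued for a deeper, enriching crawl.
        outbound = [a["href"] for a in page.select('a[href^="http"]')]
        if hits:
            yield page_url, hits, outbound

for url, topics, links in crawl_abstracts():
    print(url, "->", sorted(topics), f"({len(links)} outbound links)")
```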

Deep search can even follow internal links to deeper information (e.g., company websites referenced in the abstracts) and add that information into the mix, another example of extracting value from data found within the gaps of intersecting sources. In this case, the newly created dataset unites conference information with company and even product information in one place. The potential of a data source like this one includes:

  • Allowing customers to find key opinion leaders to engage with at the conference
  • Enabling a business leader to whittle down a long list of exhibiting companies to only those of probable value
  • Furthering a company’s strategy by identifying target companies for acquisition
  • Helping an organization to infer market direction based on a summary view across all abstracts

After the conference, this information source can continue to provide value by allowing an organization to keep an archive of everything crawled and enriched, helping it to continually grow its understanding of the market and add to its competitive intelligence data. While this example is focused on conferences, a deep search solution can potentially provide as much or more value when applied to any data that’s time-bound.

This is the final post in a four-part series on how deeper, automated searching can help your organization more easily find the information needed to make the right business decisions. View the first three blog posts in this series here:

Beyond Standard Search: Solving Problems with New Datasets from Multiple Sources

Beyond Standard Search: Getting More Value from Your Data

Beyond Standard Search: Getting the Targeted Data Your Organization Needs


Author: Carl Robinson

Carl Robinson is Senior Corporate Solutions Director for CCC. He focuses on helping clients look at business vision, goals and strategies around their content and tooling to enable flexibility and readiness to meet the ever-changing demands of the digital market. Carl has been in publishing since 1995 and has worked for Pearson Education, Macmillan Education and Oxford University Press.

Author: Stephen Howe

Stephen has spent his career working at the intersection of publishing, education, and technology, holding positions in sales, sales management, production, project management, digital publishing, digital editorial, and product management. Trained in the liberal arts tradition, Stephen holds a BA and MA in philosophy, an MBA in management, and a Master’s in Analytics. Stephen currently works as the Senior Product Manager - Analytics at CCC and serves on the advisory board at Brandeis University for the Master’s in Strategic Analytics program.