As AI's capabilities expand beyond familiar use cases like autonomous vehicles, research groups are finding ways to use it to improve existing processes such as data analysis. Perennial Field Day delegate Ray Lucchesi covered one such case, how AI can be used to better identify high-impact research studies, on his personal blog.
Expanding the Reach of AI
We’ve seen amazing advancements in the ways AI can be used to automate and improve upon the ways we do, well, just about anything. Beyond obvious applications like self-driving automobiles and image recognition, AI models are also being used in the security and financial sectors to identify anomalies for risk detection. Similarly, AI detection solutions have proved invaluable for improving large-scale farming, homing in on the precise moments when crops are ready to be watered, pruned, and harvested.
Beyond these examples, a quintessential use of AI is data analysis. With so many data points at play in data operations, humans simply can’t process them all themselves; AI provides a way to quickly and effectively ingest and operate on data points like metadata, surfacing insight in unexpected areas.
Using AI to Identify Research to Invest In
One unexpected area that AI could potentially revolutionize is research indexing. Most commonly, research papers are ranked based on the other scholarly articles that reference them. While this works decently for showing which papers have gained traction, it doesn’t necessarily show the impact a specific study may have on the broader scientific community. As a result, when investors and grant funders decide where to put their money, they may overlook the researchers whose work will have the greatest impact.
Researchers at MIT, however, have found a way to use AI to improve these indexing methods and maximize the visibility of the highest-impact studies. In his personal blog, Ray Lucchesi, one of the GreyBeards of the GreyBeards on Storage podcast, writes about the recent MIT study and its DELPHI (Dynamic Early-warning by Learning to Predict High Impact [research]) tool. He describes the tool as follows:
Apparently, DELPHI uses article metadata, such as one can find looking at the Nature article behind this research [sic] to create a knowledge graph. They then use the knowledge graph and an AI model to predict whether the research will become high impact or not. The threshold they used for their publication was any research DELPHI predicts would be in the top 5% of all research in a domain.
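To make the idea concrete, here is a minimal Python sketch of the general approach Lucchesi describes: build a small graph from article metadata, score each paper, and flag the top 5%. The citation data, the graph structure, and the trivial citation-count "model" below are all illustrative assumptions for demonstration, not DELPHI's actual features or prediction model.

```python
# Sketch: score papers from simple graph features derived from metadata,
# then flag the top 5% as "high impact" (a stand-in for DELPHI's threshold).
from collections import defaultdict

# Hypothetical citation edges: (citing paper, cited paper).
citations = [
    ("p1", "p2"), ("p1", "p3"), ("p4", "p2"),
    ("p5", "p2"), ("p5", "p3"), ("p6", "p2"),
]

# Build a tiny knowledge graph as an adjacency map of incoming citations.
in_links = defaultdict(set)
papers = set()
for citing, cited in citations:
    papers.update((citing, cited))
    in_links[cited].add(citing)

def impact_score(paper: str) -> float:
    """Toy stand-in for a learned model: here, just the citation count."""
    return float(len(in_links[paper]))

# Rank papers and keep the top 5% (at least one) as predicted high impact.
ranked = sorted(papers, key=impact_score, reverse=True)
cutoff = max(1, round(len(ranked) * 0.05))
high_impact = ranked[:cutoff]
print(high_impact)  # → ['p2']
```

The real system learns which metadata signals predict future impact rather than relying on raw citation counts; the sketch only shows how a knowledge graph plus a scoring function plus a top-5% cutoff fit together.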
Lucchesi goes on to explain more about how the DELPHI model works and his look into the GitHub repository behind the tool, but you’ll have to read the rest for yourself. Check out Using AI to Identify Research to Invest In to learn more, and read the rest of his blog for more fascinating AI content.