
Every year, trillions of dollars are invested in research funding globally, propelling scientific advances. But where does that money actually end up? Researchers have developed a machine learning tool called Funding the Frontier (FtF) that maps the influence of research grants on publications, patents, policy, clinical trials, and even media coverage – offering a clearer view of how funding actually shapes science and society.
Although previous studies have emphasized the critical role of science funding, they focus largely on grants and scholarly articles, leaving other forms of impact underexamined. FtF aims to help funders, policymakers, university administrators, and researchers see the bigger picture by tracing, in an informed and transparent way, how science moves from funding to innovation, as described in a preprint that has not yet been peer reviewed.
Drawing on one of the largest datasets of its kind, compiled from global sources such as Dimensions, SciSciNet, and Altmetric, FtF links more than 7 million research grants to 140 million scientific articles, 160 million patents, 10.9 million policy documents, 800,000 clinical trials, and 5.8 million news stories – all connected through 1.8 billion citation links.
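To make that kind of linkage concrete, here is a minimal sketch of how a grant-to-output graph could be represented in code. The node kinds, relation labels, and identifiers are purely illustrative assumptions – the preprint's actual data model is not described here.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified schema for a grant-to-output knowledge graph.
# Node kinds and relations are illustrative, not FtF's real data model.

@dataclass
class Node:
    node_id: str
    kind: str  # e.g. "grant", "article", "patent", "policy", "trial", "news"

@dataclass
class ImpactGraph:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: list[tuple[str, str, str]] = field(default_factory=list)  # (src, dst, relation)

    def add_node(self, node_id: str, kind: str) -> None:
        self.nodes[node_id] = Node(node_id, kind)

    def link(self, src: str, dst: str, relation: str) -> None:
        self.edges.append((src, dst, relation))

    def outputs_of(self, grant_id: str) -> list[Node]:
        """Everything directly linked from a grant (papers, patents, ...)."""
        return [self.nodes[dst] for src, dst, _ in self.edges if src == grant_id]

# Toy usage: one grant funds a paper, which a patent later cites.
g = ImpactGraph()
g.add_node("grant:001", "grant")
g.add_node("doi:10.1234/abc", "article")
g.add_node("patent:US999", "patent")
g.link("grant:001", "doi:10.1234/abc", "funded")
g.link("patent:US999", "doi:10.1234/abc", "cites")
print([n.kind for n in g.outputs_of("grant:001")])  # ['article']
```

At FtF's scale, the same idea would of course live in a graph database rather than in-memory lists, but the traversal logic – following citation links outward from a grant – is the same.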
“FtF was developed in close collaboration with real-world decision-makers,” says lead author Yifang Wang from Florida State University. “[It] could catalyze a transition … to a thorough, multidimensional perspective on impact. Decision-makers could observe not just which projects yield papers but also which foster innovation, policy change, health advancements, or public awareness.”
“FtF additionally visualizes funding allocation across various levels: by discipline, institution, gender, and career stage, enabling users to identify who receives funding and where potential disparities may lie,” she notes.
The tool could reshape funding decisions by highlighting high-impact research and improving predictions of which projects will pay off. “[It is] an ambitious and commendable effort to integrate and synthesize a diverse array of data, indicators, and algorithms,” says James Wilsdon from University College London, who was not involved in the research. “What distinguishes this from most scientometric studies is the multitude of different elements amalgamated into a single … framework.”
“The main advantage of the system is an inviting, seemingly intuitive user interface to the unified data sources, which lets users explore scientific outputs, their outcomes, and their impacts,” adds Vincent Traag from Leiden University, who was also not involved in the research.
Concerns that Funding the Frontier might steer funders toward ‘risk-free’ research
However, critics caution that overreliance on metrics and predictive models could skew funding towards ‘safe’ projects and undervalue long-term or curiosity-driven science.
“Could there be a danger of research becoming a self-fulfilling cycle if we base decisions on such algorithms?” asks Wilsdon. “Could decision-making become increasingly risk-averse if it relies on past successes rather than future possibilities?”
Ethicists share this concern, warning that using past achievements as the benchmark for future funding may entrench the status quo. “When such a system is employed [to distribute] funding to research that has, historically, produced [impact], the potential issue is that … it might discourage investment in pioneering research that creates new types of impacts [or] disrupts traditional patterns,” says Philip Brey from the University of Twente in the Netherlands, who was not part of the study. “A significant amount of innovative, groundbreaking, and disruptive research does not immediately yield impact but deserves funding nonetheless.”
AI policy specialists also caution against uncritical trust in FtF’s outputs – accepting results without asking how they were derived or how uncertain they are. Traag argues that we typically scrutinize human decisions closely, whereas algorithmic forecasts are easier to accept at face value. “Yet there remains considerable uncertainty in forecasting which grants will produce which impacts … and people might place excessive trust in the results,” he adds.
Moreover, FtF’s forecasts are based solely on patterns in grant language, making its conclusions inherently incomplete. “This represents a relatively straightforward translation of words into impact without additional and extensive contextual information,” says Traag. “A more fruitful perspective would likely involve situating a grant proposal abstract within a broader context, allowing users to explore … how the proposal extends beyond current knowledge.”
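To see what a purely text-based forecast means in practice, the sketch below trains a toy classifier on grant abstracts. This is not FtF’s model – the abstracts, labels, and choice of TF-IDF plus logistic regression are all assumptions for illustration – but it shows the limitation Traag describes: the model sees only words, not the scientific context behind them.

```python
# Hypothetical toy example: predicting an "impact" label (here, whether a
# grant's output was later cited by a patent) from abstract text alone.
# This is NOT the FtF model; it illustrates a text-only forecast.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "novel catalyst for low-cost hydrogen production",    # toy data
    "longitudinal cohort study of adolescent wellbeing",
    "scalable quantum error correction architectures",
    "archival study of 18th-century trade networks",
]
cited_by_patent = [1, 0, 1, 0]  # toy labels: 1 = output later cited by a patent

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(abstracts, cited_by_patent)

# The model can only echo lexical patterns seen in past grants, which is
# why purely text-based forecasts tend to favour familiar-sounding work.
print(model.predict_proba(["new catalyst for ammonia synthesis"])[0][1])
```

A model like this rewards proposals that sound like previously successful ones – exactly the self-reinforcing dynamic the critics quoted above warn about.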
The consensus is that, in real-world decision-making, tools like FtF should serve only as supplementary aids rather than dictating outcomes. “These kinds of predictive analytics must be approached with care and thoroughly vetted before being embraced and implemented by research funders and policymakers,” says Wilsdon.