A community of sleuths hunting for errors in scientific research has sent shockwaves through some of the world’s most prestigious research institutions, and through the science community at large.

High-profile cases of alleged image manipulation in papers authored by the former president of Stanford University and by leaders at the Dana-Farber Cancer Institute have made national media headlines, and some top science leaders think this could be just the start.

“At the rate things are going, we expect another one of these to come up every few weeks,” said Holden Thorp, the editor-in-chief of the Science family of scientific journals, whose namesake publication is one of the two most influential in the field. 

The sleuths argue their work is necessary to correct the scientific record and prevent generations of researchers from pursuing dead-end topics because of flawed papers. And some scientists say it’s time for universities and academic publishers to reform how they address flawed research. 

“I understand why the sleuths finding these things are so pissed off,” said Michael Eisen, a biologist, the former editor of the journal eLife and a prominent voice of reform in scientific publishing. “Everybody — the author, the journal, the institution, everybody — is incentivized to minimize the importance of these things.” 

For about a decade, science sleuths had been unearthing widespread problems with images in published papers, posting their concerns online but receiving little attention.

That began to change last summer when then-Stanford President Marc Tessier-Lavigne, a neuroscientist, stepped down from his post after scrutiny of alleged image manipulations in studies he helped author and a report criticizing his laboratory culture. Tessier-Lavigne was not found to have engaged in misconduct himself, but members of his lab appeared to have manipulated images in dubious ways, a report from a scientific panel hired to examine the allegations said.

In January, a scathing post from a blogger exposed questionable work from top leaders at the Dana-Farber Cancer Institute, which subsequently asked journals to retract six articles and issue corrections for dozens more. 

In a resignation statement, Tessier-Lavigne noted that the panel did not find that he knew of misconduct and that he never submitted papers he didn’t think were accurate. In a statement from its research integrity officer, Dana-Farber said it took decisive action to correct the scientific record and that image discrepancies were not necessarily evidence an author sought to deceive. 

“We’re certainly living through a moment — a public awareness — that really hit an inflection when the Marc Tessier-Lavigne matter happened and has continued steadily since then, with Dana-Farber being the latest,” Thorp said. 

Now, the long-standing problem is in the national spotlight, and new artificial intelligence tools are only making it easier to spot issues ranging from decades-old errors and sloppy science to images enhanced unethically in photo-editing software.

This heightened scrutiny is reshaping how some publishers are operating. And it’s pushing universities, journals and researchers to reckon with new technology, a potential backlog of undiscovered errors and how to be more transparent when problems are identified. 

This comes at a fraught time in academic halls. Bill Ackman, a hedge fund manager, discussed in a post on X last month weaponizing artificial intelligence to identify plagiarism by leaders at top-flight universities with which he has had ideological differences, raising questions about political motivations in plagiarism investigations. More broadly, public trust in scientists and science has declined steadily in recent years, according to the Pew Research Center.

Eisen said he didn’t think sleuths’ concerns over scientific images had veered into “McCarthyist” territory.

“I think they’ve been targeting a very specific type of problem in the literature, and they’re right — it’s bad,” Eisen said. 

Scientific publishing builds the base of what scientists understand about their disciplines, and it’s the primary way researchers with new findings outline their work for colleagues. Before publication, scientific journals send submissions to outside researchers in the field to vet them and spot errors or faulty reasoning, a process called peer review. Journal editors also screen studies for plagiarism and copyedit them before they’re published.

That system is not perfect and still relies on good-faith efforts by researchers to not manipulate their findings.

Over the past 15 years, scientists have grown increasingly concerned that some researchers are digitally altering images in their papers to skew or emphasize results. Discovering irregularities in images, typically from experiments involving mice, gels or blots, has become a larger priority of scientific journals’ work.

Jana Christopher, an expert on scientific images who works for the Federation of European Biochemical Societies and its journals, said the field of image integrity screening has grown rapidly since she began working in it about 15 years ago. 

At the time, “nobody was doing this and people were kind of in denial about research fraud,” Christopher said. “The common view was that it was very rare and every now and then you would find someone who fudged their results.” 

Today, scientific journals have entire teams dedicated to vetting images and trying to ensure their accuracy. More papers are being retracted than ever, with a record 10,000-plus pulled last year, according to a Nature analysis.

A loose group of scientific sleuths has added outside pressure. Sleuths often discover and flag errors or potential manipulations on the online forum PubPeer, and many receive little or no payment or public recognition for their work.

“To some extent, there is a vigilantism around it,” Eisen said. 

An analysis of comments on more than 24,000 articles posted on PubPeer found that more than 62% were related to image manipulation.

For years, sleuths relied on sharp eyes, keen pattern recognition and an understanding of photo-manipulation tools. In the past few years, rapidly developing artificial intelligence tools that can scan papers for irregularities have supercharged their work.
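The article doesn’t detail how these scanners work, but one common building block for spotting duplicated image regions is perceptual hashing, in which visually similar patches produce similar fingerprints. The Python sketch below is a minimal, hypothetical illustration of that idea only; it is not the method used by any tool named in this story, and the file names, tile size and distance threshold are assumptions invented for the example.

```python
# Toy sketch of duplicate-region detection, the general kind of check
# such tools automate. NOT how Proofig or any named tool works: just a
# minimal "average hash" comparison over image tiles. File names, tile
# size and threshold are invented for illustration.
# Requires Pillow (pip install Pillow).
from PIL import Image

def average_hash(tile, size=8):
    """Shrink a tile to an 8x8 grayscale grid; mark each pixel 1 if it
    is brighter than the tile's mean brightness, else 0."""
    small = tile.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Count differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def tiles(img, tile=128):
    """Yield fixed-size square tiles of an image with their coordinates."""
    w, h = img.size
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield (x, y), img.crop((x, y, x + tile, y + tile))

def flag_duplicates(path_a, path_b, max_dist=4):
    """Print tile pairs across two figures whose hashes nearly match,
    a crude proxy for a region copied from one image into another."""
    img_a, img_b = Image.open(path_a), Image.open(path_b)
    hashes_b = [(pos, average_hash(t)) for pos, t in tiles(img_b)]
    for pos_a, tile_a in tiles(img_a):
        hash_a = average_hash(tile_a)
        for pos_b, hash_b in hashes_b:
            if hamming(hash_a, hash_b) <= max_dist:
                print(f"possible duplicate: {path_a} at {pos_a} ~ {path_b} at {pos_b}")

# flag_duplicates("figure_2b.png", "figure_4c.png")  # hypothetical file names
```

Even this trivial sketch hints at the false-positive problem: uniform background tiles hash alike and would match everywhere, one reason publishers stress that human review remains essential.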

Now, scientific journals are adopting similar technology to try to prevent errors from reaching publication. In January, Science announced that it was using an artificial intelligence tool called Proofig to scan papers that were being edited and peer-reviewed for publication. 

Thorp, the Science editor-in-chief, said the family of six journals added the tool “quietly” into its workflow about six months before that January announcement. Before that, the journals relied on checks by eye to catch these types of problems.

Thorp said Proofig identified several papers late in the editorial process that were not published because of problematic images that were difficult to explain and other instances in which authors had “logical explanations” for issues they corrected before publication.

“The serious errors that cause us not to publish a paper are less than 1%,” Thorp said.

In a statement, Chris Graf, the research integrity director at the publishing company Springer Nature, said his company is developing and testing “in-house AI image integrity software” to check for image duplications. Graf’s research integrity unit currently uses Proofig to help assess articles if concerns are raised after publication. 

Graf said processes vary across Springer Nature’s journals, but some of its publications manually check images for manipulation with Adobe Photoshop tools and look for inconsistencies in the raw data of experiments that visualize cell components and other common experiments.

“While the AI-based tools are helpful in speeding up and scaling up the investigations, we still consider the human element of all our investigations to be crucial,” Graf said, adding that image recognition software is not perfect and that human expertise is required to protect against false positives and negatives. 

No tool will catch every mistake or cheat. 

“There’s a lot of human beings in that process. We’re never going to catch everything,” Thorp said. “We need to get much better at managing this when it happens, as journals, institutions and authors.”

Many science sleuths have grown frustrated after their concerns seemed to be ignored or investigations trickled along slowly without public resolution.

Sholto David, who publicly exposed concerns about Dana-Farber research in a blog post, said he largely “gave up” on writing letters to journal editors about errors he discovered because their responses were so insufficient. 

Elisabeth Bik, a microbiologist and longtime image sleuth, said she has frequently flagged image problems and “nothing happens.” 

Leaving public comments questioning research figures on PubPeer can start a public conversation over questionable research, but authors and research institutions often don’t respond directly to the online critiques. 

While journals can issue corrections or retractions, it’s typically a research institution’s or a university’s responsibility to investigate cases. When cases involve biomedical research supported by federal funding, the federal Office of Research Integrity can investigate. 

Thorp said institutions need to move more swiftly to take responsibility when errors are discovered and to speak plainly and publicly about what happened in order to earn the public’s trust.

“Universities are so slow at responding and so slow at running through their processes, and the longer that goes on, the more damage that goes on,” Thorp said. “We don’t know what would have happened if, instead of launching this investigation, Stanford said, ‘These papers are wrong. We’re going to retract them. It’s our responsibility. But for now, we’re taking the blame and owning up to this.’”

Some scientists worry that image concerns are only scratching the surface of science’s integrity issues — problems in images are simply much easier to spot than data errors in spreadsheets. 

And while policing bad papers and seeking accountability are important, some scientists think those measures would only treat symptoms of the larger problem: a culture that rewards the careers of those who publish the most exciting results, rather than the ones that hold up over time.

“The scientific culture itself does not say we care about being right; it says we care about getting splashy papers,” Eisen said. 

Source: This article originally appeared on NBCNews.com.
