Science journals adopt AI to detect image fraud

Zainab Sa’id, Abuja


The Science family of journals is set to integrate Proofig, an artificial-intelligence-powered image-analysis tool, to identify manipulated images in research submissions across all six of its journals.

In an editorial, the group’s editor-in-chief, Holden Thorp, noted that the research community has grown increasingly vigilant about image manipulation in scientific publications in recent years.

He explained that some unintentional modifications, introduced by experimental techniques such as microscopy, flow cytometry, and western blotting, do not necessarily affect a paper’s conclusions.

“But in rare cases, some are done deliberately to mislead readers,” Thorp noted.

Proofig uses artificial-intelligence algorithms to detect image reuse and duplication, providing a systematic check on the integrity of visual data before publication.
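
Proofig’s internal methods are proprietary, but a rough sense of how automated reuse detection can work is easy to sketch. The example below is illustrative only: the use of perceptual hashing, the ImageHash and Pillow libraries, the `figures/` directory, and the distance threshold are all assumptions for the sketch, not anything Proofig has disclosed. A perceptual hash fingerprints each image so that re-encoded or lightly compressed copies still hash to nearby values:

```python
# Illustrative sketch only; Proofig's actual algorithms are not public.
# Perceptual hashing maps each image to a short fingerprint that survives
# re-encoding and minor compression, so near-duplicates hash close together.
from pathlib import Path

import imagehash          # pip install ImageHash
from PIL import Image     # pip install Pillow

def find_near_duplicates(image_dir: str, max_distance: int = 5):
    """Return pairs of images whose perceptual hashes are close."""
    hashes = {}
    for path in sorted(Path(image_dir).glob("*.png")):
        hashes[path.name] = imagehash.phash(Image.open(path))

    names = list(hashes)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            # Hamming distance between hashes; small distance = likely reuse.
            if hashes[a] - hashes[b] <= max_distance:
                pairs.append((a, b))
    return pairs

if __name__ == "__main__":
    for a, b in find_near_duplicates("figures/"):
        print(f"Possible duplicate: {a} <-> {b}")
```

Perceptual hashes catch straightforward reuse cheaply, but they miss copies that have been rotated or rescaled, which is where feature-based matching comes in.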

Thorp confirmed that Science has been piloting Proofig for several months, and said the trial showed the tool can flag problematic figures before they reach publication.


“Its use will expand to all papers under consideration that present relevant images. This should help identify both honest mistakes and fraudulent activity before a decision is made on publication,” he said.

Previously, Science staff checked images manually; Thorp said that adding AI screening is a “natural next step” towards a faster and more thorough image-validation process.


Science will run Proofig after authors revise a research paper. The tool then produces a report flagging duplications and anomalies, including rotation, scale distortion, and splicing.
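
Catching rotated and rescaled copies of the kind the report flags takes transform-invariant features rather than plain hashes. The sketch below is again an illustration, not Proofig’s disclosed algorithm: OpenCV, the panel file paths, and the inlier threshold are assumptions. It matches ORB keypoints between two figure panels and fits a RANSAC homography, which succeeds only when one panel is geometrically a transformed copy of the other:

```python
# Illustrative sketch only, not Proofig's actual algorithm. ORB keypoints
# are invariant to rotation and scale, and a RANSAC homography fit confirms
# that many matches share one consistent geometric transform.
import cv2          # pip install opencv-python
import numpy as np

def panels_match(path_a: str, path_b: str, min_inliers: int = 25) -> bool:
    """Heuristically decide whether two figure panels share content."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False

    # Match binary descriptors; cross-checking keeps only mutual best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_inliers:
        return False

    # A homography that explains many matches means the panels are related
    # by rotation/scale/translation, i.e. likely the same underlying image.
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return mask is not None and int(mask.sum()) >= min_inliers
```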

The paper’s editor then reviews the findings and decides whether the flagged issues warrant follow-up. If so, the editor contacts the authors to request an explanation.

Thorp noted that during the trial phase, authors “generally provided a satisfactory response”, though some papers were stopped from progressing through the editorial process.

Academic circles have been discussing image manipulation for decades. In the early 2000s, the Journal of Cell Biology’s managing editor, Mike Rossner, implemented an image-vetting policy in response to the rise of digital submissions and image-editing software. Rossner and Ken Yamada published image-vetting guidelines in 2004.

Ten years ago, Enrico Bucci, now an adjunct professor at Temple University, conducted a software analysis of more than 1,300 open-access papers and found that 5.7% contained suspected image manipulation.

Scientific-integrity expert Elisabeth Bik found manipulated images in around 4% of a sample of 20,000 papers, in the course of a decade-long investigation that began in 2016.
