[English version of the article « Et si la reproductibilité de nos données faisait partie de notre évaluation ? »]
It’s that time of year again: young researchers are waiting impatiently (and sometimes apprehensively) for the results of the CNRS, INRAE, and INSERM ‘concours’, the French competitions for permanent academic research positions. We know publication records play a vital role in this recruitment process, but should our engagement in Open Science also be considered?
In the face of the scientific ‘reproducibility crisis’, often considered a result of the pressure to publish high volumes of work very quickly, Sarah Cohen-Boulakia, professor and bioinformatician at the University of Paris-Saclay, suggests a partial solution: we should include candidates’ data reproducibility in grant and hiring evaluation criteria, particularly in bioinformatics. Having personally experienced the frustration of impossible-to-use published tools or data (incomprehensible methods, uncommented code, tools with obsolete dependencies, dead links, etc.), to say nothing of trying to reproduce the actual results, I am in favour of the idea.
And I am not the only one. A recent study revealed that 90% of scientific grant and hiring committee members consider research ‘credibility’ to be a ‘very important’ criterion. The study surveyed 485 biology researchers who had sat on evaluation committees (for grants, hiring, or promotion) over the previous two years. Participants described ‘credible’ research as reliable, well-conceived, thoroughly documented, transparent, and ethical. For comparison, 54% of those surveyed considered ‘research impact’ to be important. However, while 45% of committee members were satisfied with their ability to evaluate research impact, only 38% were satisfied with their ability to identify research credibility. As a result, 57% base their ‘credibility’ evaluations on proxy indicators, such as the general reputation of the candidate or their institute, or the impact factor of the journals in which they publish!
Rather than relying on proxies, considering the degree to which data, code, and methods are openly accessible seems a logical starting point for evaluating the transparency, and by extension the ‘credibility’, of scientific output. However, gauging the extent to which a candidate’s publications align with open science appears to be a particularly difficult task. In the aforementioned study, only 30% of respondents reported being ‘very satisfied’ with their ability to evaluate the open science aspects of candidates’ publications. Perhaps the limited weight given to open science practices in current recruitment processes is linked to this difficulty?
How, then, can we measure the reproducibility of our work? Or, more simply, how might we evaluate a paper’s level of data and methods transparency?
Unfortunately, I do not yet have a good answer to this question, though we are beginning to make progress. PLOS has developed its own measurement system, the ‘Open Science Indicators’, based on six criteria aligned with the FAIR principles (Findable, Accessible, Interoperable, Reusable). However, these indicators apply to journals as a whole and thus do not allow individual articles to be evaluated. At the article level, PLOS also created an experimental ‘Accessible Data’ badge in 2022, which it extended further in 2023. Yet this badge is awarded to any article that includes links in its ‘Data Availability Statement’ section, so its presence tells us neither whether some or all of the data are accessible, nor to what degree. The openness of articles carrying such badges may therefore be very heterogeneous.

Going one step further, the Association for Computing Machinery (ACM) has developed badges that use colour to indicate whether an article’s ‘artifacts’ (its data, code, software, etc.) are openly accessible, and, beyond that, whether the results have been reproduced or even replicated (find out more about Open Science badges here, in French). Alas, these badges are exclusive to ACM journals and thus will not help us evaluate the overall ‘credibility’ of a researcher.
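To make concrete why a presence-of-link badge is such a coarse signal, here is a minimal sketch in Python, using invented example statements (this is not PLOS’s actual screening code): it only detects whether a URL or DOI appears in a Data Availability Statement, and says nothing about whether the shared data are complete or usable.

```python
import re

# Invented Data Availability Statements, for illustration only.
statements = {
    "article_A": "All sequencing data are available at https://doi.org/10.5281/zenodo.0000000",
    "article_B": "Data are available from the corresponding author upon reasonable request.",
    "article_C": "Partial data at https://example.org/supplement; raw reads were not deposited.",
}

# A naive 'is there a link?' check: it detects a URL or DOI,
# not whether the underlying data are complete, documented, or reusable.
link_pattern = re.compile(r"https?://\S+|\b10\.\d{4,9}/\S+")

for article, statement in statements.items():
    has_link = bool(link_pattern.search(statement))
    print(f"{article}: link detected = {has_link}")

# Articles A and C both 'pass' this check even though C shares only partial data,
# which is exactly why the openness of badged articles can be so heterogeneous.
```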
So, what can we do?
It seems to be the ubiquitous answer these days: there is an AI tool for the job. Several ScreenIT tools, for example, have been designed to evaluate the open science aspects of our articles. This time, I have not tested them for you, because a team has already done the work!
Stay tuned: I will report back in my next article! We will see to what extent these new tools could help revolutionize our evaluations…
Caitlin Martin, postdoctoral researcher at the Institut Pasteur
References:
- Sarah Cohen-Boulakia: « On publie trop et trop vite » (“We publish too much, too fast”). Interview by Lucile Veissier, TheMetaNews, February 13, 2026
- Hrynaszkiewicz et al. 2026. A survey of how biology researchers assess credibility when serving on grant and hiring committees. PeerJ 14:e20502 https://doi.org/10.7717/peerj.20502
- Khan et al. 2022. Open science failed to penetrate academic hiring practices: a cross-sectional study. Journal of Clinical Epidemiology 144:136-143 https://doi.org/10.1016/j.jclinepi.2021.12.003