Making "best paper" awards for young researchers more equitable and open.
Research awards can propagate existing biases in academia by rewarding novel results rather than robust and transparent research. They can also contribute to the “Matthew Effect”, whereby already privileged groups accumulate further rewards. Identifying which awards do and do not provide equitable access and evaluation can therefore drive systemic change in how publications, especially those by early-career researchers, are assessed and recognised.
This project aims to survey current practices in awarding “best paper” awards by research journals. Such awards are usually given to early-career researchers. We will evaluate the assessment criteria and the historical biases in the gender composition of past awardees, as inferred from their names. The evaluation will be performed on a sample of awards from highly ranked journals and societies across disciplines. Our findings will be made openly available and disseminated to the research community.
We expect our findings to contribute to a culture change that fosters more equitable and open science.
We invite early- and mid-career researchers across disciplines to contribute to the project.
Overall:
- We welcome researchers from all backgrounds and walks of life to contribute to any Stage (see below) of this project.
- It will be possible to start contributing to this project during an in-person hackathon event at the AIMOS2022 conference, 28-30 November 2022, Melbourne, Australia.
- There will also be a follow-up virtual hackathon (or two) after that to enable broad global participation.
- After this, we will work asynchronously online until we complete all stages of the project.
- If you would like to comment on this project or suggest improvements, feel free to open an issue.
- For more details see our Contribution Guide.
The project has 4 main Stages:
- Stage 1. Protocol:
  - Preparing the protocol.
  - Piloting.
  - Registration.
  - Hackathon preparation.
- Stage 2. Screening:
  - Screening journal lists from 27 Scimago Subject Areas rankings (see the first sketch after this list).
  - Creating the award shortlist.
  - Checking shortlists.
- Stage 3. Data extraction and analysis:
  - Data extraction for included awards.
  - Data cross-checking.
  - Collecting additional data.
  - Preliminary analyses (see the second sketch after this list).
  - Final analyses.
- Stage 4. Reporting and dissemination:
  - Draft report.
  - Contributor feedback.
  - Final report.
  - Sharing data and code in a GitHub repository.
  - Preprint on MetaArXiv.
  - Manuscript.
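To make the Stage 2 screening concrete, here is a minimal sketch of how a shortlist could be built from one downloaded Scimago ranking. It is an illustration only, not the project's agreed protocol: the file name `scimago_ranking.csv` and the cut-off `TOP_N` are assumptions, and while Scimago's CSV exports are semicolon-separated with columns such as `Rank` and `Title`, you should check the header of your own download.

```python
# A minimal sketch of the Stage 2 screening step (assumptions flagged below):
# keep, say, the top 20 journals from one Scimago subject-area ranking.
# Assumes a file `scimago_ranking.csv` downloaded from scimagojr.com,
# which is semicolon-separated with columns including "Rank" and "Title";
# verify the column names against your actual download.
import csv

TOP_N = 20  # cut-off per subject area; a project parameter, not fixed here

with open("scimago_ranking.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f, delimiter=";"))

# Rankings are usually pre-sorted by SJR, but sort defensively by Rank.
rows.sort(key=lambda r: int(r["Rank"]))
shortlist = [r["Title"] for r in rows[:TOP_N]]

for title in shortlist:
    print(title)
```

Repeating this over the 27 Subject Areas rankings and merging the results would yield the candidate journal list to check for awards.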
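For the preliminary analyses in Stage 3, a first pass at the gender composition of past awardees could look like the sketch below. Again, this is an illustration under stated assumptions, not the project's chosen method: the input file `awardees.csv` and its `first_name` column are hypothetical, and the `gender-guesser` package (`pip install gender-guesser`) is one possible name-based inference tool. Name-based inference is noisy, which is one reason the workflow includes a data cross-checking step.

```python
# A minimal sketch (not the project's actual pipeline): tally the apparent
# gender composition of past awardees from an extracted data file.
# Assumes a hypothetical CSV `awardees.csv` with a `first_name` column,
# and uses the `gender-guesser` package purely as an illustration;
# any real analysis would need manual cross-checking.
import csv
from collections import Counter

import gender_guesser.detector as gender

detector = gender.Detector()
counts = Counter()

with open("awardees.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # get_gender returns 'male', 'female', 'mostly_male',
        # 'mostly_female', 'andy' (ambiguous), or 'unknown'
        counts[detector.get_gender(row["first_name"].strip().title())] += 1

total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n} ({n / total:.1%})")
```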
We expect all project contributors to familiarise themselves with and follow our Code of Conduct.
This work is licensed under a Creative Commons Attribution 4.0 International License.