Metrics for automated FAIR software assessment in a disciplinary context. FAIR-IMPACT result released

FAIR-IMPACT recently published 17 metrics for assessing research software against the FAIR Principles for Research Software (FAIR4RS Principles), together with examples of how these might be implemented in one exemplar disciplinary context: the social sciences, through CESSDA.

This deliverable, "Metrics for automated FAIR software assessment in a disciplinary context", is part of Work Package 5 on "Metrics, Certification and Guidelines" within the FAIR-IMPACT project. It builds on the outputs of the RDA/ReSA/FORCE11 FAIR for Research Software Working Group, on existing guidelines and metrics for research software, and on community input from a workshop held at Research Data Alliance Plenary Meeting 20 in Stockholm.

FAIR software can be defined as research software that adheres to the FAIR4RS Principles, and the extent to which a principle has been satisfied can be measured against the criteria in a metric. This work on software metrics was coordinated with FAIR-IMPACT Work Package 4 on "Metadata and Ontologies", in particular the deliverable "Guidelines for recommended metadata standard for research software within EOSC", to ensure that each metric is linked to its recommended metadata properties.

The FAIR-IMPACT project will work to implement the metrics as practical tests by extending existing assessment tools such as F-UJI; this work will be reported in Q2 2024. Feedback will be sought from the community, through webinars and an open request for comments. The information from all these sources will be used to publish a revised version of the metrics.
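To give a flavour of what an automated test might look like, the sketch below builds a request payload for a FAIR assessment service in the style of F-UJI's REST interface. The endpoint URL and field names are assumptions modelled on F-UJI's existing data-assessment API, not something specified in the deliverable; the actual software-metric tests are still to be implemented.

```python
import json

# Hypothetical sketch: F-UJI exposes a REST endpoint for FAIR assessment.
# The endpoint path and payload fields below are assumptions modelled on
# F-UJI's current data-assessment API, not confirmed for software metrics.
FUJI_ENDPOINT = "https://www.f-uji.net/fuji/api/v1/evaluate"  # assumed URL


def build_assessment_request(software_pid: str) -> dict:
    """Build a JSON payload identifying the software to assess."""
    return {
        "object_identifier": software_pid,  # e.g. a DOI or landing-page URL
        "use_datacite": True,               # also harvest DataCite metadata
    }


payload = build_assessment_request("https://doi.org/10.5281/zenodo.10047400")
body = json.dumps(payload)

# An actual submission would POST the payload, e.g. with the requests library:
#   response = requests.post(FUJI_ENDPOINT, json=payload,
#                            headers={"Accept": "application/json"})
print(body)
```

The identifier-plus-options payload mirrors how F-UJI assesses datasets today; extending such a tool to software would mean adding tests that score each FAIR4RS metric from the harvested metadata.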


You can view the metrics here: https://zenodo.org/doi/10.5281/zenodo.10047400
