Benchmarking: Incentives and Best Practices across Scientific Disciplines - Abstracting from Established Evaluations

Tue 12 September 2017, 09:00 to 21:00
Room 111, University of Basel, Kollegienhaus Petersplatz 1, Basel, Switzerland

The ELIXIR EXCELERATE Framework focuses on key aspects of modern research by bringing together method and infrastructure developers with researchers, aiming to provide the scientific community with both novel scientific tools and cutting-edge research results. New methods are typically benchmarked during development against a particular selection of data and competing methods.

Internal benchmarks often suffer from incompleteness in both the test data and the set of competing methods. Independent, community-driven blind benchmarks address these issues and have proven crucial for objectively assessing scientific methods in various communities, attracting substantial commitment from industry.

CASP is the prime example of a community-driven benchmarking activity, with biennial editions since 1994. Since then, many other communities have come together to design and run their own benchmarking activities, e.g. CAFA and the Quest for Orthologs.

The main benefit of recurring independent assessments is that the task composition differs from the data initially used to develop the methods. As the tasks evolve over time, new aspects become crucial to evaluate, reflecting current developments in the community. Because various methods can be compared on the same data set at the same point in time, new methods can demonstrate their superiority far more transparently; past individual benchmarks inherently suffer from their temporal focus on the methods available at that particular time.

This workshop integrates experiences from various community-driven benchmarking efforts, illustrating key common elements. More importantly, these examples show how benchmarking activities bring their members together and foster new interactions and developments.

The program comprises two main parts: (1) talks by speakers from diverse communities reporting on successful recurring benchmarking efforts and (2) plenary discussions to identify common approaches, resulting in refined guidelines for establishing community-based benchmarking.

Further information for this workshop is available at:


Salvador Capella-Gutierrez (

See also: Tools Platform