Universitat Politècnica de Catalunya
UPCommons. Global access to UPC knowledge


Lessons Learned: Recommendations for Establishing Critical Periodic Scientific Benchmarking

Cite as:
hdl:2117/107279

Authors:
Capella-Gutierrez, Salvador
de la Iglesia, Diana
Haas, Juergen
Lourenco, Analia
Fernandez, José M.
Repchevsky, Dmitry
Dessimoz, Christophe
Schwede, Torsten
Notredame, Cedric
Gelpí, Josep Lluís
Valencia, Alfonso
Document type: Working paper
Defense date: 2017
Rights access: Open Access
License: Except where otherwise noted, content on this work is licensed under a Creative Commons license: Attribution-NonCommercial-NoDerivs 3.0 Spain
Project: ELIXIR-EXCELERATE: Fast-track ELIXIR implementation and drive early user exploitation across the life-sciences (EC-H2020-676559)
Abstract
The dependence of life scientists on software has grown steadily in recent years. For many tasks, researchers have to decide which of the available bioinformatics software packages is most suitable for their specific needs. Additionally, researchers should be able to objectively select the software that provides the highest accuracy, the best efficiency, and the highest level of reproducibility when integrated into their research projects. Critical benchmarking of bioinformatics methods, tools, and web services is therefore an essential community service, as well as a critical component of reproducibility efforts. Unbiased and objective evaluations are challenging to set up and can only be effective when built and implemented around community-driven efforts, as demonstrated by the many ongoing community challenges in bioinformatics that followed the success of CASP. Community challenges bring the combined benefits of intense collaboration, transparency, and standards harmonization. Only open systems for the continuous evaluation of methods offer a perfect complement to community challenges, giving larger communities of users, which may extend far beyond the community of developers, a window onto the state of development that they can use for their specific projects. By continuous evaluation systems we mean services that are always available and periodically update their data and/or metrics according to a predefined schedule, keeping in mind that performance must always be interpreted within each research domain. We argue here that technology is now mature enough to bring community-driven benchmarking efforts to a higher level that should allow effective interoperability of benchmarks across related methods. New technological developments make it possible to overcome the limitations of the first experiences in online benchmarking, e.g. EVA. We therefore describe OpenEBench, a novel infrastructure designed to establish a continuous automated benchmarking system for bioinformatics methods, tools, and web services. OpenEBench is being developed to cater for the needs of the bioinformatics community, especially software developers, who need an objective and quantitative way to inform their decisions, as well as the larger community of end users in their search for unbiased and up-to-date evaluations of bioinformatics methods. As such, OpenEBench should soon become a central place for bioinformatics software developers, community-driven benchmarking initiatives, researchers using bioinformatics methods, and funders interested in the results of method evaluations.
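
The notion of a continuous evaluation system, i.e. a service that is always available and recomputes its metrics on a predefined schedule, can be illustrated with a minimal sketch. The Python fragment below is purely hypothetical: the helper names (fetch_reference_data, score) and the tool registry do not correspond to any actual OpenEBench API and merely stand in for domain-specific logic.

```python
import time
from datetime import datetime, timedelta, timezone

# Hypothetical tool registry: names mapped to callables that produce
# predictions for a reference dataset. Placeholders only; this does not
# reflect the real OpenEBench interfaces.
TOOLS = {
    "tool_a": lambda reference: {},
    "tool_b": lambda reference: {},
}

UPDATE_INTERVAL = timedelta(days=30)  # the "predefined schedule"

def fetch_reference_data():
    """Placeholder: pull the latest community-agreed reference dataset."""
    return {"sequences": []}

def score(predictions, reference):
    """Placeholder: compute a domain-specific performance metric."""
    return 0.0

def evaluation_cycle():
    """One periodic update: re-evaluate every tool on fresh reference data."""
    reference = fetch_reference_data()
    results = {name: score(tool(reference), reference)
               for name, tool in TOOLS.items()}
    # Publishing results (e.g. to a public dashboard) would happen here.
    print(datetime.now(timezone.utc).isoformat(), results)

if __name__ == "__main__":
    while True:  # always-available service loop
        evaluation_cycle()
        time.sleep(UPDATE_INTERVAL.total_seconds())
```

The point of the sketch is the loop structure: metrics are not a one-off snapshot but are recomputed against current reference data on every cycle, so performance is always read relative to the current state of each research domain.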
Citation: Capella-Gutierrez, S. [et al.]. "Lessons Learned: Recommendations for Establishing Critical Periodic Scientific Benchmarking". 2017.
URI: http://hdl.handle.net/2117/107279
Collections
  • Life Sciences - Altres [1]

Files:
  • Lessons Learned Recommendations for.pdf (1,244Mb, PDF)
  • 20170828.LessonsLearnt.Benchmarking.Table1.pdf, Supplementary (52,66Kb, PDF)


© UPC. Servei de Biblioteques, Publicacions i Arxius

info.biblioteques@upc.edu
