Show simple item record

dc.contributor.author: Bilalli, Besim
dc.contributor.author: Munir, Rana Faisal
dc.contributor.author: Abelló Gamazo, Alberto
dc.contributor.other: Facultat d'Informàtica de Barcelona
dc.contributor.other: Universitat Politècnica de Catalunya. Doctorat Erasmus Mundus en Tecnologies de la Informació per a la Intel·ligència Empresarial
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació
dc.date.accessioned: 2021-02-15T12:16:51Z
dc.date.available: 2021-11-05T01:32:21Z
dc.date.issued: 2021-01
dc.identifier.citation: Bilalli, B.; Munir, R.; Abelló, A. A framework for assessing the peer review duration of journals: case study in computer science. "Scientometrics", 2021, vol. 126, p. 545-563.
dc.identifier.issn: 1588-2861
dc.identifier.uri: http://hdl.handle.net/2117/339619
dc.description.abstract: In various fields, scientific article publication is a measure of productivity, and on many occasions it is used as a critical factor for evaluating researchers. Therefore, a lot of time is dedicated to writing articles that are then submitted for publication in journals. Nevertheless, the publication process in general, and the review process in particular, tend to be rather slow. This is the case, for instance, for computer science (CS) journals. Moreover, the process typically lacks transparency: information about the duration of the review process is at best provided in an aggregated manner, if made available at all. In this paper, we develop a framework as a step towards providing more reliable data on review duration. Based on this framework, we implement a tool, journal response time (JRT), that automatically extracts the review process data and helps researchers find the average response times of journals, which can be used to study the duration of CS journals' peer review process. The information is extracted as metadata from the published articles, when available. This study reveals that the response times publicly provided by publishers differ from the actual values obtained by JRT (e.g., for ten selected journals the average duration reported by publishers deviates by more than 500% from the actual average value calculated from the data inside the articles), which we suspect stems from the fact that, when calculating the aggregated values, publishers also consider the review time of rejected articles (including quick desk rejections that do not require reviewers).
dc.format.extent: 19 p.
dc.language.iso: eng
dc.publisher: Springer Nature
dc.subject: Àrees temàtiques de la UPC::Informàtica
dc.subject.lcsh: Academic writing -- Evaluation
dc.subject.lcsh: Computer science -- Periodicals
dc.subject.other: Peer review process
dc.subject.other: Review process duration
dc.subject.other: Review process quality
dc.title: A framework for assessing the peer review duration of journals: case study in computer science
dc.type: Article
dc.subject.lemac: Informàtica -- Revistes
dc.subject.lemac: Articles de revistes -- Avaluació
dc.contributor.group: Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering
dc.identifier.doi: 10.1007/s11192-020-03742-9
dc.description.peerreviewed: Peer Reviewed
dc.relation.publisherversion: https://link.springer.com/article/10.1007/s11192-020-03742-9
dc.rights.access: Open Access
local.identifier.drac: 30466294
dc.description.version: Postprint (author's final draft)
local.citation.author: Bilalli, B.; Munir, R.; Abelló, A.
local.citation.publicationName: Scientometrics
local.citation.volume: 126
local.citation.startingPage: 545
local.citation.endingPage: 563
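
The abstract describes extracting review process dates from article metadata and averaging the response times per journal. The following is a minimal sketch of that kind of computation, not the actual JRT implementation: the journal names, dates, field names, and date format below are illustrative assumptions.

from datetime import datetime
from statistics import mean
from collections import defaultdict

# Illustrative records of the kind a metadata extractor might return.
# Journal names and dates are made-up examples, not real data.
articles = [
    {"journal": "Journal A", "received": "2020-01-10", "accepted": "2020-07-02"},
    {"journal": "Journal A", "received": "2020-03-05", "accepted": "2020-11-20"},
    {"journal": "Journal B", "received": "2020-02-14", "accepted": "2020-05-01"},
]

def review_duration_days(received: str, accepted: str) -> int:
    """Days between submission ("received") and acceptance, assuming ISO dates."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(accepted, fmt) - datetime.strptime(received, fmt)).days

# Group durations by journal and report the average response time.
durations = defaultdict(list)
for art in articles:
    durations[art["journal"]].append(review_duration_days(art["received"], art["accepted"]))

for journal, days in durations.items():
    print(f"{journal}: average review duration of {mean(days):.0f} days over {len(days)} articles")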

