
Article Information

  • Title: AI-assisted peer review
  • Authors: Alessandro Checco; Lorenzo Bracciale; Pierpaolo Loreti
  • Journal: Humanities and Social Sciences Communications
  • Electronic ISSN: 2662-9992
  • Year: 2021
  • Volume: 8
  • Issue: 1
  • Pages: 1-11
  • DOI: 10.1057/s41599-020-00703-8
  • Language: English
  • Publisher: Springer
  • Abstract: The scientific literature peer-review workflow is under strain because of the constant growth of submission volume. One response is to make the initial screening of submissions less time intensive; reducing screening and review time would save millions of working hours and potentially boost academic productivity. Many platforms have already started to use automated screening tools to detect plagiarism and failure to respect format requirements, and some tools even attempt to flag the quality of a study or summarise its content to reduce reviewers' load. Recent advances in artificial intelligence (AI) create the potential for (semi-)automated peer-review systems, in which potentially low-quality or controversial studies could be flagged and reviewer-document matching could be performed automatically. However, such approaches raise ethical concerns, particularly around bias and the extent to which AI systems may replicate it. Our main goal in this study is to discuss the potential, pitfalls, and uncertainties of using AI to approximate or assist human decisions in the quality assurance and peer-review process for research outputs. We design an AI tool and train it on 3300 papers from three conferences, together with their review evaluations. We then test the AI's ability to predict the review score of a new, unobserved manuscript using only its textual content. We show that such techniques can reveal correlations between the decision process and other quality proxy measures, uncovering potential biases in the review process. Finally, we discuss the opportunities, but also the potential unintended consequences, of these techniques in terms of algorithmic bias and ethical concerns.
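The core task the abstract describes, predicting a review score for an unseen manuscript from its textual content alone, can be sketched with a toy bag-of-words baseline. This is a minimal illustration, not the authors' model: the scoring scale, vocabulary, and training examples below are invented for the sketch and have nothing to do with the paper's 3300-paper corpus.

```python
# Toy sketch of text-based review-score prediction: each word is assigned
# the mean score of the training texts it appears in, and an unseen
# manuscript is scored as the mean over its known words. A real system
# would use a trained ML model; all data here is synthetic.
from collections import defaultdict

def train(corpus):
    """corpus: list of (text, review_score) pairs -> per-word mean scores."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for text, score in corpus:
        for word in set(text.lower().split()):
            totals[word] += score
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

def predict(word_scores, text, default=3.0):
    """Score an unseen text as the mean of its known words' averages."""
    known = [word_scores[w] for w in set(text.lower().split()) if w in word_scores]
    return sum(known) / len(known) if known else default

# Synthetic training data (illustrative scores on a 1-5 scale).
training = [
    ("rigorous evaluation with strong baselines", 4.5),
    ("novel method rigorous ablation study", 4.0),
    ("incremental work weak evaluation", 2.0),
    ("weak baselines incremental contribution", 1.5),
]
model = train(training)
score = predict(model, "rigorous method with strong ablation")
print(round(score, 2))  # → 4.25
```

Even this trivial baseline hints at the bias risk the abstract raises: the model rewards surface vocabulary correlated with past scores, so any bias in historical reviews is reproduced in its predictions.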