Article Information

  • Title: WER-BERT: Automatic WER Estimation with BERT in a Balanced Ordinal Classification Paradigm
  • Authors: Akshay Krishna Sheshadri; Anvesh Rao Vijjini; Sukhdeep Kharbanda
  • Venue: Conference of the European Chapter of the Association for Computational Linguistics (EACL)
  • Year: 2021
  • Volume: 2021
  • Pages: 3661-3672
  • DOI: 10.18653/v1/2021.eacl-main.320
  • Language: English
  • Publisher: ACL Anthology
  • Abstract: Automatic Speech Recognition (ASR) systems are evaluated using Word Error Rate (WER), which is computed from the number of word-level errors between the ground-truth transcript and the ASR system's transcription. This calculation, however, requires manual transcription of the speech signal to obtain the ground truth. Since transcribing audio signals is a costly process, Automatic WER Evaluation (e-WER) methods have been developed to predict the WER of a speech system automatically, relying only on the transcription and the speech signal features. While WER is a continuous variable, previous works have shown that posing e-WER as a classification problem is more effective than regression. However, when converted to a classification setting, these approaches suffer from heavy class imbalance. In this paper, we propose a new balanced paradigm for e-WER in a classification setting. Within this paradigm, we also propose WER-BERT, a BERT-based architecture with speech features for e-WER. Furthermore, we introduce a distance loss function to tackle the ordinal nature of e-WER classification. The proposed approach and paradigm are evaluated on the Librispeech dataset and a commercial (black box) ASR system, Google Cloud's Speech-to-Text API. The results and experiments demonstrate that WER-BERT establishes a new state-of-the-art in automatic WER estimation.
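The WER metric the abstract builds on is the word-level edit distance between the reference and the hypothesis, normalized by the reference length. A minimal sketch of this standard computation (the function name `wer` and the dynamic-programming layout are illustrative, not from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference length,
    computed as a word-level Levenshtein distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

As the abstract notes, obtaining `reference` requires manual transcription; e-WER methods such as WER-BERT aim to predict this ratio without it.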