Publisher: Vilnius University, University of Latvia, Latvia University of Agriculture, Institute of Mathematics and Informatics of University of Latvia
Abstract: In this paper, we describe a tool for debugging the output and attention weights of
neural machine translation (NMT) systems and for improved estimations of confidence about the
output based on the attention. We take a closer look at how it handles output from transformer-based
NMT models. Its purpose is to help researchers and developers find weak and faulty
translations that their NMT systems produce without the need for reference translations. We
present a demonstration website of our tool with examples of good and bad translations: http://attention.lielakeda.lv.
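The idea of scoring a translation from its attention weights, without reference translations, can be illustrated with a minimal sketch. The function below is a hypothetical heuristic, not the paper's exact formula: it penalises source tokens whose total received attention deviates from 1 (under- or over-translation) and target tokens whose attention distribution is very diffuse.

```python
import numpy as np

def attention_confidence(attn):
    """Heuristic confidence score from an NMT attention matrix.

    attn: 2-D array of shape (target_len, source_len); each row is one
    output token's attention distribution over the source tokens.
    Illustrative metric only (hypothetical, not the tool's actual one):
    higher scores mean sharper, better-covered attention.
    """
    attn = np.asarray(attn, dtype=float)
    # Coverage deviation: total attention each source token received
    # should be close to 1; squared deviations are penalised.
    coverage = attn.sum(axis=0)
    cdp = -np.log(1.0 + (1.0 - coverage) ** 2).sum()
    # Dispersion penalty: high entropy per output token means the
    # model attended "everywhere at once" for that token.
    eps = 1e-12
    entropy = -(attn * np.log(attn + eps)).sum(axis=1)
    dispersion = -entropy.mean()
    # Normalise by output length so scores are comparable across sentences.
    return (cdp + dispersion) / attn.shape[0]
```

For a sharply aligned sentence (near one-to-one attention) the score is close to 0, while uniform, unfocused attention yields a clearly lower score, which is the kind of signal a debugging tool can surface without any reference translation.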