ICLR 2018


Workshop

A differentiable BLEU loss. Analysis and first results

Noe Casas · Marta R. Costa-jussà · José A. R. Fonollosa

East Meeting Level 8 + 15 #19

In natural language generation tasks, such as neural machine translation and image captioning, there is usually a mismatch between the optimized loss and the de facto evaluation criterion: token-level maximum likelihood versus corpus-level BLEU score. This article tries to reduce this gap by defining differentiable computations of the BLEU and GLEU scores. We test this approach on simple tasks, obtaining valuable lessons about its potential applications and its pitfalls, chiefly that these loss functions push each token in the hypothesis sequence toward the average of the tokens in the reference, resulting in a poor training signal.
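To illustrate the general idea (this is a minimal sketch, not the authors' exact formulation), a BLEU-style n-gram precision can be made differentiable by replacing discrete hypothesis tokens with the model's softmax distributions and computing expected, clipped token matches against the reference. The function and tensor names below (`soft_unigram_precision`, `probs`, `ref`) are illustrative assumptions.

```python
import torch

def soft_unigram_precision(probs: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Differentiable stand-in for BLEU's modified 1-gram precision.

    probs: (hyp_len, vocab) softmax outputs of the model.
    ref:   (ref_len,) integer reference token ids.
    """
    vocab = probs.size(1)
    # Expected count of each vocabulary item in the hypothesis.
    hyp_counts = probs.sum(dim=0)                                  # (vocab,)
    # Exact count of each vocabulary item in the reference.
    ref_counts = torch.zeros(vocab).scatter_add_(
        0, ref, torch.ones_like(ref, dtype=torch.float))           # (vocab,)
    # Clipped matches, as in BLEU's modified precision.
    matches = torch.min(hyp_counts, ref_counts).sum()
    # Divide by the total (expected) number of hypothesis unigrams.
    return matches / probs.size(0)

# Toy usage: 3 predicted positions over a 5-word vocabulary, reference "1 2 2".
logits = torch.randn(3, 5, requires_grad=True)
probs = torch.softmax(logits, dim=-1)
ref = torch.tensor([1, 2, 2])
loss = -soft_unigram_precision(probs, ref)  # maximize soft precision
loss.backward()
```

Note that the gradient of such a loss rewards any position whose distribution places mass on any reference token, which hints at the pitfall described in the abstract: each hypothesis position is pushed toward a mixture (average) of the reference tokens rather than toward a specific one.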
