Dual Language Models: Balancing sample-efficiency and overfitting resilience
Abstract
We combine autoregressive and masked-diffusion training objectives without any architectural modifications, resulting in flexible models that outperform standard single-objective models in both the autoregressive and masked-diffusion settings. Autoregressive language modeling has been a popular approach, partly because of its sample efficiency; however, this efficiency comes at the cost of susceptibility to overfitting. Masked-diffusion language models, on the other hand, are less sample-efficient to train but more resilient to overfitting. In this work, we demonstrate that dual-objective training achieves the best of both worlds. To derive the optimal ratio between the masked-diffusion and autoregressive objectives, we train and evaluate 50 language models under varying levels of data repetition. We show that combining both objectives is optimal in all evaluated settings and that the optimal ratio is similar whether targeting autoregressive or masked-diffusion downstream performance.
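As a minimal sketch (the abstract does not specify how the two objectives are weighted), one plausible formulation writes the dual objective as a convex combination of the per-token autoregressive and masked-token losses. Here rho denotes the masked-diffusion fraction, x_{<t} the causal prefix, and m the set of masked positions; these symbols are our own illustrative notation, not the paper's.

\mathcal{L}_{\mathrm{AR}}(\theta) = -\,\mathbb{E}_{x}\!\left[\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})\right]

\mathcal{L}_{\mathrm{MD}}(\theta) = -\,\mathbb{E}_{x,\,m}\!\left[\frac{1}{|m|}\sum_{t \in m} \log p_\theta(x_t \mid x_{\setminus m})\right]

\mathcal{L}_{\mathrm{dual}}(\theta) = \rho\,\mathcal{L}_{\mathrm{MD}}(\theta) + (1-\rho)\,\mathcal{L}_{\mathrm{AR}}(\theta), \qquad \rho \in [0, 1]

The same ratio could instead be realized by routing a fraction rho of training batches to the masked-diffusion objective rather than by weighting the losses; the abstract leaves this implementation choice open.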