Virtual presentation / poster accept

Unsupervised Learning for Combinatorial Optimization Needs Meta Learning

Haoyu Wang · Pan Li

Keywords: [ combinatorial optimization ] [ graph neural networks ] [ unsupervised learning ] [ meta learning ] [ Unsupervised and Self-supervised learning ]


Abstract:

A general framework of unsupervised learning for combinatorial optimization (CO) is to train a neural network whose output gives a problem solution, by directly optimizing the CO objective. Despite some advantages over traditional solvers, current frameworks optimize the average performance over a distribution of historical problem instances, which misaligns with the actual goal of CO: finding a good solution to every future encountered instance. With this observation, we propose a new objective of unsupervised learning for CO, where the goal of learning is to search for a good initialization for future problem instances rather than to give direct solutions. We propose a meta-learning-based training pipeline for this new objective. Our method achieves good empirical performance. We observe that even the initial solution given by our model before fine-tuning can significantly outperform the baselines under various evaluation settings, including evaluation across multiple datasets and cases with large shifts in problem scale. We conjecture that this is because meta-learning-based training keeps the model only loosely tied to the local optimum of each training instance, while making it more adaptive to changes in the optimization landscape across instances.
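The idea in the abstract, learning an initialization that is then fine-tuned on each instance's relaxed CO objective, can be illustrated with a drastically simplified sketch. The paper trains a graph neural network; here, as a stand-in assumption, the "model" is just a vector of per-node logits shared across same-sized instances, the CO problem is a relaxed max-cut on random graphs, and the meta-update is a first-order (Reptile-style) step rather than the authors' actual pipeline. All names and hyperparameters below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10  # toy instance size: every instance has n nodes

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def cut_value(A, x):
    # Relaxed max-cut objective: expected cut size when node i is
    # placed on side 1 independently with probability x[i].
    return float(np.sum(A * np.outer(x, 1.0 - x)))

def sample_instance():
    # A random directed adjacency matrix as a toy "problem instance".
    A = (rng.random((n, n)) < 0.3).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def fine_tune(A, theta, steps=30, lr=0.2):
    # Per-instance adaptation: gradient ascent on the relaxed CO
    # objective, starting from the (meta-learned) initialization theta.
    theta = theta.copy()
    for _ in range(steps):
        x = sigmoid(theta)
        gx = A @ (1.0 - x) - A.T @ x      # d(cut)/dx
        theta += lr * gx * x * (1.0 - x)  # chain rule through sigmoid
    return theta

# Meta-training: instead of optimizing average solution quality,
# move the initialization toward each instance's adapted parameters
# (first-order Reptile update, used here as a simple stand-in).
theta = np.zeros(n)
for _ in range(200):
    A = sample_instance()
    adapted = fine_tune(A, theta)
    theta += 0.1 * (adapted - theta)
```

At test time, a fresh instance is solved by running `fine_tune` from the meta-learned `theta`; the abstract's observation is that in their full method even the solution decoded from the initialization, before any fine-tuning, is already competitive.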