

Virtual presentation / poster accept

Globally Optimal Training of Neural Networks with Threshold Activation Functions

Tolga Ergen · Halil Gulluk · Jonathan Lacotte · Mert Pilanci

Keywords: [ Optimization ] [ threshold activation ] [ Lasso ] [ quantization ] [ convex optimization ] [ neural networks ] [ binary activation ]


Abstract:

Threshold activation functions are highly preferable in neural networks due to their efficiency in hardware implementations. Moreover, their mode of operation is more interpretable and resembles that of biological neurons. However, traditional gradient-based algorithms such as gradient descent cannot be used to train the parameters of neural networks with threshold activations, since the activation function has zero gradient everywhere except at a single non-differentiable point. To this end, we study weight-decay-regularized training problems of deep neural networks with threshold activations. We first show that, provided the last hidden layer is wider than a certain threshold, regularized deep threshold network training problems can be equivalently formulated as a standard convex optimization problem that parallels the LASSO method. We also derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network. We corroborate our theoretical results with various numerical experiments.
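The failure mode the abstract describes can be seen directly: a threshold (unit-step) activation is piecewise constant, so its derivative is zero everywhere away from the jump at the origin, and gradient descent receives no signal through it. The sketch below (illustrative only; not code from the paper) checks this with a numerical derivative:

```python
import numpy as np

def threshold(x):
    # Unit-step (threshold) activation: outputs 1 if x >= 0, else 0.
    return (np.asarray(x) >= 0).astype(float)

# Away from the jump at x = 0, the activation is locally constant,
# so a central-difference estimate of its derivative is exactly zero.
x = 1.5
eps = 1e-6
num_grad = (threshold(x + eps) - threshold(x - eps)) / (2 * eps)
print(num_grad)  # 0.0 -- no gradient signal for gradient descent
```

Because every weight upstream of such an activation receives this zero gradient via the chain rule, backpropagation cannot update them, which motivates the convex (LASSO-style) reformulation the paper develops instead.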
