The literature has shown how to optimize and analyze the parameters of different types of neural networks using mixed integer linear programs (MILPs). Building on these developments, this work presents such an approach for McCulloch-Pitts and Rosenblatt neurons. Because the original formulation involves a step function, it is not differentiable; nevertheless, the parameters of such neurons, and of their concatenation into a shallow neural network, can be optimized with a mixed integer linear program. The main contribution of this paper is to additionally enforce sparsity constraints on the weights and activations, as well as on the number of neurons used. Several experiments demonstrate that such constraints effectively prevent overfitting in neural networks.
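To make the formulation concrete, below is a minimal sketch of training a single step-function neuron as a MILP with an L0-style sparsity budget on the weights. It uses the PuLP modeling library and a standard big-M encoding; the variable names (w, b, z, e, s), the toy AND dataset, and the constants M, eps, C, k are illustrative assumptions, not the paper's exact formulation.

import pulp

# Toy data: 4 samples, 2 features, binary labels (the AND function).
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
y = [0, 0, 0, 1]
n, d = len(X), len(X[0])
M, eps, C, k = 10.0, 1e-3, 1.0, 2  # big-M, step margin, weight bound, sparsity budget

prob = pulp.LpProblem("perceptron_milp", pulp.LpMinimize)
w = [pulp.LpVariable(f"w{j}", lowBound=-C, upBound=C) for j in range(d)]
b = pulp.LpVariable("b", lowBound=-C, upBound=C)
z = [pulp.LpVariable(f"z{i}", cat=pulp.LpBinary) for i in range(n)]  # step output
e = [pulp.LpVariable(f"e{i}", cat=pulp.LpBinary) for i in range(n)]  # misclassification
s = [pulp.LpVariable(f"s{j}", cat=pulp.LpBinary) for j in range(d)]  # weight support

prob += pulp.lpSum(e)  # objective: minimize the number of misclassified samples

for i in range(n):
    pre = pulp.lpSum(w[j] * X[i][j] for j in range(d)) + b
    # Big-M link between the pre-activation and the binary step output z[i]:
    # z[i] = 0 forces pre <= 0; z[i] = 1 forces pre >= eps.
    prob += pre <= M * z[i]
    prob += pre >= eps - M * (1 - z[i])
    # e[i] >= |z[i] - y[i]|
    prob += z[i] - y[i] <= e[i]
    prob += y[i] - z[i] <= e[i]

for j in range(d):
    # w[j] may be nonzero only if s[j] = 1 ...
    prob += w[j] <= C * s[j]
    prob += -C * s[j] <= w[j]
prob += pulp.lpSum(s) <= k  # ... and at most k weights may be nonzero.

prob.solve()
print([v.value() for v in w], b.value())

The same big-M pattern extends to a shallow network by chaining one such block per neuron, with the binary activations of one layer feeding the linear pre-activations of the next.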
Optimization plays a significant role in almost every field of applied sciences (e.g., signal proces...
Modern machine learning techniques take advantage of the exponentially rising computing power in n...
In this paper, we address the challenging task of simultaneously optimizing (i) the weights of a neu...
Recent work has shown potential in using Mixed Integer Programming (MIP) solvers to optimize certain...
In recent years, constrained sparsity maximization problems have received tremendous attention in the con...
In previous work we showed that hard-limit multilayer neural networks have more computational power...
Recent work has shown potential in using Mixed Integer Programming (MIP) solvers to optimize certain...
Sparsifying deep neural networks is of paramount interest in many areas, espec...
We study optimization problems where the objective function is modeled through feedforward neural ne...
In deep learning, fine-grained N:M sparsity reduces the data footprint and bandwidth of a General Ma...
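As a point of reference for what N:M sparsity means in practice, here is a minimal NumPy sketch of magnitude-based 2:4 pruning: in every group of M = 4 consecutive weights, the N = 2 smallest-magnitude entries are zeroed. The flattened row-major grouping and the function name prune_n_m are illustrative assumptions, not the cited work's method.

import numpy as np

def prune_n_m(w, n=2, m=4):
    # Assumes w.size is divisible by m; groups m consecutive weights per row.
    flat = w.reshape(-1, m)
    idx = np.argsort(np.abs(flat), axis=1)        # ascending by magnitude
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, idx[:, : m - n], False, axis=1)  # drop the m-n smallest
    return (flat * mask).reshape(w.shape)

w = np.random.randn(8, 8)
w_sparse = prune_n_m(w)
# Every group of 4 consecutive weights now has at most 2 nonzero entries.
assert (np.count_nonzero(w_sparse.reshape(-1, 4), axis=1) <= 2).all()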
Convex $\ell_1$ regularization using an infinite dictionary of neurons has been suggested for constr...
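For orientation, a generic form of such a convex program from the infinite-width literature (not necessarily the cited paper's exact objective) optimizes a signed measure $\mu$ over neuron weights $w$, with a total-variation penalty playing the role of $\ell_1$ regularization:

$$\min_{\mu}\ \sum_{i=1}^{n} \ell\!\Big(y_i,\ \int \sigma\big(\langle w, x_i\rangle\big)\, d\mu(w)\Big) \;+\; \lambda\, \|\mu\|_{\mathrm{TV}},$$

which reduces to ordinary $\ell_1$-regularized training when $\mu$ is supported on finitely many neurons.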
This work investigates Sparse Neural Networks, which are artificial neural information processing sy...
Traditionally, optimizing the structure of a feed-forward neural network is time-consuming and it ne...
Deep learning has been empirically successful in recent years thanks to the extremely over-parameter...
Deep neural networks have relieved human experts of a great deal of the burden of feature en...