Orthogonal transformations have driven many great achievements in signal processing: they simplify computation and stabilize convergence during parameter training. More recently, researchers have introduced orthogonality into machine learning and obtained encouraging results. This thesis proposes three new orthogonal-constraint algorithms based on a stochastic version of an SVD-based cost, suited to training the large-scale weight matrices of convolutional neural networks. We observe better performance in comparison with other orthogonality-enforcing algorithms for convolutional neural networks.
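The abstract does not spell out the stochastic SVD-based cost itself, so the following is only a minimal sketch of the general idea it builds on: penalizing a convolutional layer's deviation from orthogonality during standard stochastic gradient training. It uses the common soft surrogate ||W Wᵀ − I||²_F on the flattened kernel rather than the thesis's algorithm; the function `soft_orthogonality_penalty`, the model, and the strength `lam` are all illustrative assumptions, not part of the thesis.

```python
import torch
import torch.nn as nn

def soft_orthogonality_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Frobenius penalty ||W Wᵀ - I||_F² on a flattened conv kernel.

    A 4-D conv weight (out_ch, in_ch, kH, kW) is reshaped to a
    (out_ch, in_ch*kH*kW) matrix before measuring its deviation from
    row-orthonormality. This is a common soft surrogate for an
    orthogonality constraint, not the thesis's SVD-based cost.
    """
    W = weight.reshape(weight.shape[0], -1)
    gram = W @ W.t()
    eye = torch.eye(gram.shape[0], device=W.device)
    return torch.linalg.norm(gram - eye, ord="fro") ** 2

# Illustrative training step: task loss plus orthogonality penalty
# on every convolutional layer, optimized with plain SGD.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
lam = 1e-4  # regularization strength (illustrative value)

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        loss = loss + lam * soft_orthogonality_penalty(m.weight)
loss.backward()
optimizer.step()
```

Because the penalty enters the mini-batch loss, each SGD step only nudges the weights toward orthogonality rather than projecting onto the constraint set exactly, which is what makes such soft formulations cheap enough for large-scale matrices.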