In the context of kernel machines, polynomial and Fourier features are commonly used to provide a nonlinear extension to linear models by mapping the data to a higher-dimensional space. Unless one considers the dual formulation of the learning problem, which renders exact large-scale learning infeasible, the tensor-product structure of these features causes the number of model parameters to grow exponentially with the dimensionality of the data, making high-dimensional problems intractable. One possible approach to circumvent this exponential scaling is to exploit the tensor structure present in the features by constraining the model weights to be an underparametrized tensor network. In this paper we quantize, i.e., further tensorize, polynomial and Fourier features.
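To make the scaling argument concrete, the following is a minimal NumPy sketch, not the paper's implementation, of a tensor-network-constrained model over product features: each input dimension gets a small feature vector, the full feature map is their Kronecker product with m^D entries, and constraining the weights to a tensor train lets the model be evaluated by contracting small cores without ever forming that exponential vector. The polynomial feature map, rank, and initialization are illustrative assumptions; quantization in the paper's sense would further reshape each feature mode into smaller factors.

```python
import numpy as np

def poly_features(x_d, m):
    """Per-dimension polynomial feature map: [1, x, x^2, ..., x^(m-1)]."""
    return x_d ** np.arange(m)

def tt_model_eval(x, cores):
    """Evaluate f(x) = <W, phi(x)> with W constrained to a tensor train.

    phi(x) is the Kronecker product of the D per-dimension feature
    vectors and has m**D entries; the contraction below never forms it.
    cores[d] has shape (r_prev, m, r_next), with boundary ranks r_0 = r_D = 1.
    """
    v = np.ones((1,))  # running contraction vector, never larger than the TT-rank
    for d, G in enumerate(cores):
        z = poly_features(x[d], G.shape[1])   # (m,) features for dimension d
        v = np.einsum('i,imj,m->j', v, G, z)  # absorb core d and its features
    return v.item()

# Hypothetical setup: D = 8 inputs, m = 4 features per dimension, TT-rank 3.
rng = np.random.default_rng(0)
D, m, r = 8, 4, 3
ranks = [1] + [r] * (D - 1) + [1]
cores = [rng.standard_normal((ranks[d], m, ranks[d + 1])) * 0.3
         for d in range(D)]

x = rng.uniform(-1, 1, size=D)
print(tt_model_eval(x, cores))  # O(D * m * r^2) work vs m**D = 65536 dense weights
```

Contracting left to right keeps the intermediate vector at size at most r, which is what replaces the exponential parameter count with a linear one in D.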