Over the past few years, quantization has shown great and consistent success in compressing high-dimensional data and over-parameterized models. This dissertation focuses on theoretical guarantees and applications of quantization algorithms for fast binary embeddings (FBEs), random Fourier features (RFFs), and neural networks (NNs). Chapter 1 presents an introduction to quantization and background information for topics covered by later chapters. In Chapter 2, we introduce a novel fast binary embedding algorithm that transforms data points from high-dimensional space into low-dimensional binary sequences. We prove that the $\ell_2$ distances among original data points can be recovered by the $\ell_1$ norm on binary embeddings and its associ...
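To make the distance-preservation claim above concrete, here is a minimal sketch of a classical sign-based binary embedding in Python/NumPy. It is not the dissertation's fast binary embedding algorithm: it uses a dense Gaussian projection and no dithering, so the normalized $\ell_1$ distance between codes concentrates around the angular distance between points rather than the full $\ell_2$ distance. Function names and parameter choices are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_embed(x, A):
    """Classical sign-based binary embedding: q = sign(A x) in {-1, +1}^m."""
    return np.sign(A @ x)

def angular_distance_from_codes(qx, qy):
    """Normalized l1 distance between two codes; for a Gaussian A this
    concentrates around theta(x, y) / pi, the angular distance."""
    return np.abs(qx - qy).sum() / (2 * len(qx))

d, m = 64, 20000                      # ambient dimension, code length (toy values)
A = rng.standard_normal((m, d))       # dense Gaussian; an FBE would use a fast structured transform
x = rng.standard_normal(d)
y = rng.standard_normal(d)

est = angular_distance_from_codes(binary_embed(x, A), binary_embed(y, A))
true = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y))) / np.pi
print(f"estimated {est:.3f}  vs  true angular distance {true:.3f}")
```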
In this paper, we address the problem of reducing the memory footprint of conv...
We tackle the problem of producing compact models, maximizing their accuracy f...
Robust quantization improves the tolerance of networks for various implementations, allowing reliabl...
This thesis explores the topic of quantization in the context of data science and digital signal pro...
While neural networks have been remarkably successful in a wide array of applications, implementing ...
We study two problems from mathematical signal processing. First, we consider the problem of approximate...
We study the dynamics of gradient descent in learning neural networks for classification problems. U...
Artificial Neural Networks (NNs) can effectively be used to solve many classification and regression...
Quantization of deep neural networks is a common way to optimize the networks for deployment on ener...
Quantized neural networks (QNNs), which use low bitwidth numbers for representing parameters and per...
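As a concrete illustration of the low-bitwidth parameter representation the two preceding abstracts refer to, the sketch below applies plain symmetric uniform quantization to a weight tensor. It is a generic baseline under an assumed bitwidth, not the specific quantization scheme of either cited work.

```python
import numpy as np

def uniform_quantize(w, bits=4):
    """Symmetric uniform quantization of a weight tensor to `bits` bits.

    Returns integer codes, the scale, and the dequantized (reconstructed) weights.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for signed 4-bit codes
    scale = np.max(np.abs(w)) / qmax           # per-tensor scale (per-channel is also common)
    codes = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale, codes.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 128)).astype(np.float32)
codes, scale, w_hat = uniform_quantize(w, bits=4)
print("max abs error:", np.max(np.abs(w - w_hat)))   # bounded by about scale / 2
```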