This paper investigates theoretical properties of subsampling and hashing as tools for approximate Euclidean norm-preserving embeddings of vectors with (unknown) additive Gaussian noise. Such embeddings are sometimes called Johnson-Lindenstrauss embeddings after the celebrated lemma of that name. Previous work shows that, as sparse embeddings, the success of subsampling and hashing depends closely on the $l_\infty$-to-$l_2$ ratio of the vector to be mapped. This paper shows that the presence of noise removes this constraint in high dimensions; in other words, sparse embeddings such as subsampling and hashing, with embedding dimensions comparable to those of dense embeddings, have similar approximate norm-preserving dimensionality-reduction properties. The key ...
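The two sparse embeddings discussed in the abstract can be sketched in a few lines. The following is an illustrative toy example, not the paper's construction: `subsample_embed` and `hash_embed` are our own names, and the dimensions, noise scale, and the spiky test vector (a worst-case $l_\infty/l_2$ ratio before noise is added) are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10_000, 200  # ambient and embedding dimensions

# A "spiky" vector (worst-case l_inf/l_2 ratio) plus additive Gaussian noise.
x = np.zeros(n)
x[0] = 1.0
x_noisy = x + rng.normal(scale=0.01, size=n)

def subsample_embed(v, m, rng):
    """Keep m coordinates sampled uniformly without replacement,
    rescaled so the squared l_2 norm is preserved in expectation."""
    idx = rng.choice(v.size, size=m, replace=False)
    return np.sqrt(v.size / m) * v[idx]

def hash_embed(v, m, rng):
    """CountSketch-style hashing: each coordinate gets a random bucket
    and a random sign; signed values are summed within buckets."""
    buckets = rng.integers(0, m, size=v.size)
    signs = rng.choice([-1.0, 1.0], size=v.size)
    out = np.zeros(m)
    np.add.at(out, buckets, signs * v)
    return out

# Distortion = ratio of embedded norm to original norm (1.0 is ideal).
ratio_sub = np.linalg.norm(subsample_embed(x_noisy, m, rng)) / np.linalg.norm(x_noisy)
ratio_hash = np.linalg.norm(hash_embed(x_noisy, m, rng)) / np.linalg.norm(x_noisy)
print(f"subsampling distortion: {ratio_sub:.3f}")
print(f"hashing distortion:     {ratio_hash:.3f}")
```

On a single draw the subsampling distortion can be far from 1 for such a spiky vector (the spike is usually missed), which is the $l_\infty/l_2$ sensitivity the abstract refers to; averaging over draws, or increasing the relative noise level, brings it toward 1.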
The Johnson-Lindenstrauss lemma is a fundamental result in probability with several applications in ...
The problem of estimating a high-dimensional sparse vector $\theta \in \mathbb{R}^n$ from an observa...
A K*-sparse vector x* ∈ ℝ^N produces measurements via linear dimensionality reduction as u = Φx* + n, ...
We provide a deterministic construction of the sparse Johnson-Lindenstrauss transform of Kane & Nels...
Let \(\Phi \in \mathbb{R}^{m \times n}\) be a sparse Johnson-Lindenstrauss transform [KN14] with s non-zero...
We consider the problem of embedding a subset of $\mathbb{R}^n$ into a low-dimensional Hamming cube ...
This paper develops a new method for recovering m-sparse signals that is simultaneously uniform and ...
We consider the problem of efficient randomized dimensionality reduction with norm-preservation guar...
Let Φ ∈ ℝ^{m×n} be a sparse Johnson-Lindenstrauss transform [52] with column sparsity s. For a subset T...
An oblivious subspace embedding (OSE) given some parameters \(\epsilon\), d is a distribution \(\mat...
We give near-tight lower bounds for the sparsity required in several dimensionality reducing linear ...
Random embeddings project high-dimensional spaces to low-dimensional ones; they are careful construc...
This paper proposes a binarization scheme for vectors of high dimension based ...
Recent work of [Dasgupta-Kumar-Sarlós, STOC 2010] gave a sparse Johnson-Lindenstrauss transform and...
An important problem in the theory of sparse approximation is to identify well-conditioned s...