We prove that any algorithm for learning parities requires either a memory of quadratic size or an exponential number of samples. This proves a recent conjecture of Steinhardt, Valiant and Wager and shows that for some learning problems a large storage space is crucial. More formally, in the problem of parity learning, an unknown string $x \in \{0,1\}^n$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples $(a_1, b_1), (a_2, b_2), \ldots$, where each $a_t$ is uniformly distributed over $\{0,1\}^n$ and $b_t$ is the inner product of $a_t$ and $x$, modulo 2. We show that any algorithm for parity learning that uses fewer than $n^2/25$ bits of memory requires an exponential number of samples. Previously, there was...
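For concreteness, here is a minimal Python sketch of the sample stream defined above, paired with the classical unbounded-memory learner that solves the linear system by Gaussian elimination over GF(2). That baseline recovers $x$ from roughly $n$ samples but stores about $n^2$ bits of equations, which is exactly the memory regime the lower bound says cannot be escaped without an exponential blow-up in samples. The function names (`sample_stream`, `learn_by_elimination`) are illustrative and not from the paper.

```python
import random


def sample_stream(x, rng):
    """Yield samples (a_t, b_t): a_t uniform over {0,1}^n, b_t = <a_t, x> mod 2."""
    n = len(x)
    while True:
        a = [rng.randint(0, 1) for _ in range(n)]
        b = sum(ai & xi for ai, xi in zip(a, x)) % 2
        yield a, b


def learn_by_elimination(stream, n):
    """Recover x from O(n) samples via Gaussian elimination over GF(2).

    Storing the collected equations takes Theta(n^2) bits, which is the
    memory regime that, per the result above, cannot be avoided without
    paying exponentially many samples.
    """
    rows = []  # linearly independent equations, each a pair (coefficients, rhs)
    for a, b in stream:
        # Reduce the incoming equation against every stored row.
        for r, rb in rows:
            pivot = r.index(1)  # leading coordinate of the stored row
            if a[pivot] == 1:
                a = [ai ^ ri for ai, ri in zip(a, r)]
                b ^= rb
        if any(a):
            rows.append((a, b))
        if len(rows) == n:
            break
    # Back-substitute in decreasing pivot order to read off x.
    x = [0] * n
    for r, rb in sorted(rows, key=lambda row: row[0].index(1), reverse=True):
        pivot = r.index(1)
        x[pivot] = (rb + sum(r[i] * x[i] for i in range(pivot + 1, n))) % 2
    return x


if __name__ == "__main__":
    rng = random.Random(0)
    n = 16
    secret = [rng.randint(0, 1) for _ in range(n)]
    assert learn_by_elimination(sample_stream(secret, rng), n) == secret
```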
In the bounded-storage model for information-theoretically secure encryption and key-agreement one c...
We study the problem of learning parity functions that depend on at most k variables (k-parities) at...
We show the following reductions from the learning with errors problem (LWE) to the learning with ro...
In a variety of PAC learning models, a tradeoff between time and information seems to exist: with unl...
A line of recent works showed that for a large class of learning problems, any learning algorithm re...
Thesis (Ph.D.)--University of Washington, 2020. We present several novel results on computational prob...
We give lower bounds on the amount of memory required by one-pass streaming algorithms for solving s...
We consider the problem of learning k-parities in the online mistake-bound model: given a hidden vec...
In a very strong positive result for passive learning algorithms, Bshouty et al. showed tha...
In a celebrated result Linial et al. [3] gave an algorithm which learns size-s depth-d AND/OR/NOT ci...
We address well-studied problems concerning the learnability of parities and halfspaces in the prese...
The research in complexity theory, for a long time now, has been conscious of memory as a resource i...
We prove several results giving new and stronger connections between learning theory, circuit comple...
The PRAM model of parallel computation is examined with respect to wordsize, the number of b...