All reports by Sumegha Garg:

TR21-083 | 21st June 2021

#### Tight Space Complexity of the Coin Problem

Mark Braverman, Sumegha Garg, Or Zamir

TR20-139 | 11th September 2020

#### The Coin Problem with Applications to Data Streams

Mark Braverman, Sumegha Garg, David Woodruff

TR19-071 | 14th May 2019

#### Time-Space Lower Bounds for Two-Pass Learning

Sumegha Garg, Ran Raz, Avishay Tal

TR17-161 | 30th October 2017

#### Hitting Sets with Near-Optimal Error for Read-Once Branching Programs

Mark Braverman, Gil Cohen, Sumegha Garg

Revisions: 1

TR17-121 | 31st July 2017

#### Extractor-Based Time-Space Lower Bounds for Learning

Sumegha Garg, Ran Raz, Avishay Tal

Revisions: 1

Mark Braverman, Sumegha Garg, Or Zamir

In the coin problem we are asked to distinguish, with probability at least $2/3$, $n$ i.i.d. coins which are heads with probability $\frac{1}{2}+\beta$ from ones which are heads with probability $\frac{1}{2}-\beta$. We are interested in the space complexity of the coin problem, corresponding to the width of a read-once ...
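As a concrete illustration, here is a hedged sketch of the trivial, space-unbounded strategy for the coin problem: count all the heads and take a majority vote (the function names `sample_coins` and `majority_guess` are invented for this sketch, not from the paper):

```python
import random

def sample_coins(n, beta, sign):
    """Draw n i.i.d. coins that are heads with probability 1/2 + sign*beta."""
    p = 0.5 + sign * beta
    return [1 if random.random() < p else 0 for _ in range(n)]

def majority_guess(coins):
    """Guess sign = +1 if most coins are heads, else -1.
    Succeeds with high probability once n is on the order of 1/beta^2,
    but keeping an exact count uses Omega(log n) space."""
    return 1 if 2 * sum(coins) > len(coins) else -1
```

The question the paper studies is how much of that counter can be discarded: how narrow a read-once branching program can still achieve success probability $2/3$.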

Mark Braverman, Sumegha Garg, David Woodruff

Consider the problem of computing the majority of a stream of $n$ i.i.d. uniformly random bits. This problem, known as the *coin problem*, is central to a number of counting problems in different data stream models. We show that any streaming algorithm for solving this problem with large constant ...
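To make the bounded-memory streaming setting concrete, here is a toy heuristic (a sketch of the model only, not the algorithm analyzed in the paper): a counter clamped to a fixed range, so the algorithm uses only $2w+1$ states regardless of the stream length:

```python
import random

def clamped_majority(stream, width):
    """Toy bounded-memory streaming heuristic for majority: keep a counter
    clamped to [-width, width] (only 2*width + 1 states), add 1 on a one
    bit and subtract 1 on a zero bit, and report the counter's sign at
    the end. Illustrative sketch, not the paper's algorithm."""
    c = 0
    for bit in stream:
        c = min(width, max(-width, c + (1 if bit else -1)))
    return 1 if c > 0 else 0
```

On a noticeably biased stream the clamped counter saturates toward the majority side; the paper's question is how much memory any streaming algorithm needs when the bias is tiny.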

Sumegha Garg, Ran Raz, Avishay Tal

A line of recent works showed that for a large class of learning problems, any learning algorithm requires either super-linear memory size or a super-polynomial number of samples [Raz16,KRT17,Raz17,MM18,BOGY18,GRT18]. For example, any algorithm for learning parities of size $n$ requires either a memory of size $\Omega(n^{2})$ or an exponential number ...
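The $\Omega(n^2)$ figure matches the memory used by the natural learner, which stores and row-reduces roughly $n$ equations of $n$ bits each. A hedged sketch of that learner, with equations packed into Python ints (illustrative code, not from the paper):

```python
import random

def learn_parity(samples, n):
    """Learn a secret x in {0,1}^n from noiseless parity samples
    (a, <a, x> mod 2) by Gaussian elimination over GF(2).
    Storing up to n reduced rows of n bits each is the ~n^2 bits of
    memory that the cited lower bounds show is unavoidable without
    exponentially many samples. Illustrative sketch only."""
    pivots = {}  # leading-bit index -> reduced equation (a, b)
    for a, b in samples:
        # Reduce the incoming equation against existing pivot rows.
        while a and (a.bit_length() - 1) in pivots:
            pa, pb = pivots[a.bit_length() - 1]
            a, b = a ^ pa, b ^ pb
        if a:
            pivots[a.bit_length() - 1] = (a, b)
        if len(pivots) == n:
            break
    # Back-substitute from low pivots to high: each row's non-pivot
    # bits are strictly below its pivot, so lower bits of x are known.
    x = 0
    for p in sorted(pivots):
        a, b = pivots[p]
        rest = (a ^ (1 << p)) & x
        if b ^ (bin(rest).count("1") & 1):
            x |= 1 << p
    return x
```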

Mark Braverman, Gil Cohen, Sumegha Garg

Nisan (Combinatorica'92) constructed a pseudorandom generator for length $n$, width $n$ read-once branching programs (ROBPs) with error $\varepsilon$ and seed length $O(\log^2{n} + \log{n} \cdot \log(1/\varepsilon))$. A major goal in complexity theory is to reduce the seed length, hopefully, to the optimal $O(\log{n}+\log(1/\varepsilon))$, or to construct improved hitting sets, as ...
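For readers unfamiliar with the model, a read-once branching program reads the input bits in order, moving through a layered state graph of bounded width. A minimal sketch of an ROBP evaluator (names are illustrative, not from the paper):

```python
def eval_robp(transitions, accept, x):
    """Evaluate a length-n, width-w read-once branching program:
    transitions[i][s][bit] is the state reached from state s after
    reading bit i of the input; `accept` is the set of accepting
    final states."""
    state = 0
    for i, bit in enumerate(x):
        state = transitions[i][state][bit]
    return state in accept

def parity_robp(n):
    """A width-2 ROBP computing parity of n bits: the single state bit
    tracks the running XOR; state 1 at the end means odd parity."""
    layer = [(0, 1), (1, 0)]  # layer[s][bit] = s XOR bit
    return [layer] * n, {1}
```

A pseudorandom generator or hitting set for this model must fool every such program of the stated length and width.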

Sumegha Garg, Ran Raz, Avishay Tal

A matrix $M: A \times X \rightarrow \{-1,1\}$ corresponds to the following learning problem: An unknown element $x \in X$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples, $(a_1, b_1), (a_2, b_2), \ldots$, where for every $i$, $a_i \in A$ is chosen ...
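The sample stream in this setup is easy to make concrete. A hedged sketch, with $M$ passed as a function and the parity problem as the running example (all names here are invented for illustration):

```python
import random

def sample_stream(M, A, x, num_samples):
    """Yield the learner's sample stream for the problem defined by
    matrix M: each a_i is drawn uniformly from A, and b_i = M(a_i, x)
    is the corresponding label in {-1, 1}."""
    for _ in range(num_samples):
        a = random.choice(A)
        yield a, M(a, x)

def parity_matrix(a, x):
    """The matrix for parity learning: M(a, x) = (-1)^(<a, x> mod 2),
    with a and x encoded as bit masks."""
    return -1 if bin(a & x).count("1") & 1 else 1
```

The paper's results relate extractor properties of $M$ to how much memory any learner observing such a stream must use.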