Revision #3 Authors: Avraham Ben-Aroya, Gil Cohen

Accepted on: 19th November 2012 11:45

Downloads: 1795


A $(k,\epsilon)$-biased sample space is a distribution over $\{0,1\}^n$ that $\epsilon$-fools every nonempty linear test of size at most $k$. Since they were introduced by Naor and Naor [SIAM J. Computing, 1993], these sample spaces have become a central notion in theoretical computer science with a variety of applications. When constructing such spaces, one usually attempts to minimize the seed length as a function of $n, k$ and $\epsilon$. Given such a construction, if we reverse the roles and consider a fixed seed length, then the smaller we pick $k$, the better the bound on the bias $\epsilon$ becomes. However, once the space is constructed we have a \emph{single} bound on the bias of all tests of size at most $k$.
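The fooling condition above can be checked by brute force on small examples. The following sketch (function names are ours, not from the paper) computes the worst-case bias of a finite sample space over all nonempty linear tests of size at most $k$:

```python
from itertools import combinations

def bias(sample_space, test):
    """Bias of one linear test (a set of coordinate indices) against a
    list of n-bit strings, each a tuple of 0/1 values: the absolute
    expectation of (-1)^(XOR of the tested bits) under the uniform
    distribution over the sample space."""
    total = sum((-1) ** sum(x[i] for i in test) for x in sample_space)
    return abs(total) / len(sample_space)

def max_bias_up_to_k(sample_space, n, k):
    """Largest bias over all nonempty linear tests of size at most k.
    A (k, eps)-biased space is exactly one where this value is <= eps."""
    return max(
        bias(sample_space, test)
        for size in range(1, k + 1)
        for test in combinations(range(n), size)
    )
```

For instance, the uniform distribution over all of $\{0,1\}^n$ has bias $0$ for every nonempty test, while a single fixed string has bias $1$.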

In this work we initiate the study of a new pseudorandom object, which we call a \emph{gradual} $(k,\epsilon)$-biased sample space. Roughly speaking, this is a sample space that $\epsilon$-fools linear tests of size \emph{exactly} $k$ and moreover, the bound on the bias for linear tests of size $i < k$ decays as $i$ gets smaller. We show how to construct gradual $(k,\epsilon)$-biased sample spaces of size comparable to the (non-gradual) spaces constructed by Alon et al. [Random Structures and Algorithms, 1992], and prove a lower bound on their size. Our construction is based on the lossless expanders of Guruswami et al. [J. ACM, 2009], combined with the Quadratic Character Construction of Alon et al. [Random Structures and Algorithms, 1992].
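The gradual notion can be made concrete on toy examples by recording the worst-case bias separately for each test size $i$; in a gradual space this profile decays as $i$ gets smaller. An illustrative sketch (function name ours, not from the paper):

```python
from itertools import combinations

def bias_profile(sample_space, n, k):
    """For each test size i = 1..k, the worst-case bias over all linear
    tests of exactly that size. Inputs: a list of n-bit strings given
    as 0/1 tuples, the dimension n, and the maximum test size k."""
    profile = {}
    for size in range(1, k + 1):
        profile[size] = max(
            abs(sum((-1) ** sum(x[i] for i in test) for x in sample_space))
            / len(sample_space)
            for test in combinations(range(n), size)
        )
    return profile
```

A plain $(k,\epsilon)$-biased space only guarantees that every entry of this profile is at most $\epsilon$; a gradual one additionally guarantees smaller entries at smaller sizes.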

Motivation added.

Revision #2 Authors: Avraham Ben-Aroya, Gil Cohen

Accepted on: 13th August 2012 22:19

Downloads: 1005


A $(k,\epsilon)$-biased sample space is a distribution over $\{0,1\}^n$ that $\epsilon$-fools every nonempty linear test of size at most $k$. Since they were introduced by Naor and Naor [SIAM J. Computing, 1993], these sample spaces have become a central notion in theoretical computer science with a variety of applications.

When constructing such spaces, one usually attempts to minimize the seed length as a function of $n, k$ and $\epsilon$. Given such a construction, if we reverse the roles and consider a fixed seed length, then the smaller we pick $k$, the better the bound on the bias $\epsilon$ becomes. However, once the space is constructed we have a \emph{single} bound on the bias of all tests of size at most $k$.

In this work we initiate the study of a new pseudorandom object, which we call a \emph{gradual} $(k,\epsilon)$-biased sample space. Roughly speaking, this is a sample space that $\epsilon$-fools linear tests of size \emph{exactly} $k$ and moreover, the bound on the bias for linear tests of size $i \le k$ decays as $i$ gets smaller.

We show how to construct gradual $(k,\epsilon)$-biased sample spaces of size comparable to the (non-gradual) spaces constructed by Alon et al. [FOCS 1990], and prove a lower bound on their size. Our construction is based on the lossless expanders of Guruswami et al. [J. ACM, 2009], combined with the Quadratic Character Construction of Alon et al. [FOCS 1990].

Elaborated section on non-linear decay.

Revision #1 Authors: Avraham Ben-Aroya, Gil Cohen

Accepted on: 8th May 2012 22:23

Downloads: 1146


A $(k,\epsilon)$-biased sample space is a distribution over $\{0,1\}^n$ that $\epsilon$-fools every nonempty linear test of size at most $k$. Since they were introduced by Naor and Naor [SIAM J. Computing, 1993], these sample spaces have become a central notion in theoretical computer science with a variety of applications.

When constructing such spaces, one usually attempts to minimize the seed length as a function of $n, k$ and $\epsilon$. Given such a construction, if we reverse the roles and consider a fixed seed length, then the smaller we pick $k$, the better the bound on the bias $\epsilon$ becomes. However, once the space is constructed we have a \emph{single} bound on the bias of all tests of size at most $k$.

In this work we consider the problem of getting ``more mileage" out of $(k,\epsilon)$-biased sample spaces. Namely, we study a generalization of these sample spaces which we call \emph{gradual} $(k,\epsilon)$-biased sample spaces. Roughly speaking, these are sample spaces that $\epsilon$-fool linear tests of size \emph{exactly} $k$ and moreover, the bound on the bias of linear tests of size $i \le k$ decays as $i$ gets smaller.

We show how to construct gradual $(k,\epsilon)$-biased sample spaces of size comparable to the (non-gradual) spaces constructed by Alon et al. [FOCS 1990]. Our construction is based on the lossless expanders of Guruswami et al. [J. ACM, 2009], combined with the Quadratic Character Construction of Alon et al. [FOCS 1990].

Fixed some typos.

TR12-050 Authors: Avraham Ben-Aroya, Gil Cohen

Publication: 27th April 2012 16:59

Downloads: 1142


A $(k,\epsilon)$-biased sample space is a distribution over $\{0,1\}^n$ that $\epsilon$-fools every nonempty linear test of size at most $k$. Since they were introduced by Naor and Naor [SIAM J. Computing, 1993], these sample spaces have become a central notion in theoretical computer science with a variety of applications.

When constructing such spaces, one usually attempts to minimize the seed length as a function of $n, k$ and $\epsilon$. Given such a construction, if we reverse the roles and consider a fixed seed length, then the smaller we pick $k$, the better the bound on the bias $\epsilon$ becomes. However, once the space is constructed we have a \emph{single} bound on the bias of all tests of size at most $k$.

In this work we consider the problem of getting ``more mileage" out of $(k,\epsilon)$-biased sample spaces. Namely, we study a generalization of these sample spaces which we call \emph{gradual} $(k,\epsilon)$-biased sample spaces. Roughly speaking, these are sample spaces that $\epsilon$-fool linear tests of size \emph{exactly} $k$ and moreover, the bound on the bias of linear tests of size $i \le k$ decays as $i$ gets smaller.

We show how to construct gradual $(k,\epsilon)$-biased sample spaces of size comparable to the (non-gradual) spaces constructed by Alon et al. [FOCS 1990]. Our construction is based on the lossless expanders of Guruswami et al. [J. ACM, 2009], combined with the Quadratic Character Construction of Alon et al. [FOCS 1990].