ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Paper:

TR10-160 | 28th October 2010 22:13

On Approximating the Entropy of Polynomial Mappings


Abstract:

We investigate the complexity of the following computational problem:

Polynomial Entropy Approximation (PEA):
Given a low-degree polynomial mapping
$p : F^n\rightarrow F^m$, where $F$ is a finite field, approximate the output entropy
$H(p(U_n))$, where $U_n$ is the uniform distribution on $F^n$ and $H$ may be any of several entropy measures.
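For very small parameters, $H(p(U_n))$ can be computed exactly by enumerating all of $F^n$; the sketch below (Python, using a hypothetical degree-3 map over $F_2$ chosen only to illustrate the problem statement, not taken from the paper) does exactly that. The exponential cost of this brute-force enumeration is what makes efficient approximation nontrivial.

```python
from collections import Counter
from itertools import product
from math import log2

# Hypothetical degree-3 polynomial map p : F_2^4 -> F_2^2, for illustration only;
# arithmetic is over the two-element field F_2.
def p(x):
    x1, x2, x3, x4 = x
    y1 = (x1 * x2 * x3 + x4) % 2       # one degree-3 monomial plus a linear term
    y2 = (x1 * x4 + x2 * x3 + x3) % 2  # quadratic terms plus a linear term
    return (y1, y2)

def shannon_entropy_of_output(p, n, q=2):
    """Exact Shannon entropy H(p(U_n)) in bits, by enumerating all q^n inputs."""
    counts = Counter(p(x) for x in product(range(q), repeat=n))
    total = q ** n
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(shannon_entropy_of_output(p, n=4))
```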

We show:

1. Approximating the Shannon entropy of degree-3 polynomials $p : F_2^n\rightarrow F_2^m$ over $F_2$ to within an additive constant (or even $n^{0.9}$)
is complete for $SZKP_L$, the class of problems
having statistical zero-knowledge proofs where the honest verifier and its simulator are computable in logarithmic space. ($SZKP_L$ contains most of the natural problems known to be in the full class $SZKP$.)

2. For prime fields $F\neq F_2$ and homogeneous quadratic polynomials $p : F^n\rightarrow F^m$, there is a probabilistic polynomial-time algorithm that distinguishes the case that $p(U_n)$ has entropy smaller than $k$ from the case that $p(U_n)$ has min-entropy (or even Rényi entropy) greater than $(2+o(1))k$.

3. For degree-$d$ polynomials $p : F_2^n\rightarrow F_2^m$, there is a polynomial-time algorithm that distinguishes the case that $p(U_n)$ has max-entropy smaller than $k$ (where the max-entropy of a random variable is the logarithm of its support size) from the case that $p(U_n)$ has max-entropy at least $(1+o(1))\cdot k^d$ (for fixed $d$ and large $k$). The entropy measures used in results 2 and 3 are illustrated in the sketch below.
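To make the entropy measures in results 2 and 3 concrete: min-entropy is $-\log$ of the largest output probability, Rényi entropy of order 2 (collision entropy) is $-\log$ of the collision probability, and max-entropy is the logarithm of the support size. The following sketch computes all three by brute-force enumeration over a hypothetical homogeneous quadratic map over $F_3$; it is an illustration of the definitions only, not of the paper's algorithms.

```python
from collections import Counter
from itertools import product
from math import log2

def output_distribution(p, n, q):
    """Exact distribution of p(U_n), by enumerating all q^n inputs in F_q^n."""
    counts = Counter(p(x) for x in product(range(q), repeat=n))
    total = q ** n
    return [c / total for c in counts.values()]

def min_entropy(probs):
    return -log2(max(probs))                     # -log of the largest probability

def renyi2_entropy(probs):
    return -log2(sum(pr * pr for pr in probs))   # collision (order-2 Renyi) entropy

def max_entropy(probs):
    return log2(len(probs))                      # log of the support size (result 3)

# Hypothetical homogeneous quadratic map p : F_3^3 -> F_3^2, for illustration only.
def quad(x):
    x1, x2, x3 = x
    return ((x1 * x2 + x3 * x3) % 3, (x1 * x3 + x2 * x2) % 3)

probs = output_distribution(quad, n=3, q=3)
print(min_entropy(probs), renyi2_entropy(probs), max_entropy(probs))
```

For every distribution these measures are ordered $H_{\min} \le H_2 \le H_{\mathrm{Shannon}} \le H_{\max}$, which is why the gap promises in results 2 and 3 are stated with different measures on the two sides.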


