RĂ¼diger Reischuk, Thomas Zeugmann

A new algorithm for learning one-variable pattern languages from positive data is proposed and analyzed with respect to its average-case behavior. We consider the total learning time that takes into account all operations till convergence to a correct hypothesis is achieved. For almost all meaningful distributions defining how ...
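To make the object of study concrete, here is a minimal sketch (not the paper's algorithm; the pattern and binary alphabet are hypothetical examples) of membership in a one-variable pattern language: every occurrence of the variable x must be replaced by the same non-empty string.

```python
from itertools import product

def generates(pattern, word, alphabet="01"):
    """Does the one-variable pattern (variable 'x', constants from
    `alphabet`) generate `word`? Every occurrence of 'x' must be
    replaced by the same non-empty string. Brute force for illustration."""
    k = pattern.count("x")
    if k == 0:
        return pattern == word
    const_len = len(pattern) - k
    rem = len(word) - const_len          # total length contributed by the x's
    if rem < k or rem % k != 0:          # substitution must be non-empty
        return False
    m = rem // k                         # length of the substituted string
    return any(pattern.replace("x", "".join(t)) == word
               for t in product(alphabet, repeat=m))

# the (hypothetical) pattern 0x1x generates 0010 via x -> 0
print(generates("0x1x", "0010"))    # True
print(generates("0x1x", "0011"))    # False
```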

Jin-Yi Cai

We survey some recent developments in the study of the complexity of lattice problems. After a discussion of some problems on lattices which can be algorithmically solved efficiently, our main focus is the recent progress on complexity results of intractability. We will discuss Ajtai's worst-case/average-case connections, NP-hardness and non-NP-hardness, ...

Birgit Schelm

Both average-case complexity and the study of the approximability properties of NP-optimization problems are well established and active fields of research. By applying the notion of average-case complexity to approximation problems we provide a formal framework that allows the classification of NP-optimization problems according to their average-case approximability. Thus, known ...

Piotr Berman, Marek Karpinski

We prove that the problems of minimum bisection on k-uniform hypergraphs are almost exactly as hard to approximate, up to the factor k/3, as the problem of minimum bisection on graphs. On the positive side, our argument also gives the first approximation ...
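For reference, the graph version of the problem is easy to state operationally: partition the vertices into two equal halves so that the number of crossing edges is minimized. A brute-force sketch (the example graph is made up):

```python
from itertools import combinations

def min_bisection(n, edges):
    """Brute-force minimum bisection: split {0, ..., n-1} into two halves
    of size n/2, minimizing the number of edges crossing the split."""
    assert n % 2 == 0
    best = None
    for half in combinations(range(n), n // 2):
        side = set(half)
        cut = sum(1 for u, v in edges if (u in side) != (v in side))
        if best is None or cut < best:
            best = cut
    return best

# two triangles joined by one edge: the best bisection cuts only that edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(min_bisection(6, edges))   # 1
```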

Noam Livne

In 1984 Levin put forward a suggestion for a theory of {\em average case complexity}. In this theory a problem, called a {\em distributional problem}, is defined as a pair consisting of a decision problem and a probability distribution over the instances. Introducing adequate notions of simple distributions and average ...
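The definition pairs a decision predicate with an instance sampler, which can be phrased in a few lines. A toy sketch (the predicate and the distribution are hypothetical stand-ins, not Levin's "simple" distributions):

```python
import random

# A distributional problem = (decision problem, distribution on instances).
# Toy decision problem: "does this 0/1 list have even sum?"
def decide(instance):
    return sum(instance) % 2 == 0

# Toy samplable distribution: n uniform random bits.
def sample(n, rng):
    return [rng.randrange(2) for _ in range(n)]

# Average-case analysis asks how an algorithm performs on instances
# drawn from `sample`, rather than on worst-case instances.
rng = random.Random(0)
instance = sample(8, rng)
print(decide(instance))
```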

Yonatan Bilu, Nathan Linial

We introduce the notion of a stable instance for a discrete optimization problem, and argue that in many practical situations only sufficiently stable instances are of interest. The question then arises whether stable instances of NP--hard problems are easier to solve. In particular, whether there exist algorithms that solve correctly ...

Andrew Drucker

We introduce a 2-round stochastic constraint-satisfaction problem, and show that its approximation version is complete for (the promise version of) the complexity class $\mathsf{AM}$. This gives a `PCP characterization' of $\mathsf{AM}$ analogous to the PCP Theorem for $\mathsf{NP}$. Similar characterizations have been given for higher levels of the Polynomial Hierarchy, ...

Nikolay Vereshchagin

When we represent a decision problem, like CIRCUIT-SAT, as a language over the binary alphabet, we usually do not specify how to encode instances by binary strings. This relies on the empirical observation that the truth of a statement of the form ``CIRCUIT-SAT belongs to a complexity class $C$'' ...

Shlomi Dolev, Nova Fandina, Dan Gutfreund

Finding a problem that is both hard to solve and hard to solve on many instances is a long-standing issue in theoretical computer science. In this work, we prove that the Succinct Permanent $\bmod \; p$ is $NEXP$-time hard in the worst case (via randomized polynomial time ...
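For context, the (non-succinct) permanent mod p of a small explicit matrix can be computed with Ryser's formula; in the paper the matrix is given succinctly by a circuit, which is what drives the $NEXP$ hardness. A sketch of the explicit case:

```python
def permanent_mod(M, p):
    """Ryser's formula: perm(M) = (-1)^n * sum over nonempty column
    subsets S of (-1)^{|S|} * prod_i (sum_{j in S} M[i][j]),
    reduced mod p throughout."""
    n = len(M)
    total = 0
    for mask in range(1, 1 << n):              # nonempty column subsets S
        sign = -1 if bin(mask).count("1") % 2 else 1
        prod = 1
        for row in M:
            prod = prod * sum(row[j] for j in range(n) if mask >> j & 1) % p
        total = (total + sign * prod) % p
    return (-1) ** n * total % p

print(permanent_mod([[1, 2], [3, 4]], 7))   # perm = 1*4 + 2*3 = 10, i.e. 3 mod 7
```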

Boaz Barak

In this survey, I discuss the general question of what evidence can we use to predict the answer for open questions in computational complexity, as well as the concrete evidence currently known for two conjectures: Khot's Unique Games Conjecture and Feige's Random 3SAT Hypothesis.

Marco Carmosino, Russell Impagliazzo, Manuel Sabin

We show that popular hardness conjectures about problems from the field of fine-grained complexity theory imply structural results for resource-based complexity classes. Namely, we show that if either k-Orthogonal Vectors or k-CLIQUE requires $n^{\epsilon k}$ time, for some constant $\epsilon > 1/2$, to count (note that these conjectures are significantly ...
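For orientation, the $k=2$ case of Orthogonal Vectors is simply stated; a brute-force sketch in $O(n^2 d)$ time follows (the instance is a made-up example), where the conjecture says, roughly, that no substantially subquadratic algorithm exists.

```python
def has_orthogonal_pair(A, B):
    """2-Orthogonal Vectors: is there a in A and b in B with <a, b> = 0,
    i.e., no coordinate where both vectors have a 1? Brute force, O(n^2 d)."""
    return any(all(x & y == 0 for x, y in zip(a, b)) for a in A for b in B)

A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(A, B))   # True: (1,0,1) and (0,1,0) are orthogonal
```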

Elazar Goldenberg, Karthik C. S.

In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products.

We say that an optimization problem $\Pi$ is direct product feasible if it is possible to efficiently aggregate any $k$ instances of $\Pi$ and form one large instance ...

Yanyi Liu, Rafael Pass

We prove the equivalence of two fundamental problems in the theory of computation:

- Existence of one-way functions: the existence of one-way functions (which in turn are equivalent to pseudorandom generators, pseudorandom functions, private-key encryption schemes, digital signatures, commitment schemes, and more).

- Mild average-case hardness of $K^{poly}$-complexity: ...

Yanyi Liu, Rafael Pass

Let $\mktp[s]$ be the set of strings $x$ such that $K^t(x) \leq s(|x|)$, where $K^t(x)$ denotes the $t$-bounded Kolmogorov complexity of the truth table described by $x$. Our main theorem shows that for an appropriate notion of mild average-case hardness, for every $\varepsilon>0$, polynomial $t(n) \geq (1+\varepsilon)n$, and every ``nice'' class ...

Yanyi Liu, Rafael Pass

Liu and Pass (FOCS'20) recently demonstrated an equivalence between the existence of one-way functions (OWFs) and mild average-case hardness of the time-bounded Kolmogorov complexity problem. In this work, we establish a similar equivalence but to a different form of time-bounded Kolmogorov Complexity---namely, Levin's notion of Kolmogorov Complexity---whose hardness is closely ...

Yanyi Liu, Rafael Pass

We present the first natural $\NP$-complete problem whose average-case hardness w.r.t. the uniform distribution over instances implies the existence of one-way functions (OWF). In fact, we prove that the existence of OWFs is \emph{equivalent} to mild average-case hardness of this $\NP$-complete problem. The problem, which originated in the 1960s, is ...

Yanyi Liu, Rafael Pass

We show equivalence between the existence of one-way functions and the existence of a \emph{sparse} language that is hard-on-average w.r.t. some efficiently samplable ``high-entropy'' distribution.

In more detail, the following are equivalent:

- The existence of an $S(\cdot)$-sparse language $L$ that is hard-on-average with respect to some samplable ...

Ron D. Rothblum, Michael Ezra

The inner product function $\langle x,y \rangle = \sum_i x_i y_i \bmod 2$ can be easily computed by a (linear-size) ${AC}^0(\oplus)$ circuit: that is, a constant depth circuit with AND, OR and parity (XOR) gates. But what if we impose the restriction that the parity gates can only be on ...
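The function in question is of course easy to compute without the circuit restriction; a minimal sketch mirroring the XOR-of-ANDs structure the abstract describes:

```python
def inner_product_mod2(x, y):
    """<x, y> = sum_i x_i * y_i mod 2: AND gates on the pairs (x_i, y_i)
    feeding one parity (XOR) gate -- a depth-2 circuit with a single
    parity gate at the top."""
    acc = 0
    for xi, yi in zip(x, y):
        acc ^= xi & yi   # one AND gate per coordinate, folded into the XOR
    return acc

print(inner_product_mod2([1, 0, 1, 1], [1, 1, 0, 1]))   # 1 ^ 0 ^ 0 ^ 1 = 0
```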

Shuichi Hirahara, Mikito Nanashima

A PAC learning model involves two worst-case requirements: a learner must learn all functions in a class on all example distributions. However, basing the hardness of learning on NP-hardness has remained a key challenge for decades. In fact, recent progress in computational complexity suggests the possibility that a weaker assumption ...

Lijie Chen, Shuichi Hirahara, Neekon Vafa

What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness ...

Halley Goldberg, Valentine Kabanets

We give a simplified proof of Hirahara's STOC'21 result showing that $DistPH \subseteq AvgP$ would imply $PH \subseteq DTIME[2^{O(n/\log n)}]$. The argument relies on a proof of a new result: symmetry of information for time-bounded Kolmogorov complexity under the assumption that $NP$ is easy on average, which is interesting in ...

Vahid Reza Asadi, Alexander Golovnev, Tom Gur, Igor Shinkar, Sathyawageeswar Subramanian

We study the problem of designing worst-case to average-case reductions for quantum algorithms. For all linear problems, we provide an explicit and efficient transformation of quantum algorithms that are only correct on a small (even sub-constant) fraction of their inputs into ones that are correct on all inputs. This stands ...