Consider a problem involving updates of and queries to a data structure. Assume that there exists a family of algorithms exhibiting a tradeoff between query and update time. We demonstrate a general technique for constructing, from such a family, a single algorithm with the best amortized time. We indicate some ...
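For intuition only (the abstract is truncated above, so this is not the paper's construction): one standard way such a query/update tradeoff arises is a sorted array paired with an unsorted update buffer of capacity k, giving O(log n + k) queries against O(n/k) amortized updates. A technique like the one described would then select the best member of such a family automatically, adapting to the observed mix of operations. A minimal sketch of the parameterized family:

    # Hypothetical illustration, not the paper's construction: a family of
    # set data structures parameterized by buffer capacity k.
    # Query: O(log n + k). Amortized insert: O(n/k), since each flush costs
    # about O(n + k) (a linear-time merge in principle; sorted() for brevity)
    # and is paid for by the k inserts that filled the buffer.
    import bisect

    class BufferedSortedList:
        def __init__(self, k):
            self.k = k          # the tradeoff parameter
            self.items = []     # main sorted array
            self.buffer = []    # pending (unsorted) updates

        def insert(self, x):
            self.buffer.append(x)
            if len(self.buffer) > self.k:            # flush the buffer
                self.items = sorted(self.items + self.buffer)
                self.buffer = []

        def contains(self, x):
            i = bisect.bisect_left(self.items, x)    # O(log n) search
            if i < len(self.items) and self.items[i] == x:
                return True
            return x in self.buffer                  # O(k) linear scan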
We obtain improved lower bounds for a class of static and dynamic
data structure problems that includes several problems of searching
sorted lists as special cases.
These lower bounds nearly match the upper bounds achieved by the recent striking improvements in searching algorithms due to Fredman and Willard's ...
We construct efficient data structures that are resilient against a constant fraction of adversarial noise. Our model requires that the decoder answers most queries correctly with high probability; for each remaining query, the decoder with high probability either answers correctly or declares ``don't know.'' Furthermore, if there is no ...
We study a variant of the classical circuit-lower-bound problems: proving lower bounds for sampling distributions given random bits. We prove a lower bound of $1-1/n^{\Omega(1)}$ on the statistical distance between (i) the output distribution of any small constant-depth (a.k.a.~$\mathrm{AC}^0$) circuit $f : \{0,1\}^{\mathrm{poly}(n)} \to \{0,1\}^n$, and (ii) the uniform distribution ...
This document collects the lecture notes from my course ``Communication Complexity (for Algorithm Designers),'' taught at
Stanford in the winter quarter of 2015. The two primary goals of the course are:
1. Learn several canonical problems that have proved the most useful for proving lower bounds (Disjointness, Index, Gap-Hamming, etc.). ...
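For reference, the first two canonical problems just named are easy to state as plain predicates (a minimal sketch of the communication problems themselves, not of any lower-bound proof):

    # In the communication game, Alice holds one input, Bob the other, and
    # they must compute the predicate exchanging as few bits as possible.

    def disjointness(x, y):
        # x, y: subsets of {0, ..., n-1}; output 1 iff they do not intersect
        return int(not (set(x) & set(y)))

    def index(x, i):
        # Alice holds an n-bit string x, Bob holds an index i;
        # the output is the i-th bit of x
        return x[i]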
The problem of dynamic connectivity in graphs has been extensively studied in the cell probe model. The task is to design a data structure that supports addition of edges and checks connectivity between arbitrary pairs of vertices. Let $w, t_q, t_u$ denote the cell width, expected query time and worst ...
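For contrast with the cell-probe lower bounds this abstract concerns: in the incremental version as stated (edges are only added), the textbook upper bound is a disjoint-set forest. A minimal sketch:

    # Incremental connectivity via union-find with path halving and union by
    # size: near-constant (inverse-Ackermann) amortized time per operation.
    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n

        def find(self, v):
            while self.parent[v] != v:
                self.parent[v] = self.parent[self.parent[v]]  # path halving
                v = self.parent[v]
            return v

        def add_edge(self, u, v):          # update: merge two components
            ru, rv = self.find(u), self.find(v)
            if ru == rv:
                return
            if self.size[ru] < self.size[rv]:
                ru, rv = rv, ru
            self.parent[rv] = ru
            self.size[ru] += self.size[rv]

        def connected(self, u, v):         # query
            return self.find(u) == self.find(v)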
This paper proves the first super-logarithmic lower bounds on the cell probe complexity of dynamic \emph{boolean} (a.k.a. decision) data structure problems, a long-standing milestone in data structure lower bounds.
We introduce a new method for proving dynamic cell probe lower bounds and use it to prove a $\tilde{\Omega}(\log^{1.5} ...
We develop a technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC'94) and Miltersen, Nisan, Safra and Wigderson (STOC'95). At the core of our technique is a novel simulation theorem: Alice gets a $p \times n$ ...
We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of $t \geq \omega(\log^2 n)$ on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small ...
Recently, Dvir, Golovnev, and Weinstein have shown that sufficiently strong lower bounds for linear data structures would imply new bounds for rigid matrices. However, their result utilizes an algorithm that requires an $NP$ oracle, and hence, the rigid matrices are not explicit. In this work, we derive an equivalence between ...
Proving super-logarithmic data structure lower bounds in the static \emph{group model} has been a fundamental challenge in computational geometry since the early 1980s. We prove a polynomial ($n^{\Omega(1)}$) lower bound for an explicit range counting problem of $n^3$ convex polygons in $\mathbb{R}^2$ (each with $n^{\tilde{O}(1)}$ facets/semialgebraic-complexity), against linear storage arithmetic ...
Multipoint evaluation is the computational task of evaluating a polynomial, given as a list of coefficients, at a given set of inputs. Besides being a natural and fundamental question in computer algebra in its own right, fast algorithms for this problem are also closely related to fast algorithms for other natural ...
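Concretely, the task in its naive form (the fast algorithms this abstract alludes to instead use remainder trees and run in quasi-linear time):

    # Naive multipoint evaluation: Horner's rule at each point, O(n * m)
    # operations for a degree-(n-1) polynomial and m evaluation points.
    def multipoint_eval(coeffs, points):
        # coeffs[i] is the coefficient of x^i
        results = []
        for x in points:
            acc = 0
            for c in reversed(coeffs):   # Horner: (...(c_{n-1}*x + c_{n-2})*x + ...)
                acc = acc * x + c
            results.append(acc)
        return results

    # e.g. p(x) = 1 + 2x + 3x^2 at x in {0, 1, 2}:
    # multipoint_eval([1, 2, 3], [0, 1, 2]) == [1, 6, 17]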
We present a new framework for designing worst-case to average-case reductions. For a large class of problems, it provides an explicit transformation of algorithms running in time $T$ that are only correct on a small (subconstant) fraction of their inputs into algorithms running in time $\widetilde{O}(T)$ that are correct on ...
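The flavor of such reductions, in the simplest classical setting (this is the textbook random self-reduction for matrix multiplication over a finite field, which needs success rate above 3/4; the framework in the abstract is far more general and tolerates subconstant success rates):

    # Classical random self-reduction for matrix multiplication over F_p.
    # If avg_mm is correct on a 1 - delta fraction of uniformly random input
    # pairs, each of the 4 calls below is individually uniform, so all 4 are
    # correct with probability >= 1 - 4*delta (union bound), using the
    # identity AB = (A+R)(B+S) - (A+R)S - R(B+S) + RS.
    import random

    P = 101  # small prime modulus, for illustration

    def rand_matrix(n):
        return [[random.randrange(P) for _ in range(n)] for _ in range(n)]

    def madd(X, Y):
        return [[(a + b) % P for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def msub(X, Y):
        return [[(a - b) % P for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def worst_case_mm(A, B, avg_mm):
        n = len(A)
        R, S = rand_matrix(n), rand_matrix(n)
        T1 = avg_mm(madd(A, R), madd(B, S))
        T2 = avg_mm(madd(A, R), S)
        T3 = avg_mm(R, madd(B, S))
        T4 = avg_mm(R, S)
        return madd(msub(msub(T1, T2), T3), T4)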
We study the black-box function inversion problem, which is the problem of finding $x \in [N]$ such that $f(x) = y$, given as input some challenge point $y$ in the image of a function $f : [N] \to [N]$, using $T$ oracle queries to $f$ and preprocessed advice $\sigma \in ...
We study the problem of function inversion with preprocessing where, given a function $f : [N] \to [N]$ and a point $y$ in its image, the goal is to find an $x$ such that $f(x) = y$ using at most $T$ oracle queries to $f$ and $S$ bits of preprocessed ...
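One trivial endpoint of the $(S, T)$ tradeoff studied in the two abstracts above, as a sketch; Hellman-style constructions and the lower bounds these papers concern target the nontrivial middle of the curve (e.g. $S = T = \tilde{O}(N^{2/3})$ for random functions):

    # Trivial extreme of the function-inversion tradeoff: S = O(N log N)
    # bits of advice (a full inverse table) buys T = 0 online queries to f.
    def preprocess(f, N):
        # advice: for each image point y, some preimage x with f(x) = y
        table = {}
        for x in range(N):
            table.setdefault(f(x), x)
        return table

    def invert(table, y):
        return table.get(y)   # None iff y is not in the image of f

    # usage: f = lambda x: (7 * x + 3) % 64
    # table = preprocess(f, 64)
    # invert(table, f(5)) returns some preimage of f(5) (not necessarily 5)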