Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area from both the algorithmic and the hardness perspectives, with emphasis on new techniques and potential future research directions.
In studying the expressiveness of neural networks, an important question is whether there are functions which can only be approximated by sufficiently deep networks, assuming their size is bounded. However, for constant depths, existing results are limited to depths $2$ and $3$, and achieving results for higher depths has been ...
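For orientation, the kind of depth-separation statement alluded to above can be phrased as follows; this is a standard formulation in the spirit of the known depth-$2$ versus depth-$3$ separations, stated here as background rather than recovered from the truncated abstract:
$$\exists\, f:\mathbb{R}^d \to \mathbb{R} \text{ computable by a depth-}3 \text{ network of width } \mathrm{poly}(d) \ \text{ such that } \ \inf_{g \in \mathcal{N}_2(W)} \|f-g\|_{L_2(\mu)} \ \ge\ c \ \text{ unless } \ W \ge 2^{\Omega(d)},$$
where $\mathcal{N}_2(W)$ denotes the class of depth-$2$ networks of width at most $W$, $\mu$ is a suitable probability measure on $\mathbb{R}^d$, and $c > 0$ is an absolute constant.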
In a seminal work, Kopparty et al. (J. ACM 2017) constructed asymptotically good $n$-bit locally decodable codes (LDC) with $2^{\widetilde{O}(\sqrt{\log{n}})}$ queries. A key ingredient in their construction is a distance amplification procedure by Alon et al. (FOCS 1995) which amplifies the distance $\delta$ of a code to a constant at ...
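As background for the above (a standard definition, stated here as an assumption rather than recovered from the truncated abstract), a $q$-query locally decodable code $C:\{0,1\}^k \to \{0,1\}^n$ with decoding radius $\delta$ admits a randomized decoder $D$ that reads at most $q$ coordinates of a corrupted word and satisfies
$$\Pr\big[\, D^{w}(i) = x_i \,\big] \ \ge\ \tfrac{2}{3} \qquad \text{for every } i \in [k] \text{ and every } w \in \{0,1\}^n \text{ with } \mathrm{dist}(w, C(x)) \le \delta n.$$
In this vocabulary, distance amplification converts a code whose distance $\delta$ is sub-constant into one with constant distance, at a modest cost in rate and query complexity.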