All reports by author Ninad Rajgopal:

TR23-118 | 17th August 2023

Hugo Aaronson, Tom Gur, Ninad Rajgopal, Ron Rothblum

#### Distribution-Free Proofs of Proximity

Revisions: 2

TR21-173 | 5th December 2021

Ninad Rajgopal, Rahul Santhanam

#### On the Structure of Learnability beyond P/poly

TR19-168 | 20th November 2019

Igor Carboni Oliveira, Lijie Chen, Shuichi Hirahara, Ján Pich, Ninad Rajgopal, Rahul Santhanam

#### Beyond Natural Proofs: Hardness Magnification and Locality

Hugo Aaronson, Tom Gur, Ninad Rajgopal, Ron Rothblum

Motivated by the fact that input distributions are often unknown in advance, distribution-free property testing considers a setting in which the algorithmic task is to accept functions $f : [n] \to \{0,1\}$ having a certain property $\Pi$ and reject functions that are $\varepsilon$-far from $\Pi$, where the distance is measured ...
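For context, the distance in the distribution-free model is measured with respect to an unknown distribution $D$ over the domain. The abstract is truncated here; the following is the standard formulation of distribution-free distance, not a quotation from the paper:

```latex
% Distribution-free distance of f from the property \Pi under an
% (unknown) distribution D over the domain:
\[
  d_D(f, \Pi) \;=\; \min_{g \in \Pi} \;\Pr_{x \sim D}\bigl[\, f(x) \neq g(x) \,\bigr],
\]
% and f is said to be \varepsilon-far from \Pi (under D) when
\[
  d_D(f, \Pi) \;>\; \varepsilon .
\]
```

Taking $D$ to be the uniform distribution recovers the usual property-testing notion of distance.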

Ninad Rajgopal, Rahul Santhanam

Motivated by the goal of showing stronger structural results about the complexity of learning, we study the learnability of strong concept classes beyond P/poly, such as PSPACE/poly and EXP/poly. We show the following:

1. (Unconditional Lower Bounds for Learning) Building on [KKO13], we prove unconditionally that BPE/poly cannot be weakly ...

Igor Carboni Oliveira, Lijie Chen, Shuichi Hirahara, Ján Pich, Ninad Rajgopal, Rahul Santhanam

Hardness magnification reduces major complexity separations (such as $EXP \not\subseteq NC^1$) to proving lower bounds for some natural problem $Q$ against weak circuit models. Several recent works [OS18, MMW19, CT19, OPS19, CMMW19, Oli19, CJW19a] have established results of this form. In the most intriguing cases, the required lower bound is ...
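Schematically, a hardness magnification theorem has the following shape (our paraphrase of the general template, with $\mathcal{C}$ a weak circuit class and $\varepsilon > 0$ a small constant; this is not a statement taken from the paper):

```latex
% Template of a magnification result: a slightly-superlinear lower bound
% for a specific natural problem Q against a weak class \mathcal{C}
% "magnifies" into a major separation.
\[
  Q \notin \mathcal{C}\text{-}\mathsf{SIZE}\bigl[N^{1+\varepsilon}\bigr]
  \;\;\Longrightarrow\;\;
  \mathsf{EXP} \not\subseteq \mathsf{NC}^1 .
\]
```

The intrigue is that lower bounds of the form $N^{1+\varepsilon}$ against weak models are already known for related problems, yet the barrier results studied in this line of work explain why the specific required bounds remain open.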