How can we trust the correctness of a learned model on a particular input of interest? Model accuracy is typically measured *on average* over a distribution of inputs, giving no guarantee for any fixed input. This paper proposes a theoretically-founded solution to this problem: to train *Self-Proving models* that prove ...
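One plausible formalization of this per-input guarantee, as a minimal sketch (the function $f$, verifier $V$, model-as-prover $P_\theta$, input distribution $\mu$, and parameters $\epsilon, \beta$ are assumed notation, not taken from the truncated abstract):
\[
\forall x,\ \forall y \neq f(x),\ \forall \tilde P:\quad \Pr\big[\langle \tilde P, V\rangle(x, y) = \mathsf{accept}\big] \le \epsilon \qquad \text{(soundness of $V$)},
\]
\[
\Pr_{x \sim \mu}\big[\langle P_\theta, V\rangle\big(x, P_\theta(x)\big) = \mathsf{accept}\big] \ge \beta \qquad \text{(verifiability of $P_\theta$)}.
\]
Under these two conditions, acceptance by $V$ on a fixed input certifies the correctness of that particular output except with probability $\epsilon$, which is precisely the per-input guarantee that average accuracy cannot provide.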
We introduce a variant of PCPs that we refer to as *rectangular* PCPs, wherein proofs are thought of as square matrices, and the random coins used by the verifier can be partitioned into two disjoint sets, one determining the *row* of each query and the other determining the *column*.
We ...
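A hedged sketch of the rectangular query structure described above (the matrix $\Pi$, alphabet $\Sigma$, and index functions $f_i, g_i$ are illustrative names, not notation from the abstract):
\[
\Pi \in \Sigma^{n \times n}, \qquad r = (r_{\mathrm{row}},\, r_{\mathrm{col}}),
\]
\[
\text{the } i\text{-th query reads } \Pi\big[f_i(r_{\mathrm{row}}),\, g_i(r_{\mathrm{col}})\big],
\]
so the row index of every query depends only on $r_{\mathrm{row}}$ and the column index only on $r_{\mathrm{col}}$.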
Probabilistically checkable proofs (PCPs) can be verified based on only a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof); a formal sketch of these two conditions follows the list below. We consider two possible features of PCPs:
- ...
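For concreteness, the completeness and soundness conditions in the first sentence of the entry above can be sketched as follows, for a language $L$ and a verifier $V$ with oracle access to an alleged proof $\pi$ (the soundness error $1/2$ is the conventional choice, assumed here):
\[
x \in L \;\Longrightarrow\; \exists \pi:\ \Pr_r\big[V^{\pi}(x; r) = \mathsf{accept}\big] = 1,
\]
\[
x \notin L \;\Longrightarrow\; \forall \pi:\ \Pr_r\big[V^{\pi}(x; r) = \mathsf{accept}\big] \le \tfrac{1}{2},
\]
with $V$ reading only $O(1)$ positions of $\pi$.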