How can we trust results computed by a third party, or the integrity of data stored by such a party? This is a classic question in systems security, and it is particularly relevant in the context of cloud computing.
Various solutions have been proposed that make assumptions about the class of computations, the failure modes of the machine performing the computation, and so on. However, deep results in theoretical computer science, notably interactive proofs and probabilistically checkable proofs (PCPs) coupled with cryptographic commitments, tell us that a fully general solution exists that makes no assumptions about the third party: the local computer can check the correctness of a remotely executed computation by inspecting a proof returned by the third party. The rub is practicality: if implemented naively, the theory would be preposterously expensive (e.g., trillions of CPU-years or more to verify simple computations).
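To give a toy sense of what "checking a result faster than recomputing it" can look like, here is a minimal sketch of Freivalds' classic randomized check for an outsourced matrix product. This example is ours, not from the survey, and it is special-purpose spot-checking rather than the general-purpose proof machinery the paper describes; it only illustrates the flavor of verification without re-execution.

```python
import numpy as np

def freivalds_verify(A, B, C, trials=20):
    """Probabilistically check the claim A @ B == C in O(n^2) time per trial,
    versus O(n^3) to recompute the product from scratch.

    If the claim is false, each trial catches it with probability >= 1/2,
    so `trials` independent repetitions give error probability <= 2**-trials.
    """
    n = C.shape[1]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=(n, 1))   # random 0/1 challenge vector
        # Compare A @ (B @ r) with C @ r: three matrix-vector products, no n^3 work.
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                            # caught an incorrect result
    return True                                     # accept (a false claim rarely slips past)

# Example: an untrusted worker claims C = A @ B; the client spot-checks the claim.
A = np.random.randint(0, 10, (500, 500))
B = np.random.randint(0, 10, (500, 500))
C = A @ B                                           # honest worker
assert freivalds_verify(A, B, C)

C_bad = C.copy()
C_bad[0, 0] += 1                                    # cheating worker corrupts one entry
assert not freivalds_verify(A, B, C_bad)
```

The general-purpose systems surveyed in the paper achieve something far stronger: a proof that convinces the verifier of the correctness of an arbitrary computation, not just a hand-crafted check for one specific function.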
Over the last several years, a number of projects have reduced this theory to near-practice in the context of implemented systems; we call this field _proof-based verifiable computation_. The pace of progress has been rapid, and there have been many exciting developments. This paper covers the high-level problem, the theory that solves the problem in principle, the work required to bring this theory to near-practicality, the various projects in the area, and open questions. Many of these questions cut across multiple sub-disciplines of computer science: complexity theory, cryptography, systems, parallel programming, and programming languages.
This is an expanded version of an article that will appear in Communications of the ACM.
[Note: this survey, which is about applying complexity-theoretic and cryptographic machinery, is written for a general computer science audience. However, we feel that it may be of interest to the ECCC community, so we are posting it here.]