In interactive coding, Alice and Bob wish to compute some function $f$ of their individual private inputs $x$ and $y$. They do so by engaging in an interactive protocol to jointly compute $f(x,y)$. The goal is to do this in an error-resilient way: even if some fraction of the transmissions is adversarially corrupted, both parties should still learn $f(x,y)$.
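As a rough formalization (the precise accounting of rounds and corruptions is model-dependent and is made formal in the relevant sections), a protocol transmitting $n$ symbols is resilient to an $\varepsilon$ fraction of errors if
\[
  \forall\, x, y \ \ \forall\, \text{adversaries corrupting at most } \lfloor \varepsilon n \rfloor \text{ of the transmitted symbols}: \quad \text{both parties output } f(x,y).
\]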
Typically, the error-resilient protocols constructed by interactive coding schemes are non-adaptive; that is, both the length of the protocol and the speaker in each round are fixed in advance. The maximal error resilience attainable by non-adaptive schemes is now well understood. To circumvent known barriers and achieve higher error resilience, the work of Agrawal, Gelles, and Sahai (ISIT 2016) introduced the notion of adaptive schemes to interactive coding, in which the length of the protocol and the speaker order are no longer necessarily fixed.
In this paper, we study the power of \emph{adaptive termination} in the context of the error resilience of interactive coding schemes. In other words, what is the power of schemes in which Alice and Bob are allowed to disengage from the protocol early? We study this question in two settings, both for the task of message exchange, in which the goal is for each party to learn the other party's input.
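Formally, message exchange is the special case in which the parties compute the identity function on their joint input,
\[
  f(x,y) \;=\; (x,y),
\]
so that, in particular, Alice must learn $y$ and Bob must learn $x$.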
The first setting is the full termination model, where Alice and Bob's order of speaking is fixed but either party may terminate at any point. Errors are counted as a fraction of the total number of rounds until the second party has terminated. This is a strengthening of a model proposed by Agrawal, Gelles, and Sahai that counts errors relative to the last round in which a party speaks: our model disallows using silence to communicate information for free. We construct a protocol achieving $\approx 0.293$ error resilience in this model and show an upper bound of $0.358$. We also demonstrate that in the weaker model, one can use silence in a key way to communicate information and obtain a protocol resilient to a $\approx 0.3625$ fraction of errors, improving upon Agrawal, Gelles, and Sahai's construction, which achieves $\frac13$ error resilience.
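To illustrate the error accounting (a sketch; the formal definitions appear in the body), suppose Alice terminates after round $T_A$, Bob terminates after round $T_B$, and the adversary corrupts $e$ of the transmitted symbols. In the full termination model the relevant error fraction is
\[
  \frac{e}{\max(T_A, T_B)},
\]
i.e., errors are normalized by the total number of rounds until the second party has terminated, whereas the weaker model normalizes by the last round in which one of the parties actually speaks.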
We also consider communication over channels with feedback. In this setting, Alice and Bob both learn at every point what has been received by both parties. This allows them to use the feedback to agree on both a speaking order and a termination point for the protocol. In this model, we construct a protocol resilient to a $\frac12$ fraction of errors using just a ternary alphabet, matching a natural upper bound of $\frac12$. In the case of a binary alphabet, we extend the upper bound of $\frac13$ known for protocols with an adaptive speaking order to protocols that have adaptive termination as well.
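As a sketch of the feedback guarantee assumed here (the exact channel model is specified in the body), if in round $i$ the current speaker sends a symbol $\sigma_i$ and the channel delivers a possibly corrupted symbol $\tilde{\sigma}_i$, then
\[
  \text{after round } i, \text{ both Alice and Bob know } \tilde{\sigma}_1, \dots, \tilde{\sigma}_i,
\]
so in particular the speaker can detect whether its own transmission was corrupted.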