We introduce a simple model illustrating the role of context in communication and the challenge posed by uncertainty of knowledge of context. We consider a variant of distributional communication complexity where Alice gets some information $x$ and Bob gets $y$, where $(x,y)$ is drawn from a known distribution, and Bob wishes to compute some function $g(x,y)$ (with high probability over $(x,y)$). In our variant, Alice does not know $g$, but only knows some function $f$ which is an approximation of $g$. Thus, the function being computed forms the context for the communication, and knowing it imperfectly models (mild) uncertainty in this context.
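In symbols, one minimal way to formalize this setup (the distribution $\mu$ and the parameters $\epsilon$ and $\delta$ below are illustrative notation, not fixed by the informal description above) is: Bob must output $g(x,y)$ with probability at least $1-\epsilon$ over $(x,y) \sim \mu$, while Alice is only guaranteed that her function $f$ is close to $g$ in the sense that
\[
\delta_\mu(f,g) \;:=\; \Pr_{(x,y)\sim\mu}\bigl[f(x,y) \neq g(x,y)\bigr] \;\le\; \delta
\]
for some small $\delta$.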
A naive solution would be for Alice and Bob to first agree on some common function $h$ that is close to both $f$ and $g$ and then use a protocol for $h$ to compute $h(x,y)$. We show that any such agreement leads to a large overhead in communication, thus ruling out such a universal solution.
In contrast, we show that if $g$ has a one-way communication protocol with complexity $k$ in the standard setting, then it has a communication protocol with complexity $O(k \cdot (1+I))$ in the uncertain setting, where $I$ denotes the mutual information between $x$ and $y$. In the particular case where the input distribution is a product distribution, so that $I = 0$, the protocol in the uncertain setting incurs only a constant-factor blow-up in communication and error.
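Schematically, writing $I = I(x;y)$ for the mutual information between the inputs, and using $\mathrm{owCC}(g)$ and $\mathrm{CC}^{\mathrm{unc}}(g)$ as shorthand introduced here for the one-way complexities of $g$ in the standard and uncertain settings, respectively, the upper bound reads
\[
\mathrm{owCC}(g) \;\le\; k \;\Longrightarrow\; \mathrm{CC}^{\mathrm{unc}}(g) \;=\; O\bigl(k \cdot (1+I)\bigr).
\]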
Furthermore, we show that the dependence on the mutual information $I$ is necessary. Namely, we construct a class of functions, along with a non-product distribution over $(x,y)$, for which the communication complexity is a single bit in the standard setting but $\Omega(\sqrt{n})$ bits in the uncertain setting, where $n$ is the length of the inputs. We leave the possibility of a two-way version of our upper bound as an open question.