We set out to study the impact of access to correlated instances on the fine-grained complexity of polynomial-time problems, which have notoriously resisted improvement.
In particular, we show how to use a logarithmic number of auxiliary correlated instances to obtain $o(n^2)$ time algorithms for the longest common subsequence (LCS) problem and the minimum edit distance (EDIT) problem. For the problem of the longest common subsequence of $k$ sequences, we show an $O(nk\log n)$ time algorithm with access to a logarithmic number of auxiliary correlated instances. Our results hold for a worst-case choice of the primary instance, whereas the auxiliary correlated instances are chosen according to a natural correlation model between instances.
Previously, it has been shown that any improvement over $O(n^2)$ for the worst-case complexity of the longest common subsequence and minimum edit distance problems would imply algorithms radically better than those currently known for a host of long-studied polynomial-time problems, such as finding a pair of orthogonal vectors, and would also imply that the Strong Exponential Time Hypothesis is false. The best known algorithm for the multiple-sequence longest common subsequence problem is a variant of dynamic programming that requires $O(n^k)$ worst-case runtime.
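For concreteness, the classical dynamic program referred to above fills a table indexed by prefixes of the input sequences; for two sequences this gives the familiar $O(n^2)$ algorithm, and generalizing the table to $k$ sequences yields the $O(n^k)$ bound. A minimal sketch of the two-sequence case (standard textbook material, not the correlated-instance algorithm of this work):

```python
def lcs_length(a, b):
    # Classical O(|a|*|b|) dynamic program: dp[i][j] holds the length
    # of the longest common subsequence of the prefixes a[:i] and b[:j].
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                # Last symbols match: extend the LCS of the shorter prefixes.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise drop the last symbol of one sequence.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

print(lcs_length("ABCBDAB", "BDCABA"))  # -> 4 (e.g. "BCBA")
```

For $k$ sequences the same recurrence runs over a $k$-dimensional table of size $n^k$, which is the source of the prohibitive worst-case runtime mentioned above.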
We note that sequence alignment is often used to identify conserved sequence regions across a group of DNA, RNA, or protein sequences hypothesized to be evolutionarily related, or as an aid in establishing evolutionary relationships by constructing phylogenetic trees, but it is notoriously computationally prohibitive for $k>3$. An intriguing question, which served as an inspiration for our work, is to find correlation models that coincide with evolutionary models and other relationships, and for which our approach to multiple sequence alignment gives provable guarantees.