Abstract: We revisit the k-mismatch problem in the streaming model on a pattern of length m and a streaming text of length n, both over a size-σ alphabet. The current state-of-the-art algorithm for the streaming k-mismatch problem, by Clifford et al. [SODA 2019], uses Õ(k) space and Õ(√k) worst-case time per character. The space complexity is known to be (unconditionally) optimal, and the worst-case time per character matches a conditional lower bound. However, there is a gap between the total time cost of the algorithm, which is Õ(n√k), and the fastest known offline algorithm, which costs Õ(n + min(nk/√m, σn)) time. Moreover, it is not known whether improvements over the Õ(n√k) total time are possible when using more than O(k) space. We address these gaps by designing a randomized streaming algorithm for the k-mismatch problem that, given an integer parameter k ≤ s ≤ m, uses Õ(s) space and costs Õ(n + min(nk²/m, nk/√s, σnm/s)) total time. For s = m, the total runtime becomes Õ(n + min(nk/√m, σn)), which matches the time cost of the fastest offline algorithm. Moreover, the worst-case time cost per character is still Õ(√k).
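To see why the s = m instantiation recovers the offline bound, one can substitute directly into the total-time expression; the following short case analysis is a sketch of ours, not taken from the abstract:

```latex
% Substituting s = m into \tilde{O}\bigl(n + \min(nk^2/m,\; nk/\sqrt{s},\; \sigma n m/s)\bigr):
\tilde{O}\!\left(n + \min\!\left(\frac{nk^2}{m},\ \frac{nk}{\sqrt{m}},\ \sigma n\right)\right)
% Case 1: if k \le \sqrt{m}, then nk^2/m \le n and nk/\sqrt{m} \le n,
%         so both expressions collapse to \tilde{O}(n).
% Case 2: if k > \sqrt{m}, then nk/\sqrt{m} < nk^2/m, so the nk^2/m term
%         never attains the minimum and can be dropped.
% In both cases the bound equals \tilde{O}\bigl(n + \min(nk/\sqrt{m},\; \sigma n)\bigr),
% the running time of the fastest known offline algorithm.
```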