The docs say:

O(m+n)
[...]
In (unlikely) bad cases, this function's time complexity degrades towards O(n*m).
How unlikely are these bad cases? In any case, the stated O(m+n) is misleading if the worst case is actually proportional to n*m.
I think we may want to spell this out further.
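To make the gap between the average and worst case concrete, here is a small instrumented Horspool-style matcher (a Python sketch for illustration only, not this library's actual code; the function name and counting are mine). A periodic needle such as `"b" + "a" * 7` scanned over an all-`'a'` haystack forces roughly one full needle comparison per single-character shift, i.e. on the order of n*m character comparisons, while typical text stays near n/m:

```python
def bmh_comparisons(haystack: str, needle: str) -> int:
    """Count character comparisons made by a Horspool-style search
    (right-to-left window comparison). Illustration only."""
    n, m = len(haystack), len(needle)
    # Bad-character shift table built from all but the needle's last char.
    shift = {needle[i]: m - 1 - i for i in range(m - 1)}
    comparisons = 0
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0:
            comparisons += 1
            if haystack[i + j] != needle[j]:
                break
            j -= 1
        if j < 0:
            break  # first match found; stop here for simplicity
        i += shift.get(haystack[i + m - 1], m)
    return comparisons

# Typical text: one mismatch per window, large shifts -> roughly n/m work.
typical = bmh_comparisons("abcdefgh" * 100, "xyz")
# Adversarial, periodic input: ~m comparisons per 1-char shift -> ~n*m work.
adversarial = bmh_comparisons("a" * 800, "b" + "a" * 7)
```

So the bad cases are not pathological in any deep sense: any highly repetitive input (long runs of one character, periodic data) triggers them.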
These bad cases arise in situations similar to those where BMH (Boyer-Moore-Horspool) would do badly. The way replace (and indeed, all string matching provided by this library) works is essentially a BMH variant, designed to be zero-allocating, based on this here.
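For readers unfamiliar with the technique being referenced: the general shape of a Horspool search is to precompute one skip table and then scan with only integer state, so the scan itself allocates nothing. A Python sketch of the general technique (not the library's actual implementation) might look like:

```python
def horspool_find(haystack: bytes, needle: bytes) -> int:
    """Return the index of the first occurrence of needle in haystack,
    or -1 if absent. A Boyer-Moore-Horspool sketch: once the 256-entry
    skip table is built, the scan loop allocates nothing."""
    n, m = len(haystack), len(needle)
    if m == 0:
        return 0
    if m > n:
        return -1
    # For each byte value, how far the window may slide when that byte
    # is aligned with the needle's last position.
    skip = [m] * 256
    for i in range(m - 1):
        skip[needle[i]] = m - 1 - i
    i = 0
    while i <= n - m:
        # Compare the window right-to-left against the needle.
        j = m - 1
        while j >= 0 and haystack[i + j] == needle[j]:
            j -= 1
        if j < 0:
            return i
        i += skip[haystack[i + m - 1]]
    return -1
```

On most inputs the skip table lets the window jump by close to m positions at a time, which is where the advertised O(m+n) behaviour comes from; the degenerate repetitive inputs discussed above collapse every jump to 1.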
I would say that this should go beyond spelling it out further: the choice of implementation is poor. I've demonstrated this previously, and while the method I use can't be ported 1-for-1 (due to the UTF-16 encoding), it can probably be made considerably better than what we have right now. Furthermore, the current implementation of all string matching functions is needlessly partial, and should be fixed not to be. I've already done this in text-ascii, without any loss of performance.