It's too good to pass up. This time, we avoid quadratic behavior
with a simple work-around: we limit the amount of reverse searching
we do after having found a literal match. If the reverse search ends
at the beginning of its search text (whether a match or not), then we
stop the reverse suffix optimization and fall back to the standard forward
search.
This reverts commit 50d991eaf53e6c21b8101c82e01ab6cf36fe687c.
# Conflicts:
# src/exec.rs
TL;DR: As implemented, the optimization's worst-case time complexity was quadratic.
Given the regex `.*z.*abcd` and the haystack `abcdabcdabcdabcd...`,
the reverse suffix literal optimization would find the first occurrence
of `abcd`, then try to match `.*z.*` in reverse. When the match fails,
it starts searching for `abcd` again after the last occurrence. On the
second match (immediately after the first match), it has to run the DFA
backwards all over again. This is repeated for every match of `abcd`.
In essence, it will have run for quadratic time before reporting that the
regex cannot match.
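To make the cost concrete, here is a toy model (not the crate's actual code; `find` and `naive_cost` are illustrative stand-ins) that counts how many bytes the reverse scan visits when every suffix match forces a scan back to the start of the haystack:

```rust
// Toy model of the pathological case: the "reverse DFA" is simulated by
// charging one unit per byte scanned backwards from each suffix match.
// Since `.*z.*` can never match (there is no 'z' in the haystack), every
// verification fails after scanning all the way to offset 0.

fn find(haystack: &[u8], needle: &[u8], from: usize) -> Option<usize> {
    haystack[from..]
        .windows(needle.len())
        .position(|w| w == needle)
        .map(|i| i + from)
}

/// Total bytes visited by reverse scans across all suffix matches.
fn naive_cost(haystack: &[u8]) -> usize {
    let mut visited = 0;
    let mut at = 0;
    while let Some(i) = find(haystack, b"abcd", at) {
        visited += i + 4; // reverse scan from the match end back to offset 0
        at = i + 1;
    }
    visited
}

fn main() {
    // 1000 copies of "abcd": ~n/4 matches, each scanning O(n) bytes.
    let hay: Vec<u8> = b"abcd".iter().cycle().take(4000).copied().collect();
    println!("bytes visited: {}", naive_cost(&hay)); // grows quadratically in hay.len()
}
```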
There is hope. If we can know that the "reverse" search, in this case,
`.*z.*` *cannot* match the suffix literal `abcd`, then we can cap the
haystack that is searched in reverse so that each byte is visited by the
DFA at most once. In particular, we can cause the DFA to only search as
far as the previous match of the suffix literal.
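A toy sketch of that cap (illustrative names, not the crate's code): each reverse scan stops at the end of the previous suffix-literal match, so the total reverse-scan work is linear in the haystack length:

```rust
// Toy model of the capped reverse scan: `floor` records where the previous
// suffix-literal match ended, and each reverse verification is charged only
// for the bytes between `floor` and the current match end. No byte is
// scanned in reverse more than once.

fn find(haystack: &[u8], needle: &[u8], from: usize) -> Option<usize> {
    haystack[from..]
        .windows(needle.len())
        .position(|w| w == needle)
        .map(|i| i + from)
}

/// Total bytes visited by reverse scans when each scan is capped at the
/// previous suffix-literal match.
fn capped_cost(haystack: &[u8]) -> usize {
    let (mut visited, mut at, mut floor) = (0, 0, 0);
    while let Some(i) = find(haystack, b"abcd", at) {
        let end = i + 4;
        visited += end - floor; // reverse scan only covers [floor, end)
        floor = end;
        at = i + 1;
    }
    visited
}

fn main() {
    let hay: Vec<u8> = b"abcd".iter().cycle().take(4000).copied().collect();
    println!("bytes visited: {}", capped_cost(&hay)); // linear: at most hay.len()
}
```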
This is somewhat close to a generalization of Boyer-Moore in the sense
that we're utilizing information of mismatches and the nature of the needle
to limit how much work search has to do.
The principal change in this commit is a complete rewrite of how
literals are detected from a regular expression. In particular, we now
traverse the abstract syntax to discover literals instead of the
compiled byte code. This permits more tunable control over which and
how many literals are extracted, and is now exposed in the
`regex-syntax` crate so that others can benefit from it.
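As a rough illustration of the AST-based approach (this is a toy `Ast` type and traversal, not `regex-syntax`'s actual API), prefix literals can be gathered by walking the syntax tree and giving up on constructs like unbounded repetition:

```rust
// A toy regex AST. The real regex-syntax types are much richer; this is
// only meant to show the shape of a structural literal extractor.
enum Ast {
    Lit(Vec<u8>),
    Concat(Vec<Ast>),
    Alt(Vec<Ast>),
    Star(Box<Ast>),
}

/// Collect candidate prefix literals. A fuller extractor would also track
/// whether each literal is "complete" and cross-product complete literals
/// with the rest of a concatenation; here we keep only the simplest cases.
fn prefixes(ast: &Ast) -> Vec<Vec<u8>> {
    match ast {
        Ast::Lit(bytes) => vec![bytes.clone()],
        Ast::Alt(alts) => alts.iter().flat_map(prefixes).collect(),
        // Simplification: only the first sub-expression contributes.
        Ast::Concat(exprs) => exprs.first().map(prefixes).unwrap_or_default(),
        Ast::Star(_) => vec![], // unbounded repetition: no usable prefix
    }
}

fn main() {
    // Prefixes of (foo|bar)z — both branches of the alternation contribute.
    let ast = Ast::Concat(vec![
        Ast::Alt(vec![Ast::Lit(b"foo".to_vec()), Ast::Lit(b"bar".to_vec())]),
        Ast::Lit(b"z".to_vec()),
    ]);
    for p in prefixes(&ast) {
        println!("{}", String::from_utf8_lossy(&p));
    }
    // `a*` yields no usable prefix.
    assert!(prefixes(&Ast::Star(Box::new(Ast::Lit(b"a".to_vec())))).is_empty());
}
```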
Other changes in this commit:
* The Boyer-Moore algorithm was rewritten to use my own concoction based
on frequency analysis. We end up regressing slightly on a couple of
benchmarks because of this, but gain on others and should generally be
faster across a broader range of cases. (Principally because we try to
run `memchr` on the rarest byte in a literal.) This should also greatly
improve handling of non-Western text.
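The rarest-byte idea can be sketched as follows (a from-scratch illustration with a made-up frequency table; the crate's real tables and `memchr` dispatch are more involved, and here the byte scan is a plain `position` loop standing in for `memchr`):

```rust
// Made-up background frequencies: every byte is "common" except a few
// letters we declare rare. Real tables are derived from corpus statistics.
fn ascii_freq() -> [u8; 256] {
    let mut f = [128u8; 256];
    for &b in b"zqxj" {
        f[b as usize] = 1;
    }
    f
}

/// Pick the (offset, byte) in the needle with the lowest background
/// frequency. The needle must be non-empty.
fn rarest_byte(needle: &[u8], freq: &[u8; 256]) -> (usize, u8) {
    needle
        .iter()
        .enumerate()
        .min_by_key(|&(_, &b)| freq[b as usize])
        .map(|(i, &b)| (i, b))
        .unwrap()
}

/// Substring search: scan for the rarest needle byte first, then verify
/// the full needle around each candidate hit.
fn find(haystack: &[u8], needle: &[u8], freq: &[u8; 256]) -> Option<usize> {
    let (off, byte) = rarest_byte(needle, freq);
    let mut at = off;
    while let Some(i) = haystack.get(at..)?.iter().position(|&b| b == byte) {
        let start = at + i - off;
        if haystack[start..].starts_with(needle) {
            return Some(start);
        }
        at = at + i + 1;
    }
    None
}

fn main() {
    // 'z' is the rarest byte of "fuzzy", so the scan hops between 'z's
    // instead of stopping at every 'f'.
    let pos = find(b"hello fuzzy world", b"fuzzy", &ascii_freq());
    println!("{:?}", pos); // Some(6)
}
```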
* A "reverse suffix" literal optimization was added. That is, if suffix
literals exist but no prefix literals exist, then we can quickly scan
for suffix matches and then run the DFA in reverse to find matches.
(I'm not aware of any other regex engine that does this.)
* The mutex-based pool has been replaced with a spinlock-based pool
(from the new `mempool` crate). This reduces some amount of constant
overhead and improves several benchmarks that either search short
haystacks or find many matches in long haystacks.
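The gist can be sketched like so (a minimal from-scratch pool guarded by an atomic flag; the real `mempool` crate is more careful and more featureful). For a very short critical section, spinning on an atomic avoids the constant overhead of a mutex's slow path:

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};

/// A toy object pool guarded by a spinlock. `new` is called to create a
/// value when the pool is empty.
struct Pool<T> {
    locked: AtomicBool,
    stack: UnsafeCell<Vec<T>>,
    new: fn() -> T,
}

// Safety: `stack` is only touched while `locked` is held.
unsafe impl<T: Send> Sync for Pool<T> {}

impl<T> Pool<T> {
    fn new(new: fn() -> T) -> Pool<T> {
        Pool {
            locked: AtomicBool::new(false),
            stack: UnsafeCell::new(Vec::new()),
            new,
        }
    }

    /// Run `f` with the spinlock held.
    fn with_lock<R>(&self, f: impl FnOnce(&mut Vec<T>) -> R) -> R {
        while self
            .locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
        let r = f(unsafe { &mut *self.stack.get() });
        self.locked.store(false, Ordering::Release);
        r
    }

    /// Pop a pooled value, or create a fresh one if the pool is empty.
    fn get(&self) -> T {
        self.with_lock(|s| s.pop()).unwrap_or_else(|| (self.new)())
    }

    /// Return a value to the pool for reuse.
    fn put(&self, value: T) {
        self.with_lock(|s| s.push(value));
    }
}

fn main() {
    let pool: Pool<Vec<u8>> = Pool::new(Vec::new);
    let mut buf = pool.get(); // pool empty: creates a fresh Vec
    buf.push(b'x');
    buf.clear();
    pool.put(buf); // returned with its capacity intact
    let buf2 = pool.get(); // pops the stored Vec instead of allocating
    println!("reused capacity: {}", buf2.capacity());
}
```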
* Search parameters have been refactored.
* RegexSet can now contain 0 or more regular expressions (previously, it
could only contain 2 or more). The InvalidSet error variant is now
deprecated.
* A bug in computing start states was fixed. Namely, the DFA assumed the
start state was always the first instruction, which is trivially
wrong for an expression like `^☃$`. This bug persisted because it
typically occurred when a literal optimization would otherwise run.
* A new CLI tool, regex-debug, has been added as a non-published
sub-crate. The CLI tool can report various facts about a regular
expression, such as its AST, its compiled byte code or its
detected literals.
Closes #96, #188, #189