This commit sets up the infrastructure for supporting various `unicode`
and `perf` features, which make it possible to reduce binary size, compile
times and the size of the dependency tree.
Most of the work here is in modifying the regex tests to make them
work in concert with the available Unicode features. In cases where
Unicode is irrelevant, we just turn it off. In other cases, we require
the Unicode features to run the tests.
This also introduces a new compiler error: if a Unicode word boundary is
used but the `unicode-perl` feature is disabled, then the regex will fail
to compile. (Because the data necessary to match Unicode word boundaries
isn't available.)
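To make the intent concrete, here is a sketch of how a downstream crate might consume these features. The `unicode-perl` and `perf` names come from the text above; the `std` feature and the exact default set are assumptions about the final feature layout, not something this commit pins down:

```toml
# Hypothetical consumer Cargo.toml: drop default features, then re-enable
# only the Perl-class Unicode data (needed for a Unicode-aware \b) and the
# performance-oriented dependencies.
[dependencies.regex]
version = "*"
default-features = false
features = ["std", "unicode-perl", "perf"]
```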
In other words, `\b` in a `bytes::Regex` can now be used in the DFA.
This leads to a big performance boost:
```
sherlock::word_ending_n 115,465,261 (5 MB/s) 3,038,621 (195 MB/s) -112,426,640 -97.37%
```
Unfortunately, Unicode word boundaries continue to elude the DFA. This
state of affairs is lamentable, but after a lot of thought, I've
concluded there are only two ways to speed up Unicode word boundaries:
1. Come up with a harebrained scheme to add multi-byte look-behind/ahead
to the lazy DFA. (The theory says it's possible. Figuring out how to
do this without combinatorial state explosion is not within my grasp
at the moment.)
2. Build a second lazy DFA with transitions on Unicode codepoints
instead of bytes. (The looming inevitability of this makes me queasy
for a number of reasons.)
To ameliorate this state of affairs, it is now possible to disable
Unicode support in `Regex::new` with `(?-u)`. In other words, one can
now use an ASCII word boundary with `(?-u:\b)`.
Disabling Unicode support does not violate any invariants around UTF-8.
In particular, if the regular expression could lead to a match of
invalid UTF-8, then the parser will return an error. (This only happens
for `Regex::new`. `bytes::Regex::new` still of course allows matching
arbitrary bytes.)
Finally, a new `PERFORMANCE.md` guide was written.
The principal change in this commit is a complete rewrite of how
literals are detected from a regular expression. In particular, we now
traverse the abstract syntax to discover literals instead of the
compiled byte code. This permits more tunable control over which and
how many literals are extracted, and is now exposed in the
`regex-syntax` crate so that others can benefit from it.
Other changes in this commit:
* The Boyer-Moore algorithm was rewritten to use my own concoction based
on frequency analysis. We end up regressing slightly on a couple of
benchmarks because of this, but gain on others and in general should be
faster across a broader range of inputs. (Principally because we try to
run `memchr` on the rarest byte in a literal.) This should also greatly
improve handling of non-Western text.
* A "reverse suffix" literal optimization was added. That is, if suffix
literals exist but no prefix literals exist, then we can quickly scan
for suffix matches and then run the DFA in reverse to find matches.
(I'm not aware of any other regex engine that does this.)
* The mutex-based pool has been replaced with a spinlock-based pool
(from the new `mempool` crate). This reduces some amount of constant
overhead and improves several benchmarks that either search short
haystacks or find many matches in long haystacks.
* Search parameters have been refactored.
* RegexSet can now contain 0 or more regular expressions (previously, it
could only contain 2 or more). The InvalidSet error variant is now
deprecated.
* A bug in computing start states was fixed. Namely, the DFA assumed the
start state was always the first instruction, which is trivially
wrong for an expression like `^☃$`. This bug went unnoticed because, in
the cases where it occurred, a literal optimization would typically run
instead.
* A new CLI tool, regex-debug, has been added as a non-published
sub-crate. The CLI tool can answer various facts about regular
expressions, such as printing its AST, its compiled byte code or its
detected literals.
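The rarest-byte heuristic mentioned above can be sketched in plain Rust. The frequency table here is a stand-in (it just treats common ASCII letters as frequent); the real implementation uses a table derived from corpus analysis and scans with `memchr` rather than a naive loop:

```rust
/// Rank a byte's expected frequency in typical haystacks. This tiny table
/// is a stand-in for the real one, which is built from frequency analysis.
fn rank(b: u8) -> u32 {
    match b {
        b' ' | b'e' | b't' | b'a' | b'o' | b'i' | b'n' => 2,
        b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' => 1,
        _ => 0, // punctuation and high bytes are assumed rare
    }
}

/// Pick the offset of the rarest byte in the needle.
fn rarest_offset(needle: &[u8]) -> usize {
    (0..needle.len())
        .min_by_key(|&i| rank(needle[i]))
        .expect("non-empty needle")
}

/// Substring search that scans (memchr-style) for the rarest needle byte,
/// then verifies the whole needle around each candidate position.
fn find(haystack: &[u8], needle: &[u8]) -> Option<usize> {
    let off = rarest_offset(needle);
    let rare = needle[off];
    for i in off..haystack.len() {
        if haystack[i] == rare {
            let start = i - off;
            if haystack[start..].starts_with(needle) {
                return Some(start);
            }
        }
    }
    None
}

fn main() {
    // 'S' ranks as rarer than the lowercase letters, so we scan for it.
    assert_eq!(rarest_offset(b"Sherlock"), 0);
    assert_eq!(find(b"my dear Sherlock", b"Sherlock"), Some(8));
    assert_eq!(find(b"my dear Watson", b"Sherlock"), None);
}
```

The payoff is that the inner scan spends most of its time skipping over bytes that almost never occur in the haystack, instead of stopping at every occurrence of a common byte like `e`.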
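The "reverse suffix" idea can be illustrated with a toy pattern, `[a-z]+ing`: scan forward for the suffix literal `ing`, then run "in reverse" from each hit to find the match start. In the crate proper, the reverse step is the lazy DFA executed over the reversed program; here it is a hand-rolled backwards scan, and the function name is illustrative:

```rust
/// Toy version of the reverse-suffix optimization for `[a-z]+ing`:
/// find the suffix literal "ing", then scan backwards (standing in
/// for the reverse DFA) to locate the start of the match.
fn find_word_ending_ing(haystack: &[u8]) -> Option<(usize, usize)> {
    let mut at = 0;
    while let Some(i) = haystack[at..]
        .windows(3)
        .position(|w| w == b"ing")
        .map(|p| at + p)
    {
        let end = i + 3;
        // Reverse scan: extend left over `[a-z]` from just before "ing".
        let mut start = i;
        while start > 0 && haystack[start - 1].is_ascii_lowercase() {
            start -= 1;
        }
        if start < i {
            // `[a-z]+` requires at least one letter before the suffix.
            return Some((start, end));
        }
        at = i + 1; // suffix hit didn't yield a match; keep scanning
    }
    None
}

fn main() {
    // Matches "parting" at byte offsets [2, 9).
    assert_eq!(find_word_ending_ing(b"a parting word"), Some((2, 9)));
    assert_eq!(find_word_ending_ing(b"no match here"), None);
}
```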
Closes #96, #188, #189
This commit enables support for compiling regular expressions that can
match on arbitrary byte slices. In particular, we add a new sub-module
called `bytes` that duplicates the API of the top-level module, except
`&str` for subjects is replaced by `&[u8]`. Additionally, Unicode
support in the regular expression is disabled by default but can be
selectively re-enabled with the `u` flag. (Unicode support cannot be
selectively disabled in the standard top-level API.)
Most of the interesting changes occurred in the `regex-syntax` crate,
where the AST now explicitly distinguishes between "ASCII compatible"
expressions and Unicode aware expressions.
This PR makes a few other changes out of convenience:
1. The DFA now knows how to "give up" if it's flushing its cache too
often. When the DFA gives up, either backtracking or the NFA algorithm
takes over, which provides better performance.
2. Benchmarks were added for Oniguruma.
3. The benchmarks in general were overhauled to be defined in one place
by using conditional compilation.
4. The tests have been completely reorganized to make it easier to split
up the tests depending on which regex engine we're using. For example,
we occasionally need to be able to write tests specifically for
`regex::Regex` or specifically for `regex::bytes::Regex`.
5. A bug was fixed where NUL bytes weren't represented correctly in the
byte class optimization for the DFA.
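For context on item 5, the byte class optimization collapses all 256 byte values into equivalence classes so the DFA's transition tables stay small. Below is a minimal sketch of building such a class map, in the spirit of the fix: the map must account for every byte value, including NUL (0x00), not just the ones the program explicitly names. (The function name and representation here are illustrative, not the crate's internals.)

```rust
/// Build a map from each of the 256 byte values to an equivalence class
/// id, given the byte ranges the compiled program discriminates on.
/// Bytes the program never mentions (including NUL) still get a class.
fn byte_classes(ranges: &[(u8, u8)]) -> [u8; 256] {
    // Mark the positions where class membership can change.
    let mut boundary = [false; 256];
    for &(lo, hi) in ranges {
        boundary[lo as usize] = true;
        if hi < 255 {
            boundary[hi as usize + 1] = true;
        }
    }
    // Assign class ids; NUL (index 0) is classified like any other byte.
    let mut classes = [0u8; 256];
    let mut class = 0u8;
    for b in 0..256usize {
        if b > 0 && boundary[b] {
            class += 1;
        }
        classes[b] = class;
    }
    classes
}

fn main() {
    // A program that only discriminates on `a-z` still classifies NUL.
    let classes = byte_classes(&[(b'a', b'z')]);
    assert_eq!(classes[0], classes[b'!' as usize]); // NUL shares a class...
    assert_ne!(classes[0], classes[b'a' as usize]); // ...but not with a-z.
    assert_eq!(classes[b'a' as usize], classes[b'z' as usize]);
}
```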
Closes #85.