This commit does a number of manual fixups to the code after the
previous two commits were done via 'cargo fix' automatically.
It also contains some more automatic 'cargo fix' changes, since I had
forgotten to add 'edition = "2018"' to all sub-crates.
This was using `Vec::extend` to accumulate bytes in a buffer, which
compiles down to less efficient code than `Vec::extend_from_slice`.
However, that method only became available in Rust 1.6, so we do a small
backport to regain the lost performance.
This bug was noticed by @llogiq here: https://github.com/TeXitoi/benchmarksgame-rs/issues/31
In particular, this increases the performance of bytes::Regex two-fold on
that benchmark.
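To make the difference concrete, here is a schematic before/after of the kind of change involved (the function and buffer names are illustrative, not the benchmark's actual code):

```rust
// Schematic shape of the fix described above; names are illustrative.
fn accumulate(buf: &mut Vec<u8>, chunk: &[u8]) {
    // Before: routed through the general iterator-based `Extend` impl,
    // which does not always compile down to a straight memcpy.
    // buf.extend(chunk);

    // After: copies the slice in one shot (stable since Rust 1.6).
    buf.extend_from_slice(chunk);
}

fn main() {
    let mut buf = Vec::new();
    accumulate(&mut buf, b"GGTATT");
    assert_eq!(buf, b"GGTATT".to_vec());
}
```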
A lazy DFA is much faster than executing an NFA because it doesn't
repeat the work of following epsilon transitions over and over.
Instead, it computes states during search and caches them for reuse. We
avoid exponential state blow-up by bounding the cache in size. When the
DFA isn't powerful enough to fulfill the caller's request (e.g., returning
sub-capture locations), it still runs to find the boundaries of the
match and then falls back to NFA execution on the matched region. The
lazy DFA can otherwise execute on every regular expression *except* for
regular expressions that contain word boundary assertions (`\b` or
`\B`). (They are tricky to implement in the lazy DFA because they are
Unicode-aware and therefore require multi-byte look-behind/ahead.)
The implementation in this PR is based on the implementation in Google's
RE2 library.
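To illustrate the caching idea described above, here is a minimal sketch of a lazy DFA transition function. All of the names are made up for illustration, the NFA-stepping and cache-eviction pieces are left abstract, and this is not the crate's (or RE2's) actual implementation:

```rust
use std::collections::HashMap;

// A DFA state is a set of NFA states, kept sorted/deduped so that equal
// sets compare (and hash) equal.
type NfaStateSet = Vec<usize>;
type DfaStateId = usize;

struct LazyDfa {
    states: Vec<NfaStateSet>,                           // id -> NFA state set
    seen: HashMap<NfaStateSet, DfaStateId>,             // NFA state set -> id
    transitions: HashMap<(DfaStateId, u8), DfaStateId>, // filled in lazily
    max_states: usize,                                  // bound on the cache
}

impl LazyDfa {
    /// Return the DFA state reached from `current` on `byte`, computing
    /// and caching it if this transition has never been taken before.
    fn next(&mut self, current: DfaStateId, byte: u8) -> DfaStateId {
        if let Some(&next) = self.transitions.get(&(current, byte)) {
            return next; // cache hit: a pure table lookup, no NFA work at all
        }
        // Cache miss: follow the NFA's byte and epsilon transitions exactly
        // once for this (state, byte) pair, then remember the result.
        let next_set = self.step_nfa(&self.states[current], byte);
        let id = match self.seen.get(&next_set).copied() {
            Some(id) => id,
            None => {
                if self.states.len() >= self.max_states {
                    // Bound memory: wipe the cache and rebuild as we go.
                    // (A real implementation must also re-create the
                    // current state after doing this.)
                    self.clear_cache();
                }
                let id = self.states.len();
                self.states.push(next_set.clone());
                self.seen.insert(next_set, id);
                id
            }
        };
        self.transitions.insert((current, byte), id);
        id
    }

    fn step_nfa(&self, _set: &NfaStateSet, _byte: u8) -> NfaStateSet {
        unimplemented!("epsilon closure + byte transitions over the NFA program")
    }

    fn clear_cache(&mut self) {
        unimplemented!("evict cached states once the size bound is hit")
    }
}

fn main() {
    let dfa = LazyDfa {
        states: vec![vec![0]], // start state: the NFA's start state set
        seen: HashMap::new(),
        transitions: HashMap::new(),
        max_states: 10_000,
    };
    // `dfa.next(0, b'a')` would lazily compute the next state here.
    let _ = dfa;
}
```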
Adding a lazy DFA was a substantial change and required several
modifications:
1. The compiler can now produce both Unicode-based programs (still used by the
NFA engines) and byte-based programs (required by the lazy DFA, but possible
to use in the NFA engines too). In byte-based programs, UTF-8 decoding is
built into the automaton.
2. A new `Exec` type was introduced to implement the logic for compiling
and choosing the right engine to use on each search. (A sketch of this
selection logic follows the list.)
3. Prefix literal detection was rewritten to work on bytes.
4. Benchmarks were overhauled and new ones were added to more carefully
track the impact of various optimizations.
5. A new `HACKING.md` guide has been added that gives a high-level
design overview of this crate.
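To give a feel for item 2, here is a hypothetical sketch of the kind of per-search engine selection an `Exec`-style dispatcher performs. The `Engine` enum, field names, and the simplified rules shown here are assumptions for illustration, not the crate's actual API:

```rust
// Hypothetical engine selection; names are illustrative only.
#[derive(Debug)]
enum Engine {
    /// Fast path: the lazy DFA finds the match bounds on its own.
    LazyDfa,
    /// The lazy DFA finds the bounds, then an NFA engine resolves
    /// sub-captures over just the matched region.
    LazyDfaThenNfa,
    /// Full NFA execution; always applicable, including for `\b`/`\B`.
    Nfa,
}

struct Exec {
    /// Word boundaries are Unicode-aware, so the lazy DFA can't handle them.
    has_word_boundary: bool,
}

impl Exec {
    fn choose(&self, needs_captures: bool) -> Engine {
        if self.has_word_boundary {
            Engine::Nfa
        } else if needs_captures {
            Engine::LazyDfaThenNfa
        } else {
            Engine::LazyDfa
        }
    }
}

fn main() {
    let exec = Exec { has_word_boundary: false };
    println!("{:?}", exec.choose(true));  // LazyDfaThenNfa
    println!("{:?}", exec.choose(false)); // LazyDfa
}
```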
Other changes in this commit include:
1. Protection against stack overflows. All places that once required
recursion have now either acquired a bound or been converted to use an
explicit stack on the heap. (See the sketch after this list.)
2. An updated Aho-Corasick dependency, which includes `memchr2` and
`memchr3` optimizations.
3. New PCRE benchmarks using the Rust `pcre` bindings.
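To make item 1 concrete, here is a generic illustration (not the crate's code) of the "stack on the heap" technique: a recursive traversal over a toy expression tree rewritten as an iterative loop over an explicit `Vec`, so deeply nested input can no longer overflow the call stack:

```rust
// A toy expression tree standing in for a regex AST.
enum Expr {
    Literal(char),
    Concat(Vec<Expr>),
    Repeat(Box<Expr>),
}

// Count literals without recursion: traversal depth is now bounded by heap
// memory rather than by the call stack.
fn count_literals(root: &Expr) -> usize {
    let mut count = 0;
    let mut stack = vec![root]; // explicit stack on the heap
    while let Some(expr) = stack.pop() {
        match expr {
            Expr::Literal(_) => count += 1,
            Expr::Concat(children) => stack.extend(children.iter()),
            Expr::Repeat(child) => stack.push(child),
        }
    }
    count
}

fn main() {
    let e = Expr::Concat(vec![
        Expr::Literal('a'),
        Expr::Repeat(Box::new(Expr::Literal('b'))),
    ]);
    assert_eq!(count_literals(&e), 2);
}
```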
Closes #66, #146.
The "full" automaton trades memory efficiency for more speed.
Since we already bound the number and size of prefixes, this
is OK. In the worst case, this automaton will use ~0.5MB.
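As a back-of-the-envelope check on that figure: a "full" automaton here is, roughly, a dense next-state table indexed by (state, input byte). The concrete numbers below are assumptions chosen for illustration, not values taken from the crate:

```rust
// Rough worst-case size of a dense ("full") next-state table. Every number
// here is an assumption for illustration only.
fn main() {
    let max_states = 512;    // implied by bounding the number/size of prefixes
    let alphabet = 256;      // one column per possible input byte
    let bytes_per_entry = 4; // e.g., a u32 state id
    let table_bytes = max_states * alphabet * bytes_per_entry;
    // 512 * 256 * 4 = 524,288 bytes, i.e. roughly 0.5MB in the worst case.
    println!("worst-case table size: {} bytes", table_bytes);
}
```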
Even though not all prefix machines can accurately preserve
priority, some can, so we try to catch all of those cases.
Also, implement a nicer `fmt::Debug` for the prefix machine.
Overview of changes:
* The instruction set has been redesigned to be smaller, mostly by
collapsing empty-width matches into one instruction type.
In addition to moving instruction matching out of the matching
engine, this makes the matching engine code much simpler.
* Rewrote input handling to use an inline representation of
`Option<char>` and clearer position handling via the `Input` trait.
(A sketch of the inline representation follows this list.)
* Added a new bounded backtracking matching engine that is invoked for
small regexes/inputs. It's about twice as fast as the full NFA
matching engine.
* Implemented caching for both the NFA and backtracking engines.
This avoids costly allocations on subsequent uses of the regex.
* Overhauled prefix handling at both the discovery and matching stages.
Namely, sets of prefix literals can now be extracted from regexes.
Depending on what the prefixes look like, an Aho-Corasick DFA
is built from them.
(This adds a dependency on the `aho-corasick` crate.)
* When appropriate, `memchr` is used to jump around in the input when
there is a single common byte prefix.
(This adds a dependency on the `memchr` crate; a sketch follows this list.)
* Brought the `regex!` macro up to date. Unfortunately, it still
implements the full NFA matching engine and doesn't yet have
access to the new prefix DFA handling. Thus, its performance
has gotten *worse* than the dynamic implementation in most
cases. The docs have been updated to reflect this change.
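To make the "inline representation of `Option<char>`" bullet concrete, here is a minimal sketch of the idea. The type name, method names, and sentinel choice are illustrative, not the crate's actual API:

```rust
/// A `char` packed into a `u32`, with `u32::MAX` standing in for "no
/// character" (e.g., the position one past the end of the input). Valid
/// Unicode scalar values never exceed 0x10FFFF, so the sentinel can't
/// collide with real data.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct InlineChar(u32);

const NONE_SENTINEL: u32 = u32::MAX;

impl InlineChar {
    fn none() -> InlineChar {
        InlineChar(NONE_SENTINEL)
    }

    fn from_char(c: char) -> InlineChar {
        InlineChar(c as u32)
    }

    fn is_none(self) -> bool {
        self.0 == NONE_SENTINEL
    }

    fn as_char(self) -> Option<char> {
        std::char::from_u32(self.0)
    }
}

fn main() {
    let a = InlineChar::from_char('a');
    let end = InlineChar::none();
    assert_eq!(a.as_char(), Some('a'));
    assert!(end.is_none());
    assert_eq!(end.as_char(), None);
}
```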
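And for the `memchr` bullet, here is a sketch of the fast path it enables, assuming the `memchr` crate (which this commit adds as a dependency) is available; `verify` stands in for the real matching engine:

```rust
use memchr::memchr;

/// Scan for candidate match positions by jumping to each occurrence of the
/// common first byte, and only run the (slower) verification step there.
fn find_candidate(
    haystack: &[u8],
    first_byte: u8,
    verify: impl Fn(&[u8]) -> bool,
) -> Option<usize> {
    let mut at = 0;
    while at < haystack.len() {
        // Vectorized scan for the next occurrence of the prefix byte.
        let i = match memchr(first_byte, &haystack[at..]) {
            None => return None,
            Some(i) => at + i,
        };
        if verify(&haystack[i..]) {
            return Some(i);
        }
        at = i + 1; // false positive: resume scanning after this byte
    }
    None
}

fn main() {
    let hay = b"xxabc xxabd";
    // Pretend the regex is `abc`: every match must start with b'a'.
    let found = find_candidate(hay, b'a', |rest| rest.starts_with(b"abc"));
    assert_eq!(found, Some(2));
}
```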
Surprisingly, all of this required exactly one new application of
`unsafe`, which is isolated in the `memchr` crate. (Aho-Corasick has no
`unsafe` either!)
There should be *no* breaking changes in this commit. The only
public-facing change is the addition of a method to the `Replacer` trait, but
it comes with a default implementation so that existing implementors
won't break. (Its purpose is to serve as a hint as to whether or not
replacement strings need to be expanded. This is crucial to speeding
up simple replacements.)
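As an illustration of how a defaulted trait method keeps existing implementors compiling, here is a hypothetical shape for such a hint. The trait shown, the method name, and its signature are made up for illustration and are not the crate's actual `Replacer` API:

```rust
/// A stand-in for the crate's `Replacer` trait; hypothetical methods only.
trait Replacer {
    /// Produce the replacement text for one match.
    fn replace(&mut self, matched: &str) -> String;

    /// Hint: if the replacement never references capture groups (no `$1`
    /// and friends), return it directly so the caller can skip expansion.
    /// The default means "no hint", so existing implementors keep compiling
    /// without any changes.
    fn replacement_literal(&self) -> Option<&str> {
        None
    }
}

/// A plain string replacement can opt in to the fast path.
struct Literal(String);

impl Replacer for Literal {
    fn replace(&mut self, _matched: &str) -> String {
        self.0.clone()
    }

    fn replacement_literal(&self) -> Option<&str> {
        Some(self.0.as_str())
    }
}

/// An older implementor needs no changes at all and simply inherits the
/// default `None`.
struct Uppercase;

impl Replacer for Uppercase {
    fn replace(&mut self, matched: &str) -> String {
        matched.to_uppercase()
    }
}

fn main() {
    let lit = Literal("-".to_string());
    assert_eq!(lit.replacement_literal(), Some("-"));
    assert_eq!(Uppercase.replacement_literal(), None);
}
```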
Closes #21.