This commit makes a number of manual fixups to the code after the
previous two commits, which were done automatically via 'cargo fix'.
It also contains more 'cargo fix' changes, since I had forgotten to
add 'edition = "2018"' to all sub-crates.
This also removes `Captures.{at,pos}` and replaces them with
`Captures.get`, which now returns a `Match`. Similarly, `Captures.name`
returns a `Match` as well.
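A minimal sketch of the new accessor shape (`get` and `name` return an
optional `Match` in practice, since a group may not participate in a
match):

```rust
use regex::Regex;

fn main() {
    let re = Regex::new(r"(?P<word>\w+)").unwrap();
    let caps = re.captures("hello world").unwrap();

    // `get` replaces the old `at`/`pos` pair: one `Match` value carries
    // both the offsets and the matched text.
    let m = caps.get(1).unwrap();
    assert_eq!((m.start(), m.end(), m.as_str()), (0, 5, "hello"));

    // `name` follows the same shape for named groups.
    assert_eq!(caps.name("word").map(|m| m.as_str()), Some("hello"));
}
```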
Fixes #276
This was using `Vec::extend` to accumulate bytes in a buffer, but this
compiles down to less efficient code than, say, `Vec::extend_from_slice`.
However, that method is newly available as of Rust 1.6, so we do a small
backport to regain performance.
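For illustration, the difference is between the generic element-at-a-time
`Extend` path and the specialized slice append:

```rust
fn main() {
    let src: &[u8] = b"abcdefgh";
    let mut buf: Vec<u8> = Vec::new();

    // Generic path: goes through the `Extend` machinery one element at
    // a time, which the compiler may not turn into a bulk copy.
    buf.extend(src.iter().cloned());

    // Specialized path (stable since Rust 1.6): effectively a memcpy.
    buf.extend_from_slice(src);

    assert_eq!(buf.len(), 16);
}
```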
This bug was noticed by @llogiq here: https://github.com/TeXitoi/benchmarksgame-rs/issues/31
In particular, this increases the performance of bytes::Regex two-fold on
that benchmark.
This commit contains three classes of improvements that are all
motivated by reducing constant overhead of matching by as much as
possible:
1. Reduce the number of state loads per iteration of the DFA loop from
2 to 1.
2. Use `inline(always)` to reduce the number of function calls made
between a public API call and the inner DFA loop.
3. Push more case analysis from match-time to compile-time. We can now
start searching with only 1 or 2 branches.
There are more `inline(always)` annotations than I'm comfortable with,
but I have benchmarked all (or at least most) of them. Many of them
yield a 10-25% performance boost and significantly reduce the number of
instructions run per match. The downside is that `cargo build
--release` now takes a few seconds longer.
Despite all of that, I personally think it's awesome that we're at a
point where function inlining impacts search performance this
dramatically!
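As a rough illustration (hypothetical names, not the crate's internals),
the annotation forces the thin wrappers between the public API and the
hot loop to disappear into their callers:

```rust
// Hypothetical sketch: `inline(always)` on the thin wrapper removes a
// function call from the path between a public API call and the inner
// DFA loop.
#[inline(always)]
fn shortest_match_at(text: &[u8], start: usize) -> Option<usize> {
    dfa_loop(text, start)
}

fn dfa_loop(text: &[u8], mut at: usize) -> Option<usize> {
    while at < text.len() {
        // ...one state load per loop iteration (improvement 1)...
        at += 1;
    }
    None
}

fn main() {
    assert_eq!(shortest_match_at(b"abc", 0), None);
}
```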
Other improvements:
1. There is now an *internal* trait, RegularExpression, that organizes
the internal API a bit more. Notably, it allows us to get rid of
duplicate implementations of `find_iter` and `captures_iter`.
2. The old `Params`-based internal API is gone. Its design moved too
much case analysis to match-time. We've replaced it with more
explicit implementations for each type of search, rather than
trying to dynamically reconfigure ourselves.
3. All iterators now amortize the overhead of the TLS fetch for the
cache. This isn't really a big deal in the single-threaded case
(approximately 1ns of overhead per match on Linux), but it should reduce
contention quite a bit in the multi-threaded case. This does mean
that the iterator types (`FindMatches` and `FindCaptures`) no longer
satisfy `Sync`. I suspect this is fine since I can't imagine any
productive use of `Sync` on these iterators anyway (their only public
methods take `&mut self`).
4. Anchored regexes with complete literals now have their own matching
paths, which makes obvious cases like `^foo` or `foo$` execute
exactly as fast as you might imagine.
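For item 4, a hypothetical sketch of the idea (illustrative names; the
real routing happens inside the crate):

```rust
// Hypothetical sketch: a fully anchored literal like `^foo` or `foo$`
// reduces to a plain prefix/suffix comparison, with no regex engine
// involved at all.
fn find_start_anchored(lit: &[u8], hay: &[u8]) -> Option<(usize, usize)> {
    if hay.starts_with(lit) { Some((0, lit.len())) } else { None }
}

fn find_end_anchored(lit: &[u8], hay: &[u8]) -> Option<(usize, usize)> {
    if hay.ends_with(lit) {
        Some((hay.len() - lit.len(), hay.len()))
    } else {
        None
    }
}

fn main() {
    assert_eq!(find_start_anchored(b"foo", b"foobar"), Some((0, 3)));
    assert_eq!(find_end_anchored(b"foo", b"barfoo"), Some((3, 6)));
}
```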
Fixes #192.
This commit enables support for compiling regular expressions that can
match on arbitrary byte slices. In particular, we add a new sub-module
called `bytes` that duplicates the API of the top-level module, except
that subjects are `&[u8]` instead of `&str`. Additionally, Unicode
support in the regular expression is disabled by default but can be
selectively re-enabled with the `u` flag. (Unicode support cannot be
selectively disabled in the standard top-level API.)
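A short usage sketch, per the description above (the default here
follows the commit text):

```rust
use regex::bytes::Regex;

fn main() {
    // In the `bytes` API, `\w` is ASCII-only by default; `(?u:...)`
    // selectively re-enables Unicode for part of the pattern.
    let re = Regex::new(r"(?u:\w)+").unwrap();

    // The subject may be arbitrary bytes, including invalid UTF-8.
    assert!(re.is_match(b"\xFF\xFEhello\xFF"));
}
```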
Most of the interesting changes occurred in the `regex-syntax` crate,
where the AST now explicitly distinguishes between "ASCII compatible"
expressions and Unicode aware expressions.
This PR makes a few other changes out of convenience:
1. The DFA now knows how to "give up" if it's flushing its cache too
often. When the DFA gives up, either backtracking or the NFA algorithm
takes over, which provides better performance. (A sketch of this
heuristic follows the list.)
2. Benchmarks were added for Oniguruma.
3. The benchmarks in general were overhauled to be defined in one place
by using conditional compilation.
4. The tests have been completely reorganized to make it easier to split
up the tests depending on which regex engine we're using. For example,
we occasionally need to be able to write tests specifically for
`regex::Regex` or specifically for `regex::bytes::Regex`.
5. Fixes a bug where NUL bytes weren't represented correctly in the byte
class optimization for the DFA.
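A sketch of the "give up" heuristic from item 1 (the threshold and the
names are illustrative, not the crate's internals):

```rust
// Illustrative-only sketch: if the DFA keeps flushing its state cache,
// it gives up so a fallback engine (backtracking or NFA) can take over.
const MAX_CACHE_FLUSHES: usize = 5;

struct Cache {
    flushes: usize,
}

impl Cache {
    /// Called when the state cache fills up. Returns true if the DFA
    /// should give up rather than keep thrashing the cache.
    fn flush(&mut self) -> bool {
        self.flushes += 1;
        // ...clear the cached states here...
        self.flushes > MAX_CACHE_FLUSHES
    }
}

fn main() {
    let mut cache = Cache { flushes: 0 };
    while !cache.flush() {
        // simulate repeated cache pressure during a single search
    }
    println!("DFA gave up; falling back to backtracking/NFA");
}
```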
Closes #85.
Regex sets permit matching multiple (possibly overlapping) regular
expressions in a single scan of the search text. This adds a few new
types, with `RegexSet` being the primary one.
All matching engines support regex sets, including the lazy DFA.
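A short usage sketch:

```rust
use regex::RegexSet;

fn main() {
    // One pass over the text reports which of the (possibly
    // overlapping) patterns matched, by index.
    let set = RegexSet::new(&[r"\w+", r"\d+", r"foo"]).unwrap();
    let matched: Vec<usize> = set.matches("foo 123").into_iter().collect();
    assert_eq!(matched, vec![0, 1, 2]);
}
```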
This commit also refactors a lot of the code around handling captures
into a central `Search` type, which now also includes the set of
matches that regex sets use to determine which regexes have matched.
We also merged the `Program` and `Insts` types, which had been split up
when adding the lazy DFA; the split made the code more complicated than
it needed to be.
Closes #156.
A lazy DFA is much faster than executing an NFA because it doesn't
repeat the work of following epsilon transitions over and over.
Instead, it computes states during search and caches them for reuse. We
avoid exponential state blow up by bounding the cache in size. When the
DFA isn't powerful enough to fulfill the caller's request (e.g., return
sub-capture locations), it still runs to find the boundaries of the
match and then falls back to NFA execution on the matched region. The
lazy DFA can otherwise execute on every regular expression *except* for
regular expressions that contain word boundary assertions (`\b` or
`\B`). (They are tricky to implement in the lazy DFA because they are
Unicode aware and therefore require multi-byte look-behind/ahead.)
The implementation in this PR is based on the implementation in Google's
RE2 library.
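A greatly simplified sketch of the caching idea (hypothetical types; the
real implementation works on compiled byte-based programs):

```rust
use std::collections::HashMap;

// Greatly simplified sketch: each DFA state is a set of NFA states,
// built the first time it's needed and cached so the epsilon-closure
// work isn't repeated. The cache is bounded, trading recomputation for
// memory.
type StateId = usize;

struct LazyDfa {
    states: Vec<Vec<usize>>,                // DFA state -> NFA state set
    trans: HashMap<(StateId, u8), StateId>, // cached transitions
    max_states: usize,
}

impl LazyDfa {
    fn next(&mut self, from: StateId, byte: u8) -> StateId {
        if let Some(&to) = self.trans.get(&(from, byte)) {
            return to; // cache hit: no NFA simulation needed
        }
        if self.states.len() >= self.max_states {
            // Bounded cache: flush instead of blowing up. (A real
            // implementation reclaims the state sets as well.)
            self.trans.clear();
        }
        let set = step(&self.states[from], byte);
        self.states.push(set);
        let to = self.states.len() - 1;
        self.trans.insert((from, byte), to);
        to
    }
}

// Follow byte transitions plus epsilon closure; elided in this sketch.
fn step(_nfa_states: &[usize], _byte: u8) -> Vec<usize> {
    Vec::new()
}

fn main() {
    let mut dfa = LazyDfa {
        states: vec![vec![0]],
        trans: HashMap::new(),
        max_states: 1024,
    };
    let s1 = dfa.next(0, b'a');
    assert_eq!(dfa.next(0, b'a'), s1); // second lookup hits the cache
}
```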
Adding a lazy DFA was a substantial change and required several
modifications:
1. The compiler can now produce both Unicode based programs (still used
by the NFA engines) and byte based programs (required by the lazy DFA,
but usable by the NFA engines too). In byte based programs, UTF-8
decoding is built into the automaton. (See the sketch after this list.)
2. A new `Exec` type was introduced to implement the logic for compiling
and choosing the right engine to use on each search.
3. Prefix literal detection was rewritten to work on bytes.
4. Benchmarks were overhauled and new ones were added to more carefully
track the impact of various optimizations.
5. A new `HACKING.md` guide has been added that gives a high-level
design overview of this crate.
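A rough illustration of item 1: in a byte based program, matching a
non-ASCII codepoint means matching its UTF-8 encoding byte by byte.

```rust
fn main() {
    // 'é' (U+00E9) encodes to two bytes, so a byte based program gets
    // two byte transitions where a Unicode based program has a single
    // char transition.
    let mut buf = [0u8; 4];
    let bytes = 'é'.encode_utf8(&mut buf).as_bytes();
    assert_eq!(bytes, &[0xC3, 0xA9]);
}
```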
Other changes in this commit include:
1. Protection against stack overflows. All places that once required
recursion have either acquired a bound or been converted to use an
explicit stack on the heap.
2. Update the Aho-Corasick dependency, which includes `memchr2` and
`memchr3` optimizations.
3. Add PCRE benchmarks using the Rust `pcre` bindings.
Closes #66, #146.
But it's faster because it cheats by combining all of the replacement
regexes into one, and doing all of the replacements in a single scan
of the input.
The "full" automaton trades memory efficiency for more speed.
Since we're already bounding the number/size of prefixes, this
is OK. In the worst case, this automaton will use ~0.5MB.
Even though not all prefix machines can accurately preserve priority,
some can, so we try to catch all of those cases.
Also, impl a nicer `fmt::Debug` for the prefix machine.
Overview of changes:
* The instruction set has been redesigned to be smaller, mostly by
collapsing empty-width matches into one instruction type.
In addition to moving instruction-matching out of the matching
engine, this makes matching engine code much simpler.
* Rewrote input handling to use an inline representation of
`Option<char>` and clearer position handling with the `Input` trait.
* Added a new bounded backtracking matching engine that is invoked for
small regexes/inputs. It's about twice as fast as the full NFA
matching engine.
* Implemented caching for both the NFA and backtracking engines.
This avoids costly allocations on subsequent uses of the regex.
* Overhauled prefix handling at both discovery and matching.
Namely, sets of prefix literals can now be extracted from regexes.
Depending on what the prefixes look like, an Aho-Corasick DFA
is built from them.
(This adds a dependency on the `aho-corasick` crate.)
* When appropriate, use `memchr` to jump around in the input when
there is a single common byte prefix.
(This adds a dependency on the `memchr` crate; see the sketch after
this list.)
* Bring the `regex!` macro up to date. Unfortunately, it still
implements the full NFA matching engine and doesn't yet have
access to the new prefix DFA handling. Thus, its performance
has gotten *worse* than the dynamic implementation in most
cases. The docs have been updated to reflect this change.
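A sketch of the single common byte prefix skip mentioned above (the
helper name is illustrative):

```rust
use memchr::memchr;

// When every match must begin with one particular byte, memchr finds
// candidate start positions without stepping the matching engine
// through every byte in between.
fn candidate_starts(prefix: u8, haystack: &[u8]) -> Vec<usize> {
    let mut starts = Vec::new();
    let mut at = 0;
    while let Some(i) = memchr(prefix, &haystack[at..]) {
        starts.push(at + i);
        at += i + 1;
    }
    starts
}

fn main() {
    assert_eq!(candidate_starts(b'a', b"xxaxxa"), vec![2, 5]);
}
```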
Surprisingly, all of this required exactly one new application of
`unsafe`, which is isolated in the `memchr` crate. (Aho-Corasick has no
`unsafe` either!)
There should be *no* breaking changes in this commit. The only public
facing change is the addition of a method to the `Replacer` trait, but
it comes with a default implementation so that existing implementors
won't break. (Its purpose is to serve as a hint as to whether or not
replacement strings need to be expanded. This is crucial to speeding
up simple replacements.)
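A hypothetical sketch of the trait shape described above (the method
name and signature here are illustrative, not necessarily the crate's
exact API):

```rust
// Sketch only: a defaulted hint method lets a replacer promise that its
// replacement contains no `$group` references, so the engine can skip
// the expansion pass. The default keeps existing implementors compiling.
trait Replacer {
    fn replace(&mut self, matched: &str) -> String;

    fn no_expansion(&mut self) -> Option<&str> {
        None // conservative default: assume expansion may be needed
    }
}

struct Literal(&'static str);

impl Replacer for Literal {
    fn replace(&mut self, _matched: &str) -> String {
        self.0.to_string()
    }

    fn no_expansion(&mut self) -> Option<&str> {
        Some(self.0) // a plain literal never needs expansion
    }
}

fn main() {
    let mut r = Literal("-");
    assert_eq!(r.replace("anything"), "-");
    assert_eq!(r.no_expansion(), Some("-"));
}
```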
Closes #21.