19 Commits

Andrew Gallant
e2860fe037 edition: manual fixups to code
This commit makes a number of manual fixups to the code after the
previous two commits were applied automatically via 'cargo fix'.

That said, it also contains more 'cargo fix' changes, since I had
forgotten to add 'edition = "2018"' to all sub-crates.
2021-04-30 20:02:56 -04:00
Andrew Gallant
94ce242913 edition: more 2018 migration (idioms) 2021-04-30 20:02:56 -04:00
Alex Touchet
83c2fcdfc8 examples: update benchmark game URL
The benchmark game no longer has the "regex-dna" benchmark,
but we update the URL anyway.

PR #724
2020-11-24 07:58:51 -05:00
Andrew Gallant
0e96af4166 style: start using rustfmt 2019-08-03 14:20:22 -04:00
Lukas Lueg
264ef3f421 Revert some unwarranted clippy-changes 2017-06-01 19:38:23 +02:00
Lukas Lueg
94f8213def Fix clippy warnings 2017-05-31 22:24:22 +02:00
Andrew Gallant
384e9376a4 find/find_iter now return a Match instead of (usize, usize).
This also removes Captures.{at,pos} and replaces them with Captures.get,
which now returns a Match. Similarly, Captures.name returns a Match as
well.

Fixes #276
2016-12-30 01:05:50 -05:00
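
For illustration, this is how the `Match`-based API reads in the
published `regex` crate (a minimal sketch, not code from the commit
itself):

```rust
use regex::Regex;

fn main() {
    let re = Regex::new(r"(?P<word>\w+)").unwrap();

    // find now returns a Match carrying both the span and the text.
    let m = re.find("hello world").unwrap();
    assert_eq!((m.start(), m.end()), (0, 5));
    assert_eq!(m.as_str(), "hello");

    // Captures::get and Captures::name return a Match as well.
    let caps = re.captures("hello world").unwrap();
    assert_eq!(caps.get(1).unwrap().as_str(), "hello");
    assert_eq!(caps.name("word").unwrap().as_str(), "hello");
}
```
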
Andrew Gallant
52165d6b31 Use Cow for replacements.
If `replace` doesn't find any matches, then it can return the original
string unchanged.
2016-12-30 01:05:50 -05:00
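
A minimal sketch of the `Cow` behavior described above, using the
published `replace` API (the strings here are invented):

```rust
use std::borrow::Cow;
use regex::Regex;

fn main() {
    let re = Regex::new("cat").unwrap();

    // No match: the original string comes back borrowed, unallocated.
    let out = re.replace("dogs only", "feline");
    assert!(matches!(out, Cow::Borrowed(_)));

    // A match forces an owned, rewritten string.
    let out = re.replace("cats rule", "feline");
    assert!(matches!(out, Cow::Owned(_)));
    assert_eq!(out, "felines rule");
}
```
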
Andrew Gallant
6770e23454 Fixes a performance bug in bytes::Regex::replace.
This was using `Vec::extend` to accumulate bytes in a buffer, which
compiles down to less efficient code than, say, `Vec::extend_from_slice`.
However, that method is newly available as of Rust 1.6, so we do a small
backport to regain performance.

This bug was noticed by @llogiq here: https://github.com/TeXitoi/benchmarksgame-rs/issues/31
In particular, this increases the performance of bytes::Regex two-fold on
that benchmark.
2016-04-22 19:30:16 -04:00
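
The difference is easy to sketch (illustrative only; today's
`Vec::extend` specializes slice iterators, so the gap described here
applied to the Rust of 2016):

```rust
fn main() {
    let src: &[u8] = b"hello";
    let mut buf: Vec<u8> = Vec::new();

    // Iterator-based extension: in older Rust this compiled down to a
    // byte-at-a-time push loop.
    buf.extend(src.iter().cloned());

    // Slice-specialized extension (stable since Rust 1.6): reserves
    // capacity once and does the equivalent of a memcpy.
    buf.extend_from_slice(src);

    assert_eq!(buf, b"hellohello");
}
```
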
Andrew Gallant
5f32b5aa04 Refactor internal API to remove constant overhead.
This commit contains three classes of improvements that are all
motivated by reducing constant overhead of matching by as much as
possible:

  1. Reduce the number of state loads per iteration of the DFA loop from
     2 to 1.
  2. Use `inline(always)` to reduce the number of function calls made
     between a public API call and the inner DFA loop.
  3. Push more case analysis from match-time to compile-time. We can now
     start searching with only 1 or 2 branches.

There are more `inline(always)` annotations than I'm comfortable with,
but I have actually benchmarked all (or most, I think) of them. Many of
them can result in a 10-25% performance boost, and generally
significantly reduces the number of instructions that need to be run
per-match. This does make `cargo build --release` take a few seconds longer.
Despite all of that, I personally think it's awesome that we're at a
point where function inlining impacts search performance this
dramatically!

Other improvements:

  1. There is now an *internal* trait, RegularExpression, that organizes
     the internal API a bit more. Notably, it allows us to get rid of
     duplicate implementations of `find_iter` and `captures_iter`.
  2. The old `Params`-based internal API is gone. Its design moved too
     much case analysis to match-time. We've replaced it with more
     explicit implementations for each type of search, rather than
     trying to dynamically reconfigure ourselves.
  3. All iterators now amortize the overhead of the TLS fetch for the
     cache. This isn't really a big deal in the single threaded case
(approximately 1ns of overhead per match on Linux), but should reduce
     contention quite a bit in the multi-threaded case. This does mean
     that the iterator types (`FindMatches` and `FindCaptures`) no longer
     satisfy `Sync`. I suspect this is fine since I can't imagine any
     productive use of `Sync` on these iterators anyway (their only public
     methods take `&mut self`).
  4. Anchored regexes with complete literals now have their own matching
     paths, which makes obvious cases like `^foo` or `foo$` execute
     exactly as fast as you might imagine.

Fixes #192.
2016-04-05 20:30:02 -04:00
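
As a rough sketch of item 2, forced inlining flattens the wrapper
layers between a public API call and the hot loop. The names below are
hypothetical, not the crate's actual internals:

```rust
pub struct Engine;

impl Engine {
    // Public entry point: a thin wrapper over the search routine.
    pub fn is_match(&self, haystack: &[u8]) -> bool {
        self.search(haystack).is_some()
    }

    // Forced inlining removes a call frame on the hot path.
    #[inline(always)]
    fn search(&self, haystack: &[u8]) -> Option<usize> {
        self.dfa_loop(haystack)
    }

    #[inline(always)]
    fn dfa_loop(&self, haystack: &[u8]) -> Option<usize> {
        // Stand-in for the real DFA transition loop.
        haystack.iter().position(|&b| b == b'z')
    }
}

fn main() {
    assert!(Engine.is_match(b"baz"));
}
```
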
Andrew Gallant
d98ec1b1a5 Add regex matching for &[u8].
This commit enables support for compiling regular expressions that can
match on arbitrary byte slices. In particular, we add a new sub-module
called `bytes` that duplicates the API of the top-level module, except
`&str` for subjects is replaced by `&[u8]`. Additionally, Unicode
support in the regular expression is disabled by default but can be
selectively re-enabled with the `u` flag. (Unicode support cannot be
selectively disabled in the standard top-level API.)

Most of the interesting changes occurred in the `regex-syntax` crate,
where the AST now explicitly distinguishes between "ASCII compatible"
expressions and Unicode aware expressions.

This PR makes a few other changes out of convenience:

1. The DFA now knows how to "give up" if it's flushing its cache too
often. When the DFA gives up, either backtracking or the NFA algorithm
takes over, which provides better performance.
2. Benchmarks were added for Oniguruma.
3. The benchmarks in general were overhauled to be defined in one place
by using conditional compilation.
4. The tests have been completely reorganized to make it easier to split
up the tests depending on which regex engine we're using. For example,
we occasionally need to be able to write tests specifically for
`regex::Regex` or specifically for `regex::bytes::Regex`.
5. Fixes a bug where NUL bytes weren't represented correctly in the byte
class optimization for the DFA.

Closes #85.
2016-03-09 21:23:29 -05:00
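
A small example of what the `bytes` sub-module enables, using the
crate's published API; `(?-u)` switches Unicode mode off so that
arbitrary, non-UTF-8 bytes can be matched:

```rust
use regex::bytes::Regex;

fn main() {
    // \xFF outside Unicode mode is the literal byte 0xFF.
    let re = Regex::new(r"(?-u)\xFF+").unwrap();
    let haystack: &[u8] = b"abc\xFF\xFFdef";

    let m = re.find(haystack).unwrap();
    assert_eq!((m.start(), m.end()), (3, 5));
    assert_eq!(m.as_bytes(), &b"\xFF\xFF"[..]);
}
```
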
Andrew Gallant
94d0ad486e Add regex sets.
Regex sets permit matching multiple (possibly overlapping) regular
expressions in a single scan of the search text. This adds a few new
types, with `RegexSet` being the primary one.

All matching engines support regex sets, including the lazy DFA.

This commit also refactors a lot of the code around handling captures
into a central `Search`, which now also includes a set of matches that
is used by regex sets to determine which regex has matched.

We also merged the `Program` and `Insts` type, which were split up when
adding the lazy DFA, but the code seemed more complicated because of it.

Closes #156.
2016-02-21 20:52:07 -05:00
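
A minimal example of the `RegexSet` API described above (the patterns
and haystack are invented):

```rust
use regex::RegexSet;

fn main() {
    let set = RegexSet::new(&[r"\bfoo\b", r"\bbar\b", r"\d+"]).unwrap();

    // One scan of the haystack reports every pattern that matched.
    let matched: Vec<usize> = set.matches("foo 123").into_iter().collect();
    assert_eq!(matched, vec![0, 2]);
}
```
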
Andrew Gallant
2aa172779e Add a lazy DFA.
A lazy DFA is much faster than executing an NFA because it doesn't
repeat the work of following epsilon transitions over and over.
Instead, it computes states during search and caches them for reuse. We
avoid exponential state blow up by bounding the cache in size. When the
DFA isn't powerful enough to fulfill the caller's request (e.g., return
sub-capture locations), it still runs to find the boundaries of the
match and then falls back to NFA execution on the matched region. The
lazy DFA can otherwise execute on every regular expression *except*
those that contain word boundary assertions (`\b` or
`\B`). (They are tricky to implement in the lazy DFA because they are
Unicode aware and therefore require multi-byte look-behind/ahead.)
The implementation in this PR is based on the implementation in Google's
RE2 library.

Adding a lazy DFA was a substantial change and required several
modifications:

1. The compiler can now produce both Unicode based programs (still used by the
   NFA engines) and byte based programs (required by the lazy DFA, but possible
   to use in the NFA engines too). In byte based programs, UTF-8 decoding is
   built into the automaton.
2. A new `Exec` type was introduced to implement the logic for compiling
   and choosing the right engine to use on each search.
3. Prefix literal detection was rewritten to work on bytes.
4. Benchmarks were overhauled and new ones were added to more carefully
   track the impact of various optimizations.
5. A new `HACKING.md` guide has been added that gives a high-level
   design overview of this crate.

Other changes in this commit include:

1. Protection against stack overflows. All places that once required
   recursion have now either acquired a bound or have been converted to
   using a stack on the heap.
2. Update the Aho-Corasick dependency, which includes `memchr2` and
   `memchr3` optimizations.
3. Add PCRE benchmarks using the Rust `pcre` bindings.

Closes #66, #146.
2016-02-15 15:42:04 -05:00
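
The core lazy-DFA idea fits in a few lines: a DFA state is a set of
NFA states, and each transition is computed on first use and then
memoized. The toy sketch below is illustrative only; the real
implementation additionally bounds the cache and falls back to the NFA
when it fills:

```rust
use std::collections::HashMap;

type NfaStates = Vec<usize>;

struct LazyDfa<F: Fn(&NfaStates, u8) -> NfaStates> {
    step_nfa: F,                                // one NFA step + epsilon closure
    cache: HashMap<(NfaStates, u8), NfaStates>, // memoized DFA transitions
}

impl<F: Fn(&NfaStates, u8) -> NfaStates> LazyDfa<F> {
    fn step(&mut self, cur: &NfaStates, byte: u8) -> NfaStates {
        let key = (cur.clone(), byte);
        if let Some(next) = self.cache.get(&key) {
            return next.clone(); // cache hit: no NFA work at all
        }
        let next = (self.step_nfa)(cur, byte);
        self.cache.insert(key, next.clone());
        next
    }
}

fn main() {
    // Toy NFA for the pattern "ab": 0 --a--> 1 --b--> 2 (accepting).
    let mut dfa = LazyDfa {
        step_nfa: |states: &NfaStates, byte: u8| {
            let mut next = NfaStates::new();
            for &s in states {
                match (s, byte) {
                    (0, b'a') => next.push(1),
                    (1, b'b') => next.push(2),
                    _ => {}
                }
            }
            next
        },
        cache: HashMap::new(),
    };

    let mut cur = vec![0];
    for &b in b"ab" {
        cur = dfa.step(&cur, b);
    }
    assert!(cur.contains(&2));
}
```
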
Andrew Gallant
6ac1e36973 Allow deprecated connect. 2015-07-21 18:34:52 -04:00
Andrew Gallant
c9e6781a68 A single threaded version of shootout benchmark. 2015-06-20 13:02:06 -04:00
Andrew Gallant
4d5c84192b Add a 25% faster version of regex-dna benchmark.
It's faster only because it cheats: it combines all of the replacement
regexes into one and does all of the replacements in a single scan of
the input.
2015-06-20 12:07:26 -04:00
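
A condensed, hypothetical version of that trick: fold the per-pattern
replacements into a single alternation and dispatch on the matched
text. The table below is only a subset of the real benchmark's
replacements:

```rust
use regex::{Captures, Regex};

fn main() {
    let re = Regex::new("[BDH]").unwrap();
    let out = re.replace_all("ABDH", |caps: &Captures| {
        // One scan, one dispatch per match.
        match &caps[0] {
            "B" => "(c|g|t)",
            "D" => "(a|g|t)",
            "H" => "(a|c|t)",
            s => s,
        }
        .to_string()
    });
    assert_eq!(out, "A(c|g|t)(a|g|t)(a|c|t)");
}
```
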
Andrew Gallant
3ecb4b1d6b Update to use faster Aho-Corasick automaton.
The "full" automaton trades memory efficiency for more speed.
Since we're already bounding the number/size of prefixes, this
is OK. In the worst case, this automaton will use ~0.5MB.
2015-06-20 00:45:31 -04:00
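
For reference, today's `aho-corasick` crate exposes the same tradeoff
explicitly. The sketch below uses its current builder API, which did
not exist in 2015:

```rust
use aho_corasick::{AhoCorasick, AhoCorasickKind, MatchKind};

fn main() {
    let patterns = &["Samwise", "Sam", "Frodo"];

    // Requesting the DFA ("full" table-based automaton) trades memory
    // for faster transitions.
    let ac = AhoCorasick::builder()
        .kind(Some(AhoCorasickKind::DFA))
        .match_kind(MatchKind::LeftmostFirst)
        .build(patterns)
        .unwrap();

    let m = ac.find("Samwise Gamgee").unwrap();
    assert_eq!(patterns[m.pattern().as_usize()], "Samwise");
}
```
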
Andrew Gallant
b381fa3499 Use Literals matching engine more aggressively.
Even though not all prefix machines can accurately preserve priority,
some can, so we try to catch all of those cases.

Also, impl a nicer `fmt::Debug` for the prefix machine.
2015-06-17 21:06:07 -04:00
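
"Priority" here means leftmost-first alternation semantics, which any
literal-prefix fast path has to preserve. A quick illustration with
the published API:

```rust
use regex::Regex;

fn main() {
    // The earlier branch wins, even though the later one is longer.
    let re = Regex::new("sam|samwise").unwrap();
    assert_eq!(re.find("samwise").unwrap().as_str(), "sam");
}
```
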
Andrew Gallant
c86c025624 Major refactoring and performance improvements.
Overview of changes:

* Instruction set has been redesigned to be smaller, mostly by
  collapsing empty-width matches into one instruction type.
  In addition to moving instruction-matching out of the matching
  engine, this makes matching engine code much simpler.
* Rewrote input handling to use an inline representation of
  `Option<char>` and clearer position handling with the `Input` trait.
* Added a new bounded backtracking matching engine that is invoked for
  small regexes/inputs. It's about twice as fast as the full NFA
  matching engine.
* Implemented caching for both the NFA and backtracking engines.
  This avoids costly allocations on subsequent uses of the regex.
* Overhauled prefix handling at both discovery and matching.
  Namely, sets of prefix literals can now be extracted from regexes.
  Depending on what the prefixes look like, an Aho-Corasick DFA
  is built from them.
  (This adds a dependency on the `aho-corasick` crate.)
* When appropriate, use `memchr` to jump around in the input when
  there is a single common byte prefix.
  (This adds a dependency on the `memchr` crate.)
* Bring the `regex!` macro up to date. Unfortunately, it still
  implements the full NFA matching engine and doesn't yet have
  access to the new prefix DFA handling. Thus, its performance
  has gotten *worse* than the dynamic implementation in most
  cases. The docs have been updated to reflect this change.

Surprisingly, all of this required exactly one new application of
`unsafe`, which is isolated in the `memchr` crate. (Aho-Corasick has no
`unsafe` either!)

There should be *no* breaking changes in this commit. The only public
facing change is the addition of a method to the `Replacer` trait, but
it comes with a default implementation so that existing implementors
won't break. (Its purpose is to serve as a hint as to whether or not
replacement strings need to be expanded. This is crucial to speeding
up simple replacements.)

Closes #21.
2015-06-15 21:18:24 -04:00
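
In the published crate, the `Replacer` hint mentioned above is the
`no_expansion` method, which comes with a default implementation. A
minimal sketch of a custom replacer (the type and strings are
invented):

```rust
use regex::{Captures, Regex, Replacer};

// A replacer computed per match. Because the replacement is dynamic,
// the default no_expansion() (returning None) is the right choice; a
// fixed-string replacer can override it to advertise that no `$group`
// expansion is needed, unlocking the fast path.
struct Shout;

impl Replacer for Shout {
    fn replace_append(&mut self, caps: &Captures<'_>, dst: &mut String) {
        dst.push_str(&caps[0].to_uppercase());
    }
}

fn main() {
    let re = Regex::new(r"\bword\b").unwrap();
    assert_eq!(re.replace_all("a word here", Shout), "a WORD here");
}
```
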