Corrects `/-/.split("a-")` to return `["a", ""]` instead of `["a"]`.
(`/-/` is shorthand for `Regex::new("-").unwrap()`.)
This adds tests for both `split()` and `splitn()` covering a variety of
edge cases. One test is commented out because it is failing due to #521.
A future commit will fix it.
Note that the `split2` and `split3` tests were passing incorrectly
before this change. I have fixed them to expect the correct values.
Fixes #627
This also removes Captures.{at,pos} and replaces it with Captures.get,
which now returns a Match. Similarly, Captures.name returns a Match as
well.
Fixes #276
When a pattern is partially anchored and literal prefixes are detected,
we would scan for those prefixes like normal. On a match, we'd verify
whether the text starting at that prefix matched the rest of the regex.
This process doesn't work that well for partially anchored regexes, since
the prefix match presupposes that the regex is allowed to match at that
position. But if, say, a regex like `^z|a` finds a `z` in the middle of
the string, then it has lost the fact that `z` needs to appear at the
beginning of the string, and can therefore falsely report a match.
We could spend some effort and make this case work, but the literal
optimizer is already too complex. We need to simplify it to make future
optimizations like this possible.
Fixes #280.
The DFA handles word boundaries by tagging each state with an `is_word`
flag that lets us determine whether the next byte in the haystack should
cause a word boundary instruction to match. We were mishandling how this
tagging happened for start states. In particular, the tag was not used as
an index into the start state cache, and therefore could wind up choosing
an incorrect but previously computed start state with the wrong flags set.
This leads to incorrect matches.
We fix this by using the correct flags to compute the start state's
cache index.
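A hypothetical sketch of the indexing idea (names are illustrative, not the crate's internals): the start state cache must be keyed by every flag derived from the search position, including whether the preceding byte is a word byte. Dropping the word flag from the key is exactly the bug, since two searches with different flags could then share one cached start state.

```rust
// Illustrative only: pack each start-relevant flag into the cache index
// so that distinct flag combinations never collide.
fn start_state_index(prev_is_word: bool, start_of_text: bool) -> usize {
    (prev_is_word as usize) | ((start_of_text as usize) << 1)
}

fn main() {
    // Distinct flag combinations map to distinct cache slots.
    assert_ne!(start_state_index(true, true), start_state_index(false, true));
    assert_ne!(start_state_index(true, false), start_state_index(true, true));
}
```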
The principal change in this commit is a complete rewrite of how
literals are detected from a regular expression. In particular, we now
traverse the abstract syntax to discover literals instead of the
compiled byte code. This permits more tunable control over which and
how many literals are extracted, and is now exposed in the
`regex-syntax` crate so that others can benefit from it.
Other changes in this commit:
* The Boyer-Moore algorithm was rewritten to use my own concoction based
on frequency analysis. We end up regressing on a couple benchmarks
slightly because of this, but gain in others and should in general
be faster across a broader range of cases. (Principally because we try to
run `memchr` on the rarest byte in a literal.) This should also greatly
improve handling of non-Western text.
* A "reverse suffix" literal optimization was added. That is, if suffix
literals exist but no prefix literals exist, then we can quickly scan
for suffix matches and then run the DFA in reverse to find matches.
(I'm not aware of any other regex engine that does this.)
* The mutex-based pool has been replaced with a spinlock-based pool
(from the new `mempool` crate). This reduces some amount of constant
overhead and improves several benchmarks that either search short
haystacks or find many matches in long haystacks.
* Search parameters have been refactored.
* RegexSet can now contain 0 or more regular expressions (previously, it
could only contain 2 or more). The InvalidSet error variant is now
deprecated.
* A bug in computing start states was fixed. Namely, the DFA assumed the
start state was always the first instruction, which is trivially
wrong for an expression like `^☃$`. This bug persisted because it
typically occurred when a literal optimization would otherwise run.
* A new CLI tool, regex-debug, has been added as a non-published
sub-crate. The tool can print various facts about a regular
expression, such as its AST, its compiled byte code or its
detected literals.
Closes #96, #188, #189
This commit enables support for compiling regular expressions that can
match on arbitrary byte slices. In particular, we add a new sub-module
called `bytes` that duplicates the API of the top-level module, except
that subjects are `&[u8]` instead of `&str`. Additionally, Unicode
support in the regular expression is disabled by default but can be
selectively re-enabled with the `u` flag. (Unicode support cannot be
selectively disabled in the standard top-level API.)
Most of the interesting changes occurred in the `regex-syntax` crate,
where the AST now explicitly distinguishes between "ASCII compatible"
expressions and Unicode aware expressions.
This PR makes a few other changes out of convenience:
1. The DFA now knows how to "give up" if it's flushing its cache too
often. When the DFA gives up, either backtracking or the NFA algorithm
takes over, which provides better performance.
2. Benchmarks were added for Oniguruma.
3. The benchmarks in general were overhauled to be defined in one place
by using conditional compilation.
4. The tests have been completely reorganized to make it easier to split
up the tests depending on which regex engine we're using. For example,
we occasionally need to be able to write tests specifically for
`regex::Regex` or specifically for `regex::bytes::Regex`.
5. A bug was fixed where NUL bytes weren't represented correctly in the
byte class optimization for the DFA.
Closes #85.