nom, eating data byte by byte
nom is a parser combinator library written in Rust. Its goal is to provide tools to build safe parsers without compromising speed or memory consumption. To that end, it makes extensive use of Rust's strong typing, zero-copy parsing, push streaming, and pull streaming, and provides macros and traits to abstract most of the error-prone plumbing.
nom can handle any format, binary or textual, with grammars ranging from regular to context-sensitive. There are already a lot of example parsers available on Github.
If you need any help developing your parsers, please ping geal on IRC (mozilla, freenode, geeknode, oftc), go to #nom on Mozilla IRC, or use the Gitter chat room.
Reference documentation is available here.
Various design documents and tutorials can be found in the doc directory.
Features
Here are the current and planned features, with their status:
- byte-oriented: the basic type is &[u8] and parsers will work as much as possible on byte array slices (but are not limited to them)
- bit-oriented: nom can address a byte slice as a bit stream
- string-oriented: the same kind of combinators can apply on UTF-8 strings as well
- zero-copy:
  - in the parsers: a parsing chain will almost always return a slice of its input data
  - in the producers and consumers: some copying still happens
- streaming:
  - push: a data producer can continuously feed consumers and parsers, as long as there is data available
  - pull: a consumer will handle the produced data and drive seeking in the producer
- macro based syntax: easier parser building through macro usage
- state machine handling: consumers provide a basic way of managing state machines
- descriptive errors: the parsers can aggregate a list of error codes with pointers to the incriminated input slice. Those error lists can be pattern matched to provide useful messages.
- custom error types: you can provide a specific type to improve errors returned by parsers
- safe parsing: nom leverages Rust's safe memory handling and powerful types, and parsers are routinely fuzzed and tested with real world data. So far, the only flaws found by fuzzing were in code written outside of nom
- speed: benchmarks have shown that nom parsers often outperform many parser combinator libraries like Parsec and attoparsec, some regular expression engines and even handwritten C parsers
Some benchmarks are available on Github.
Installation
nom is available on crates.io and can be included in your Cargo enabled project like this:
[dependencies]
nom = "^3.2"
Then include it in your code like this:
#[macro_use]
extern crate nom;
NOTE: if you have existing code using nom below the 2.0 version, please take a look at the upgrade documentation to handle the breaking changes.
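To see everything in place, here is what a minimal crate using nom could look like (a sketch for illustration; the hello parser and its input are not taken from the nom documentation):

#[macro_use]
extern crate nom;

use nom::IResult;

// a parser recognizing the literal "hello" at the start of the input
named!(hello, tag!("hello"));

fn main() {
    // the parser returns the remaining input and the matched bytes
    assert_eq!(hello(&b"hello world"[..]), IResult::Done(&b" world"[..], &b"hello"[..]));
}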
There are a few compilation features:
- std: (activated by default) if disabled, nom can work in no_std builds
- nightly: enables helpful error messages if you use a nightly compiler
- regexp: enables regular expression parsers with the regex crate
- regexp_macros: enables regular expression parsers with the regex and regex_macros crates. Regular expressions can be defined at compile time, but it requires a nightly version of rustc
- verbose-errors: accumulate error codes and input positions as you backtrack through the parser tree. This gives you precise information about which part of the parser was affected by which part of the input
You can activate those features like this:
[dependencies.nom]
version = "^3.2"
features = ["regexp"]
Usage
Parser combinators
Parser combinators are an approach to parsers that is very different from software like lex and yacc. Instead of writing the grammar in a separate file and generating the corresponding code, you use very small functions with very specific purposes, like "take 5 bytes" or "recognize the word 'HTTP'", and assemble them in meaningful patterns like "recognize 'HTTP', then a space, then a version". The resulting code is small, and looks like the grammar you would have written with other parser approaches.
This has a few advantages:
- the parsers are small and easy to write
- the parser components are easy to reuse (if they're general enough, please add them to nom!)
- the parser components are easy to test separately (unit tests and property-based tests)
- the parser combination code looks close to the grammar you would have written
- you can build partial parsers, specific to the data you need at the moment, and ignore the rest
Here is an example of one such parser, to recognize text between parentheses:
named!(parens, delimited!(char!('('), is_not!(")"), char!(')')));
It defines a function named parens, which will recognize a sequence of the character (, the longest byte array not containing ), then the character ), and will return the byte array in the middle.
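For example, applied to an input starting with a parenthesized group, it returns the inner bytes and leaves the rest untouched (a small usage sketch; the input is only an illustration):

assert_eq!(parens(&b"(hello)world"[..]), IResult::Done(&b"world"[..], &b"hello"[..]));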
Here is another parser, written without using nom's macros this time:
fn take4(i: &[u8]) -> IResult<&[u8], &[u8]> {
  if i.len() < 4 {
    // not enough input yet: signal that 4 bytes are needed in total
    IResult::Incomplete(Needed::Size(4))
  } else {
    // return the remaining input first, then the 4 consumed bytes
    IResult::Done(&i[4..], &i[0..4])
  }
}
This function takes a byte array as input, and tries to consume 4 bytes. With macros, you would write it like this:
named!(take4, take!(4));
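Both versions behave the same way: with enough input they return the first four bytes, and with less input they report how much is needed (a small usage sketch):

assert_eq!(take4(&b"abcdefgh"[..]), IResult::Done(&b"efgh"[..], &b"abcd"[..]));
assert_eq!(take4(&b"abc"[..]), IResult::Incomplete(Needed::Size(4)));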
A parser in nom is a function which, for an input type I, an output type O, and an optional error type E, will have the following signature:
fn parser(input: I) -> IResult<I, O, E>;
Or like this, if you don't want to specify a custom error type (it will be u32 by default):
fn parser(input: I) -> IResult<I, O>;
IResult is an enumeration that can represent:
- a correct result Done(I,O), with the first element being the rest of the input (not parsed yet), and the second being the output value;
- an error Error(Err), with Err an enum that can represent an error with, optionally, position information and a chain of accumulated errors;
- an Incomplete(Needed) indicating that more input is necessary; Needed can indicate how much data is needed.
pub enum IResult<I,O,E=u32> {
Done(I,O),
Error(Err<I,E>),
Incomplete(Needed)
}
pub enum Err<P,E=u32>{
/// an error code
Code(ErrorKind<E>),
/// an error code, and the next error in the parsing chain
Node(ErrorKind<E>, Box<Err<P,E>>),
/// an error code and the related input position
Position(ErrorKind<E>, P),
/// an error code, the related input position, and the next error in the parsing chain
NodePosition(ErrorKind<E>, P, Box<Err<P,E>>)
}
pub enum Needed {
/// needs more data, but we do not know how much
Unknown,
/// contains the required data size
Size(usize)
}
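In practice, calling code usually pattern matches these three cases. A minimal sketch, reusing the parens parser defined above (the printed messages are only illustrative):

match parens(&b"(hello)world"[..]) {
    // value is the parsed slice, rest is the input that was not consumed
    IResult::Done(rest, value)  => println!("value: {:?}, rest: {:?}", value, rest),
    IResult::Error(e)           => println!("parse error: {:?}", e),
    IResult::Incomplete(needed) => println!("need more input: {:?}", needed),
}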
There is already a large list of basic parsers available, like:
- length_value: a byte indicating the size of the following buffer
- not_line_ending: returning as much data as possible until a line ending (\r or \n) is found
- line_ending: matches a line ending
- alpha: will return the longest alphabetical array from the beginning of the input
- digit: will return the longest numerical array from the beginning of the input
- alphanumeric: will return the longest alphanumeric array from the beginning of the input
- space: will return the longest array containing only spaces
- multispace: will return the longest array containing space, \r or \n
- be_u8, be_u16, be_u32, be_u64: parse big endian unsigned integers of multiple sizes (see the quick sketch after this list)
- be_i8, be_i16, be_i32, be_i64: parse big endian signed integers of multiple sizes
- be_f32, be_f64: parse big endian floating point numbers
- eof: a parser that is successful only if the input is over. In any other case, it returns an error.
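As a quick illustration, be_u16 reads two bytes as a big endian integer and leaves the rest of the input (a small sketch; the input bytes are arbitrary and assume be_u16 is imported from nom):

assert_eq!(be_u16(&b"\x01\x02abc"[..]), IResult::Done(&b"abc"[..], 0x0102u16));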
Please refer to the documentation for an exhaustive list of parsers.
Making new parsers with macros
Macros are the main way to make new parsers by combining other ones. Those macros accept other macros or function names as arguments. You then need to make a function out of that combinator with named!, or a closure with closure!. Here is how you would do it, with the tag! and take! combinators:
named!(abcd_parser, tag!("abcd")); // will consume bytes if the input begins with "abcd"
named!(take_10, take!(10)); // will consume 10 bytes of input
The named! macro can take three different syntaxes:
named!(my_function( &[u8] ) -> &[u8], tag!("abcd"));
named!(my_function<&[u8], &[u8]>, tag!("abcd"));
named!(my_function, tag!("abcd")); // when you know the parser takes &[u8] as input, and returns &[u8] as output
IMPORTANT NOTE: Rust's macros can be very sensitive to the syntax, so you may encounter an error compiling parsers like this one:
named!(my_function<&[u8], Vec<&[u8]>>, many0!(tag!("abcd")));
You will get the following error: "error: expected an item keyword". This happens because >> is seen as an operator, so the macro parser does not recognize what we want. There is a way to avoid it, by inserting a space:
named!(my_function<&[u8], Vec<&[u8]> >, many0!(tag!("abcd")));
This will compile correctly. I am very sorry for this inconvenience.
Common combinators
Here are the basic macros available:
- tag!: will match the byte array provided as argument
- is_not!: will match the longest array not containing any of the bytes of the array provided to the macro
- is_a!: will match the longest array containing only bytes of the array provided to the macro
- take_while!: will walk the whole array and apply the closure to each suffix until the function fails
- take!: will take as many bytes as the number provided
- take_until!: will take as many bytes as possible until it encounters the provided byte array, and will leave it in the remaining input
- take_until_and_consume!: will take as many bytes as possible until it encounters the provided byte array, and will skip it
- take_until_either_and_consume!: will take as many bytes as possible until it encounters one of the bytes of the provided array, and will skip it
- take_until_either!: will take as many bytes as possible until it encounters one of the bytes of the provided array, and will leave it in the remaining input
- map!: applies a function to the output of an IResult and puts the result in the output of an IResult with the same remaining input
- flat_map!: applies a parser to the output of an IResult and returns a new IResult with the same remaining input
- map_opt!: applies a function returning an Option to the output of an IResult, and returns Done(input, o) if the result is Some(o), or Error(0) otherwise
- map_res!: applies a function returning a Result to the output of an IResult, and returns Done(input, o) if the result is Ok(o), or Error(0) otherwise (see the small sketch after this list)
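As an example of map_res!, combining it with str::from_utf8 turns the matched bytes into a &str (a small sketch; the word parser is only an illustration and assumes alpha is in scope):

use std::str;
named!(word< &str >, map_res!(alpha, str::from_utf8));
assert_eq!(word(&b"hello world"[..]), IResult::Done(&b" world"[..], "hello"));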
Please refer to the documentation for an exhaustive list of combinators.
Combining parsers
There are more high level patterns, like the alt! combinator, which provides a choice between multiple parsers. If one branch fails, it tries the next, and returns the result of the first parser that succeeds:
named!(alt_tags, alt!(tag!("abcd") | tag!("efgh")));
assert_eq!(alt_tags(b"abcdxxx"), Done(&b"xxx"[..], &b"abcd"[..]));
assert_eq!(alt_tags(b"efghxxx"), Done(&b"xxx"[..], &b"efgh"[..]));
assert_eq!(alt_tags(b"ijklxxx"), Error(Position(Alt, &b"ijklxxx"[..])));
The pipe character | is used as a separator.
The opt! combinator makes a parser optional. If the child parser returns an error, opt! will succeed and return None:
named!( abcd_opt< &[u8], Option<&[u8]> >, opt!( tag!("abcd") ) );
assert_eq!(abcd_opt(b"abcdxxx"), Done(&b"xxx"[..], Some(&b"abcd"[..])));
assert_eq!(abcd_opt(b"efghxxx"), Done(&b"efghxxx"[..], None));
many0! applies a parser 0 or more times, and returns a vector of the aggregated results:
use std::str;
named!(multi< Vec<&str> >, many0!( map_res!(tag!( "abcd" ), str::from_utf8) ) );
let a = b"abcdef";
let b = b"abcdabcdef";
let c = b"azerty";
assert_eq!(multi(a), Done(&b"ef"[..], vec!["abcd"]));
assert_eq!(multi(b), Done(&b"ef"[..], vec!["abcd", "abcd"]));
assert_eq!(multi(c), Done(&b"azerty"[..], Vec::new()));
Here are some basic combining macros available:
- opt!: will make the parser optional (if it returns the O type, the new parser returns Option<O>)
- many0!: will apply the parser 0 or more times (if it returns the O type, the new parser returns Vec<O>)
- many1!: will apply the parser 1 or more times (see the sketch after this list)
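Continuing the many0! example above, many1! accepts the same kind of inputs but fails instead of returning an empty vector when the child parser cannot match at least once (a small sketch; the exact error value is elided here with is_err()):

named!(multi1< Vec<&str> >, many1!( map_res!(tag!( "abcd" ), str::from_utf8) ) );
assert_eq!(multi1(&b"abcdabcdefgh"[..]), Done(&b"efgh"[..], vec!["abcd", "abcd"]));
assert!(multi1(&b"azerty"[..]).is_err());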
Please refer to the documentation for an exhaustive list of combinators.
There are more complex (and more useful) parsers like do_parse! and tuple!, which are used to apply a series of parsers then assemble their results.
Example with tuple!:
named!(tpl<&[u8], (u16, &[u8], &[u8]) >,
tuple!(
be_u16 ,
take!(3),
tag!("fg")
)
);
assert_eq!(
tpl(&b"abcdefgh"[..]),
Done(
&b"h"[..],
(0x6162u16, &b"cde"[..], &b"fg"[..])
)
);
assert_eq!(tpl(&b"abcde"[..]), Incomplete(Needed::Size(7)));
let input = &b"abcdejk"[..];
assert_eq!(tpl(input), Error(Position(ErrorKind::Tag, &input[5..])));
Example with do_parse!:
#[derive(Debug, PartialEq)]
struct A {
a: u8,
b: u8
}
fn ret_int1(i:&[u8]) -> IResult<&[u8], u8> { Done(i,1) }
fn ret_int2(i:&[u8]) -> IResult<&[u8], u8> { Done(i,2) }
named!(f<&[u8],A>,
do_parse!( // the parser takes a byte array as input, and returns an A struct
tag!("abcd") >> // begins with "abcd"
opt!(tag!("abcd")) >> // this is an optional parser
aa: ret_int1 >> // the return value of ret_int1, if it does not fail, will be stored in aa
tag!("efgh") >>
bb: ret_int2 >>
tag!("efgh") >>
(A{a: aa, b: bb}) // the final tuple will be able to use the variable defined previously
)
);
let r = f(b"abcdabcdefghefghX");
assert_eq!(r, Done(&b"X"[..], A{a: 1, b: 2}));
let r2 = f(b"abcdefghefghX");
assert_eq!(r2, Done(&b"X"[..], A{a: 1, b: 2}));
The double right arrow >> is used as a separator between every parser in the sequence, and the last closure can see the variables storing the results of the parsers. Unless the specified return type is already a tuple, the final line should be that type wrapped in a tuple.
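For instance, when the specified output type is itself a tuple, the final line is simply that tuple (a small sketch reusing ret_int1 and ret_int2 from the example above):

named!(two_ints<&[u8], (u8, u8)>,
  do_parse!(
    tag!("abcd") >>
    aa: ret_int1 >>
    bb: ret_int2 >>
    (aa, bb)
  )
);
assert_eq!(two_ints(b"abcdX"), Done(&b"X"[..], (1, 2)));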
More examples of do_parse! and tuple! usage can be found in the INI file parser example.
Parsers written with nom
Here is a list of known projects using nom:
- Text file formats:
- Programming languages:
- Interface definition formats:
- Audio, video and image formats:
- Document formats:
- Database formats:
- Network protocol formats:
- Language specifications:
- using nom as lexer and parser
Want to create a new parser using nom? A list of not yet implemented formats is available here.
Want to add your parser here? Create a pull request for it!