How to rewrite your project in Rust

In a previous post, I explained why rewriting existing software in Rust could be a good idea. The main point was that you should not rewrite the whole application: replace the weaker parts without disturbing most of the code, and you strengthen the codebase without disruption.

I also provided pointers to projects where other people and I did it successfully, but without giving too many details. So let's get a real introduction to Rust rewrites now. This article requires a little bit of knowledge about Rust, but you should be able to follow it even as a beginner.

As a reminder, here are the benefits Rust brings to a rewrite:

  • it can easily call C code
  • it can easily be called by C code (it can export C compatible functions and structures)
  • it does not need a garbage collector
  • if you want, it does not even need to handle allocations
  • the Rust compiler can produce static and dynamic libraries, and even object files
  • the Rust compiler avoids most of the memory vulnerabilities you get in C (yes, I had to mention it)
  • Rust is easier to maintain than C (this is debatable, but not the point of this article)

As it turns out, this is more or less the plan to replace C code with Rust:

  • import C structures and functions in Rust
  • import Rust structures and functions from C
  • reuse the host application's memory allocations whenever possible
  • write code (yes, we have to do it at some point)
  • produce artefacts that can be linked with the host application
  • integrate with the build system

We'll see how to apply this with examples from the Rust VLC plugin.

Import C structures and functions in Rust

Rust can easily use C code directly, by writing function and structure definitions. A lot of the techniques you would use for this come from the "unsafe Rust" chapter of "The Rust Programming Language" book. For the following C code:

[code lang=C]
struct vlc_object_t {
    const char *psz_object_type;
    char *psz_header;
    int i_flags;
    bool b_force;
    libvlc_int_t *p_libvlc;
    vlc_object_t *p_parent;
};
[/code]

You would get the following Rust structure:

[code lang=rust]
extern crate libc;
use libc::{c_char, c_int};

#[repr(C)]
pub struct vlc_object_t {
    pub psz_object_type: *const c_char,
    pub psz_header: *mut c_char,
    pub i_flags: c_int,
    pub b_force: bool,
    pub p_libvlc: *mut libvlc_int_t,
    pub p_parent: *mut vlc_object_t,
}
[/code]

The #[repr(C)] attribute tells the compiler that the structure should have the same memory layout as the one a C compiler would generate. We import types like c_char from the libc crate; those types are platform dependent, and libc handles the per-platform definitions for us. Here, we use a lot of raw pointers (indicated by *), which means that by using this structure directly, we're basically writing C, which is no good! A better approach, as we'll see later, is to write safer wrappers above those C bindings.

Importing C functions is quite straightforward too:

[code lang=C]
ssize_t vlc_stream_Peek(stream_t *, const uint8_t **, size_t);
ssize_t vlc_stream_Read(stream_t *, void *buf, size_t len);
uint64_t vlc_stream_Tell(const stream_t *);
[/code]

These C function declarations would get translated to:

[code lang=rust]
#[link(name = "vlccore")]
extern {
    pub fn vlc_stream_Peek(stream: *mut stream_t, buf: *mut *const uint8_t, size: size_t) -> ssize_t;
    pub fn vlc_stream_Read(stream: *mut stream_t, buf: *mut c_void, size: size_t) -> ssize_t;
    pub fn vlc_stream_Tell(stream: *const stream_t) -> uint64_t;
}
[/code]

The #[link(name = "vlccore")] attribute indicates which library we are linking against. It is equivalent to passing a -lvlccore argument to the linker. Libvlccore is a library all VLC plugins must link to. Those functions are declared like regular Rust functions but, like the previous structure, mainly work on raw pointers.
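If you prefer to keep linker flags out of the source, the same effect can be achieved from a build script; here is a minimal sketch (the directive is standard Cargo, the library name comes from the example above):

[code lang=rust]
// build.rs: ask Cargo to pass -lvlccore to the linker,
// equivalent to the #[link(name = "vlccore")] attribute
fn main() {
    println!("cargo:rustc-link-lib=vlccore");
}
[/code]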

bindgen

You can always write all your bindings manually like this, but when the amount of code to import is large, it can be a good idea to employ the awesome bindgen tool, which will generate Rust code from C headers.

It works as a command line tool, but can also run at compile time from a build script. First, add the dependency to your Cargo.toml file:

[code lang=toml]
[build-dependencies.bindgen]
version = "^0.25"
[/code]

You can then write your build script like this:

[code lang=rust]
extern crate bindgen;

use std::fs::File;
use std::io::Write;
use std::path::Path;

fn main() {
    let include_arg = concat!("-I", env!("INCLUDE_DIR"));
    let vlc_common_path = concat!(env!("INCLUDE_DIR"), "/vlc_common.h");

    let _ = bindgen::builder()
        .clang_arg(include_arg)
        .clang_arg("-include")
        .clang_arg(vlc_common_path)
        .header(concat!(env!("INCLUDE_DIR"), "/vlc_block.h"))
        .hide_type("vlc_object_t")
        .whitelist_recursively(true)
        .whitelisted_type("block_t")
        .whitelisted_function("block_Init")
        .raw_line("use ffi::common::vlc_object_t;")
        .use_core()
        .generate().unwrap()
        .write_to_file("src/ffi/block.rs");
}
[/code]

So there's a lot to unpack here, because bindgen is very flexible:

  • we use clang_arg to pass the include folder path and pre-include a header everywhere (vlc_common.h is included pretty much everywhere in VLC)
  • the header method specifies the header from which we will import definitions
  • hide_type prevents redefinition of elements we already defined (like the ones from the common header)
  • whitelisted_type and whitelisted_function specify types and functions for which bindgen will create definitions
  • raw_line writes its argument at the top of the file. I use it here to reuse definitions from other files
  • write_to_file writes the whole definition to the specified path

You can apply that process to any C header you must import. With the build script, it can run every time the library is compiled, but be careful, generating a lot of headers can take some time. It might be a good idea to pregenerate them and commit the generated files, and update them from time to time.

It is usually a good idea to separate the imported definitions into a dedicated crate with the -sys suffix, and write the safe code in the main crate.
As an example, see the crates openssl and openssl-sys.
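A minimal sketch of what such a -sys crate's manifest could look like (the crate name and version here are hypothetical):

[code lang=toml]
# vlccore-sys/Cargo.toml: raw FFI definitions only, no safe wrappers
[package]
name = "vlccore-sys"
version = "0.1.0"
links = "vlccore"

[dependencies]
libc = "0.2"
[/code]

The links key declares which native library the crate binds to, so Cargo can detect conflicting bindings between crates.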

Writing safe wrappers

Previously, we imported the C function ssize_t vlc_stream_Read(stream_t *, void *buf, size_t len) as the Rust version pub fn vlc_stream_Read(stream: *mut stream_t, buf: *mut c_void, size: size_t) -> ssize_t, but kept an unsafe interface. Since we want to use this function safely, we can now make a better wrapper:

[code lang=rust]
use ffi;

pub fn stream_Read(stream: *mut stream_t, buf: &mut [u8]) -> ssize_t {
    unsafe {
        ffi::vlc_stream_Read(stream, buf.as_mut_ptr() as *mut c_void, buf.len())
    }
}
[/code]

Here we replaced the raw pointer to memory and the length with a mutable slice. We still use a raw pointer to the stream_t instance; maybe we can do better:

[code lang=rust]
use ffi;

pub struct Stream(*mut stream_t);

pub fn stream_Read(stream: Stream, buf: &mut [u8]) -> ssize_t {
    unsafe {
        ffi::vlc_stream_Read(stream.0, buf.as_mut_ptr() as *mut c_void, buf.len())
    }
}
[/code]
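Going one step further, we could hang the function on the wrapper type itself and translate the C error convention into a Result. This is only a sketch of one possible design, not VLC's actual API; the Result-based signature is my own choice:

[code lang=rust]
impl Stream {
    // Read into the slice and return how many bytes were written,
    // mapping vlc_stream_Read's negative error codes to an Err.
    pub fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()> {
        let ret = unsafe {
            ffi::vlc_stream_Read(self.0, buf.as_mut_ptr() as *mut c_void, buf.len())
        };
        if ret < 0 { Err(()) } else { Ok(ret as usize) }
    }
}
[/code]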

Be careful if you plan to implement Drop for this type: is the Rust code supposed to free that object? Is there some reference counting involved? Here is an example of Drop implementation from the openssl crate:

[code lang=rust]
pub struct SslContextBuilder(*mut ffi::SSL_CTX);

impl Drop for SslContextBuilder {
    fn drop(&mut self) {
        unsafe { ffi::SSL_CTX_free(self.as_ptr()) }
    }
}
[/code]

Remember that it's likely the host application has a lot of infrastructure to keep track of memory, and as a rule, we should reuse the tools it offers for the code at the interface between Rust and C. See the Rust FFI omnibus for more examples of safe wrappers you can write.

Side note: as of now (2017/07/10), custom allocators are still not stable.

Exporting Rust code to be called from C

Since the host application is written in C, it might need to call your code. This is quite easy in Rust: you write extern "C" wrapper functions that expose your Rust code with C-compatible signatures.

Here we will use as an example the inverted index library for mobile apps that I wrote for a conference. In this library, we have an Index type that we want to use from Java. Here is its definition:

[code lang=rust]
#[repr(C)]
pub struct Index {
    pub index: HashMap<String, HashSet<i32>>,
}
[/code]

This type has a few methods we want to provide:

[code lang=rust]
impl Index {
    pub fn new() -> Index {
        Index {
            index: HashMap::new(),
        }
    }

    pub fn insert(&mut self, id: i32, data: &str) {
        [...]
    }

    pub fn search_word(&self, word: &str) -> Option<&HashSet<i32>> {
        self.index.get(word)
    }

    pub fn search(&self, text: &str) -> HashSet<i32> {
        [...]
    }
}
[/code]

First, we need to write the functions to allocate and deallocate our index. Every instance handed out to C will be wrapped in a Box.

[code lang=rust]
#[no_mangle]
pub extern "C" fn index_create() -> *mut Index {
    Box::into_raw(Box::new(Index::new()))
}
[/code]

The Box type represents an owned heap allocation. When the box is dropped, the underlying data is dropped as well and the memory is freed. The following function takes ownership of its argument, so the index is dropped at the end:

[code lang=rust]
#[no_mangle]
pub extern "C" fn index_free(ptr: *mut Index) {
    let _ = unsafe { Box::from_raw(ptr) };
}
[/code]

Now that allocation is handled, we can work on a real method. The following function takes an index, an id for a text, and the text itself as a C string (i.e., terminated by a null character).

Since we're kinda writing C in Rust here, we have to first check whether the pointers are null. Then we can transform the C string into a byte slice, and check that it is correctly encoded as UTF-8 before inserting it into our index.

[code lang=rust]
#[no_mangle]
pub extern "C" fn index_insert(index: *mut Index, id: i32, raw_text: *const c_char) {
    if index.is_null() || raw_text.is_null() { return }
    let slice = unsafe { CStr::from_ptr(raw_text).to_bytes() };
    if let Ok(text) = str::from_utf8(slice) {
        // dereferencing the raw pointer requires an unsafe block
        unsafe { (*index).insert(id, text) };
    }
}
[/code]

Most of the code in those kinds of wrappers is just there to transform between C and Rust types and to check that the arguments coming from C are valid. Even if we have to trust the host application, we should program defensively at the boundary.

There are other methods we could implement for the index; we'll leave those as an exercise for the reader :)
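To give an idea of the shape such a wrapper could take, here is a sketch of an exported search_word, assuming the same imports as the previous examples. The buffer-filling signature is my own invention for this example, not part of the original library:

[code lang=rust]
#[no_mangle]
pub extern "C" fn index_search_word(index: *const Index, word: *const c_char,
                                    ids: *mut i32, capacity: usize) -> isize {
    if index.is_null() || word.is_null() || ids.is_null() { return -1 }
    let slice = unsafe { CStr::from_ptr(word).to_bytes() };
    let word = match str::from_utf8(slice) {
        Ok(w) => w,
        Err(_) => return -1,
    };
    match unsafe { &*index }.search_word(word) {
        None => 0,
        Some(set) => {
            // copy at most `capacity` ids into the caller-provided buffer
            let n = ::std::cmp::min(set.len(), capacity);
            for (i, id) in set.iter().take(n).enumerate() {
                unsafe { *ids.offset(i as isize) = *id };
            }
            n as isize
        }
    }
}
[/code]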

Now, we need to write the C definitions to import those functions and types:

[code lang=C]
typedef struct Index Index;

Index* index_create(void);
void index_free(Index* index);
void index_insert(Index* index, int32_t id, char const* raw_text);
[/code]

We defined Index as an opaque type here. Rust structures can be made compatible with C structures, so we could export the real type, but because it only contains a Rust-specific type (HashMap), it is better to hide it completely and write accessors and wrappers.

Generating bindings with rusty-cheddar

Writing function imports from C to Rust is tedious, so we have bindgen for this. We also have a great tool to go the other way: rusty-cheddar.

In the same way, it can be used from a build script:

[code lang=rust]
extern crate cheddar;

fn main() {
    cheddar::Cheddar::new().expect("could not read definitions")
        .run_build("include/main.h");
    cheddar::Cheddar::new().expect("could not read definitions")
        .module("index").expect("malformed module path")
        .insert_code("#include \"main.h\"")
        .run_build("include/index.h");
}
[/code]

Here we run rusty-cheddar a first time without specifying the module: it defaults to generating a header for the definitions in src/lib.rs. The second run specifies a different module, and inserts a file inclusion at the top of the generated header.

It can be a good idea to commit the generated headers, since you will see immediately if you changed the interface in a breaking way.

Integrating with the build system

As you might know, we can make dynamic libraries and executables with rustc and cargo. But often, the host application will have its own build system, and it might disagree with the way cargo builds its projects. So we have multiple strategies:

  • build Rust code separately, store libraries and headers in Maven or something (don't laugh, I've worked with such a system once, and it was actually great)
  • try to let rustc build dynamic libraries from inside the build system. We tried that for VLC and it was not great at all
  • build a static library from inside or outside the build system, and include it among the libraries at link time. This was done in Rusticata
  • build an object file and let the build system link it. This is what we ended up doing with VLC

Building a static library is as easy as specifying crate-type = ["staticlib"] in your Cargo.toml file. To build an object file, use the command cargo rustc --release -- --emit obj. You can see how we added it to the autotools usage in VLC.
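For reference, a minimal sketch of the Cargo.toml side of the static library approach (the library name here is a placeholder):

[code lang=toml]
# produce a libvlcmodule.a artifact that the host build system can link
[lib]
name = "vlcmodule"
crate-type = ["staticlib"]
[/code]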

Unfortunately, for this part we still do not have automated ways to fix the issues. Maybe with some time, people will write scripts for autotools,
CMake and others to handle Rust and Cargo.

Side note on reproducible builds: if you want to pin the set of Rust dependencies used in your project and make them always available, you can use cargo-vendor to store them in a specific folder.

As you might have guessed, this is the most complex part, for which I have no good generic answer. I'd recommend that you spend the most time on this during the project's prototyping phase: import very little C code, export very little Rust code, try to make it build entirely from within the host application's build system. Once this is done, extending the project will get much easier. You really don't want to discover this task at the end of your project and try to retrofit your code in there.

Going further

While this article only scratches the surface of Rust rewrites, I hope it provides a good starting point on the tools and techniques you can apply. Any rewrite will be a large and complex project, but the result is worth the effort. The code you write will be stronger, and Rust's type system will force you to review the assumptions made in the C version. You might even find better ways to write it once you start refactoring your code in a more Rusty way, safely hidden behind your wrappers.

Why you should, actually, rewrite it in Rust

You might have seen those obnoxious "you should rewrite it in Rust" comments here and there:

At every new memory vulnerability in well-known software, there's that one person saying Rust would have avoided the issue. We get it, it's annoying, and it does not help us grow Rust. This attitude is generally frowned upon in the Rust community. You can't just show up in someone's project telling them to rewrite everything.

So, why am I writing this? Why would I try to convince you, now, that you should actually rewrite your software in Rust?

That's because I have been working on this subject for a long time now:

  • I did multiple talks on it
  • I even co-wrote a paper
  • I did it both as client work and personal work

So, I'm committed to this, and yes, I believe you should rewrite some code in Rust. But there's a right way to do it.

Why rewrite stuff?

Our software systems are built on sand. We have gotten pretty good at maintaining and fixing them over the years, but the cracks are showing. We still have not definitively fixed most of the low-level vulnerability classes: stack buffer overflows (yes, those still exist), heap overflows, use after free, double free, off-by-one errors; the list goes on. We have some mitigations, like DEP, ASLR, stack canaries, control flow integrity, fuzzing. Large projects with funding, like Chrome, can resort to sandboxing parts of their application. The rest of us can still run those applications inside a virtual machine.

This situation will not improve on its own. There's a huge amount of old (think 90s), bad quality, barely maintained code that we reuse everywhere endlessly. The good thing with hardware is that at some point, it gets replaced. Software just gets copied again. Worse, with the development of IoT, a lot of the code that ships will never be updated. It's likely that some of those old libraries will still be there 15 or 20 years from now.

Let's not shy away from the issue here. Most of this code is written in C or C++ (and usually an old version of those languages). It is well known that it is hard to write correct, reliable software in those languages. Think of all the security-related things you have to keep track of in a C codebase:

  • pointer arithmetic
  • allocations and deallocations
  • data is mutable by default
  • functions return integers that stand in for pointers and error codes, and errors can be implicitly ignored
  • type casts, overflows and underflows are hard to track
  • buffer bounds in indexing and copying
  • all the undefined behaviours

Of course, some developers can do this work. Of course, there are sanitizers. But it's an enormous effort to sustain every day, for every project.

Those languages are well suited for low level programming, but require extreme care and expertise to avoid most of those issues. And even then, we assume the developers will always be well rested, focused and careful. We're only humans, after all. Note that in 2017, there are still people claiming that a C developer with sufficient expertise would avoid all those issues. It's time we put this idea to rest. Yes, some projects can avoid a lot of vulnerabilities, with a team of good developers, frequent code reviews, a restricted set of features, funding, tools, etc. Most projects cannot. And as I said earlier, a lot of the code is not even maintained.

So we have to do something. We must make our software foundations stronger. That means fixing operating systems, drivers, libraries, command line tools, servers, everything. We might not be able to fix most of it today, or the next year, but maybe 10 years from now the situation will have improved.

Unfortunately, we cannot rewrite everything. If you have ever attempted to rewrite a project from scratch, you know that while you can avoid some of the mistakes you made before, you will probably introduce a lot of regressions and new bugs. It also fails on the human side: if the project has maintainers, they would need to work on the new and old versions at the same time. Worse, you would have to teach them the new approach, the new language (which they might not like), and plan an upgrade to the new project for all users.

This is not doable, and this is the part most people asking for project rewrites in Rust do not understand. What I'm advocating for is much simpler: surgically replace weaker parts but keep most of the project intact.

How

Most of the issues will happen around IO and input data handling, so it makes sense to focus there. That's where the code manipulates buffers and parsers and does a lot of pointer arithmetic. It is also the least interesting part for software maintainers, since it is usually not where you add useful features or business logic. And that logic is usually working well, so you do not want to replace it. If we could rewrite a small part of an application or library without disrupting the rest of the code, we would get most of the benefits without the issues of a full rewrite. It is the exact same project, with the same interface, the same distribution packaging, and the same developer team as before. We would just make an annoying part of the software stronger and more maintainable.

This is where Rust comes in. It is focused on providing memory safety and thread safety while keeping the code performant and the developer productive. As such, it is generally easier to get safe, reliable code in production by writing basic Rust than by being a competent, well-rested C developer using all the tools available.

Most of the other safe languages have strong requirements, like a runtime and a garbage collector. And usually, they expect to be the host application (how many languages assume they will handle the process's entry point?). Here, we are guests in someone else's house. We must integrate nicely and quietly.

Rust is a strong candidate for this because:

  • it can easily call C code
  • it can easily be called by C code (it can export C compatible functions and structures)
  • it does not need a garbage collector
  • if you want, it does not even need to handle allocations
  • the Rust compiler can produce static and dynamic libraries, and even object files
  • the Rust compiler avoids most of the memory vulnerabilities you get in C (yes, I had to mention it)

So you can actually take a piece of C code inside an existing project, import the C structures and functions to access them from Rust, rewrite the code in Rust, export the functions and structures from Rust, compile it and link it with the rest of the project.

If you don't believe it's possible, take a look at these two examples:

  • Rusticata integrates Rust parsers written with nom in Suricata, an intrusion detection system
  • a VLC media player plugin to parse FLV files, written entirely in Rust

You get a lot of benefits from this approach. First, Rust has great package management with Cargo and crates.io. That means you can separate some of the work into different libraries. See as an example the list of parsers from the Rusticata project. You can test them independently, and even reuse them in other projects. The FLV parser I wrote for VLC can also work in a Rust GStreamer plugin. You can also make a separate library for the glue with the host application: I'm working on vlc_module exactly for that purpose, making Rust VLC plugins easier to write.

This approach works well for applications with a plugin-oriented architecture, but you can also rewrite core parts of an application or library. The biggest issue is the high coupling of C code, but it is usually possible to rewrite it bit by bit while keeping a common interface. Whenever you have rewritten some coupled parts of a project, you can take time to refactor it in a more Rusty way, and leverage the type system to help you. A good example of this is the rewrite of the Zopfli library from C to Rust.

This brings us to another important part of that infrastructure rewrite work: while we can rewrite part of an existing project without being too intrusive, we can also rewrite a library entirely while keeping exactly the same C API. You can have a Rust library, dynamic or static, with the exact same C header, that you can drop into a project to replace the C one. This is a huge result. It's like replacing a load-bearing wall in an existing building. It is not an easy thing to achieve, but once it's done, you can improve a lot of projects at once, provided your distribution's package manager supports the replacement, or other projects take the time to upgrade.

This is a lot of work, but every time we advance a little, everybody can benefit from it, and it will add up over the years. So we might as well start now.

Currently, I'm focused on VLC. This is a good target because it's a popular application that's often part of the basic stack of any computer (browser, office suite, media player). So it's a big target. But take a look at the list of dependencies in most web applications, or the dependency graph of common distributions. There is a lot of low hanging fruit there.

Now, how would you actually perform those rewrites? You can check out the next post and the paper explaining how we did it in Rusticata and VLC.

This year in nom: 2.0 is here!

Nearly one year ago, on November 15th 2015, I released the 1.0 version of nom, the fast parser combinators library I wrote in Rust. A lot happened around that project, and I have been really happy to interact with nom users around the world.

TL;DR: it's new nom day! The 2.0 release is here! Read the changelog. Follow the upgrade documentation if it breaks stuff.


Interesting usage

I wouldn't be able to list all the projects using nom on this page, even the subset present on crates.io, but here are a few examples of what people built with nom:

And a lot of other projects. As a side note, people apparently like to build parsers for flac, bittorrent and bitcoin stuff, LISP and Scheme tokenizers and, oddly, ASN.1 libraries :D

I have been really humbled by what people achieved with this little library, and I hope it will enable even more awesome projects!

Growth and stabilization

The goal before 1.0 was to get a usable parsing library, and after 1.0, to add features people were missing and explore new ideas. A lot of code was contributed for bitstream and string parsing, and adding a lot of useful combinators like "peek!", "separated_list!" or "tuple!".

Unfortunately, a few parts of nom became increasingly painful to maintain and support, so 2.0 was a good opportunity to clean them up, and to add more features while we're at it.

The "chain!" combinator, which everybody uses to parse a sequence of things and accumulate the results in structs or tuple, is now deprecated, and will be replaced by "do_parse!", a simpler alternative. There are also a lot of specific helpers to make your code nicer, like "pair!", "preceded!", "delimited!", "separated_pair!", "separated_list!" and "delimited!". Yes, I went to great lengths to make sure you stop using chain :)

The "length_value!" and other associated combinators were refactored, to have more sensible names and behaviours. "eof", eol" and the basic token parsers like "digit" or "alphanumeric" got the same treatment. Those can be a source of issues in the upgrade to 2.0, but if the new behaviour does not work in your project, replacing them is still easy with the "is_a!" combinator and others.

At last, I changed the name of the "error!" macro that was conflicting with the one from the log crate. I hoped that by waiting long enough, the log people would change their macro, but it looks like I lost :p

New combinators

A few new simple combinators are here:

  • the previously mentioned "do_parse!" makes nicer code than "chain!":

The "chain!" version uses this weird closure-like syntax (while not actually using a closure) with a comma ending the parser list:

named!(filetype_parser<&[u8], FileType>,
  chain!(
    m: brand_name         ~
    v: take!(4)           ~
    c: many0!(brand_name) ,
    || { FileType {
      major_brand: m,
      major_brand_version: v,
      compatible_brands: c
    } }
));

The "do_parse!" version only uses ">>" as separating token, and returns a value as a tuple. If the tuple contains only value, (A) is conveniently equivalent to A.

named!(filetype_parser<&[u8], FileType>,
  do_parse!(
    m: brand_name         >>
    v: take!(4)           >>
    c: many0!(brand_name) >>
    (FileType {
      major_brand: m,
      major_brand_version: v,
      compatible_brands: c
    })
));

"chain!" had too many features, like a "?" indicating a parser was optional (which you can now do with "opt!"), and you could declare one of the values as mutable. All of those and the awkward syntax made it hard to maintain. Still, it was one of the first useful combinators in nom, and it can now happily retire

  • "permutation!" applies its child parser in any order, as long as all of them succeed once
  fn permutation() {
    named!(perm<(&[u8], &[u8], &[u8])>,
      permutation!(tag!("abcd"), tag!("efg"), tag!("hi"))
    );

    let expected = (&b"abcd"[..], &b"efg"[..], &b"hi"[..]);

    let a = &b"abcdefghijk"[..];
    assert_eq!(perm(a), Done(&b"jk"[..], expected));
    let b = &b"efgabcdhijk"[..];
    assert_eq!(perm(b), Done(&b"jk"[..], expected));
    let c = &b"hiefgabcdjk"[..];
    assert_eq!(perm(c), Done(&b"jk"[..], expected));
  }

This one was very interesting to write :)

  • "tag_no_case!" works like "tag!", but compares independently from the case. This works great for ASCII strings, since the comparison requires no allocation, but the UTF-8 case is trickier, and I'm still looking for a correct way to handle it
  • "named_attr!" creates functions like "named!" but can add attributes like documentation. This was a big pain point, now nom parsers can have documentation generated by rustdoc
  • "many_till!" applies repeatedly its first child parser until the second succeeds

Whitespace separated formats

This is one of the biggest new additions, and a feature that people have wanted for a long time. A lot of the other Rust parser libraries are designed with programming language parsing in mind, while I started nom mainly to parse binary formats, like video containers. Those libraries usually handle whitespace parsing for you, and you only need to specify the different elements of your grammar: you essentially work on a list of already separated elements.

Previously, with nom, you had to explicitly parse the spaces, tabs and ends of lines, which made the parsers harder to maintain. What we want in the following example is to recognize a "(", an expression, then a ")", and return the expression, but we have to introduce a lot more code:

named!(parens<i64>, delimited!(
    delimited!(opt!(multispace), tag!("("), opt!(multispace)),
    expr,
    delimited!(opt!(multispace), tag!(")"), opt!(multispace))
  )
);

This new release introduces "ws!", a combinator that will automatically insert the separators everywhere:

named!(parens<i64>, ws!(delimited!( tag!("("), expr, tag!(")") )) );

By default, it removes spaces, tabs, carriage returns and line feeds, but you can easily specify your own separator parser and make your own version of "ws!".

This makes whitespace separated formats very easy to write. See for example the quickly put together, probably not spec compliant JSON parser I added as a test.

If you're working on a language parser, this should help you greatly.

Architecture changes

Error management

The error management system, which accumulates errors and input positions as it backtracks through the parser tree, is great for some projects like language parsers, but other projects were not using it and still paid a penalty for the vector allocations and deallocations it requires.

In the 2.0 release, this error management system is now activated by the "verbose-errors" feature. Projects that don't use it should build correctly right away, and their parsers could get 30% to 50% faster!
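If your project does rely on it, a sketch of the Cargo.toml change to turn the feature back on:

[dependencies.nom]
version = "^2.0"
features = ["verbose-errors"]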

Input types

One of nom's original assumptions was that it should work on byte slices and strings instead of byte or char iterators, because the CPU likes contiguous data. As always, the reality is a bit more complex than that, but it worked well and made the code very simple: I only passed subslices from one parser to the next.

But I wrongly assumed that because of that design, nom could only work on contiguous data. Carl Lerche made the interesting point that there are only a few places where nom actually needs to read a series of bytes or chars, and those could accommodate other data structures like ropes or lists of buffers.

So I got to work on an abstraction for input types that would work for &[u8] and &str, but also for other types. In the process, I was able to factor most of the &str specific combinators with the &[u8] ones. This will make them easier to maintain in the future.

The result of that work is a list of traits that any input type should implement to be usable with nom. I experimented a bit with the BlockBuf type, and this approach looks promising. I expect that people will find cool applications for this, like parsers returning references to not yet loaded data, or blocking a coroutine on a tag comparison until the data is available.

A smooth upgrade process

For the 1.0 release, I chose a few projects using nom and tried to build them, to test the new version and document the upgrade. This was so useful that I did it again for 2.0, so if you're lucky, you maintain one of the 30 crates I tested, and you received a pull request doing that upgrade for you. Otherwise, I wrote an upgrade documentation that you can follow to fix the migration issues. You're still lucky, though, because most crates will build as-is (or only require a one line fix in Cargo.toml).


I'll write soon about that process and the benefits you can get by applying it to your projects.

The future

I have a lot of ideas for the next version, also a lot of pull requests to merge and issues to fix. Not everything could make it into the 2.0, otherwise I would never have released it.

In short, the plan:

  • completely rewrite the producers and consumers system. It is not very usable right now; it could be replaced by an implementation based on futures
  • improve the performance. I got a good enough library by choosing the most naive solutions, but there are lots of points I could improve (especially in helping LLVM generate faster code)
  • implement a new serialization library. I believe there is some room for a serialization system that does not rely on automatic code generation, and it would go well with nom
  • continue my work on writing nom demuxers for VLC media player. I have a good proof of concept, now I need to make it production ready
  • add new, interesting examples: indentation based programming languages, tokio integration, integration in high performance networking systems
  • I'll release very soon a large networking tool that relies heavily on nom. Expect some big news :)

That's it, now go and upgrade your code, you'll enjoy this new version!

 

PoC: using LLVM's profile guided optimization in Rust


What does profile-guided optimization mean?

Some languages have a JIT (Just In Time) compiler available at runtime, that can optimize the executed code depending on current execution patterns. This is, in large part, the cause of the performance of Lua and the JVM. They can start a bit slow, but by accumulating information on actual running code, they make it faster and faster for the current load. PfLua is a great example: the firewall rules are optimized again and again, until the current network traffic is handled as quickly as possible.

When you use other languages, such as C, you usually cannot optimize the application once it is compiled, except with an optimization technique known as Profile-Guided Optimization. From Wikipedia:

Profile-guided optimization (PGO, sometimes pronounced as pogo), also known as profile-directed feedback (PDF), is a compiler optimization technique in computer programming that uses profiling to improve program runtime performance.

It relies on profiling the compiled application while it runs with the expected, real world load (web traffic, calculations, etc.), and feeding this profiling information back to the compiler. On the next build, the compiler will have more information about which parts of the program are less used, which branches are taken more often, the expected values in a range, etc. Instead of guessing how the program will behave in order to choose optimizations, the compiler has true information, and can optimize more precisely. There's one issue with the process: you need two compilations and a profiling run to generate the final executable. But it gets easier when you automate it, as we can see in the Firefox build process.

PGO in LLVM

While it has been available in other systems for a long time (Visual Studio 2005, the Intel compiler ICC for Itanium), it appeared only recently in LLVM. It has since been applied successfully to Xcode (Objective-C, Swift) and LDC, the D compiler.

LLVM has a great feature: it uses an Intermediate Representation (IR), a kind of high level assembly language. It applies its optimizations and machine code generation to that representation. If you write a compiler for a new language, targeting the LLVM IR gives you those features (nearly) for free.

In practice, compiler frontends choose which features they use, so you may not access everything LLVM has to offer. In particular, the Rust compiler, as of now (April 2016), provides a llvm-args option, but that option filters what you can send to LLVM, so we cannot use PGO here.

PGO in Rust

Still, with rustc, you can directly generate the IR, or its binary encoding, named bitcode:

rustc --emit llvm-bc main.rs
# or, with cargo:
cargo rustc -- --emit llvm-bc

The approach I tried here is to take that bitcode, and manually apply LLVM's transformations until I get a compiled executable. This is not really usable for now, especially because I chose an example with very few dependencies. With more dependencies, the compilation and linking will get more complex and unmanageable manually.

LLVM comes with a few commands that you can use to build code manually. The first one is opt, and it applies optimizations and instrumentation on the bitcode file (here, the file target/release/pgo.bc):

opt-3.8 -O2 -pgo-instr-gen -instrprof target/release/pgo.bc -o pgo.bc

The new bitcode file contains code to profile the end application (mainly by counting how often we use each code path). We can now convert that bitcode file to an object file, and link it using clang:

llc-3.8 -O2 -filetype=obj pgo.bc
clang-3.8 -O2 -flto -fprofile-instr-generate pgo.o -L/usr/local/lib/rustlib/x86_64-apple-darwin/lib -lstd-ca1c970e -o pgo

Note: I built my own rustc from source, so your libstd file may not have the same hash. Rust (as of April 2016) uses LLVM 3.7, but we can apply LLVM 3.8's PGO features to its output, as the bitcode format is apparently backward compatible. I use OS X, and Homebrew's LLVM 3.8 has compilation issues, so I needed to build the compiler runtime from source. It's a proof of concept, not production code ;)

We will now run the program we just built, preferably with production data and traffic. It will automatically generate a default.profraw file containing the profiling information. This file must be converted, with llvm-profdata, to a format that opt will understand:

llvm-profdata-3.8 merge -output=pgo.profdata default.profraw

This .profdata file will now be used in the compilation steps:

opt-3.8 -O2 -pgo-instr-use -pgo-test-profile-file=pgo.profdata target/release/pgo.bc -o pgo-opt.bc
llc-3.8 -O2 -filetype=obj pgo-opt.bc
clang-3.8 -O2 -flto -fprofile-instr-use=pgo.profdata pgo-opt.o -L/usr/local/lib/rustlib/x86_64-apple-darwin/lib -lstd-ca1c970e -o pgo-opt

We now have an executable compiled using profiling information. Is it fast?

The benchmarks

The program I tested is an n-body simulation. It was a great test target since libstd is its only dependency, and the load factor depends on a number given as a command line argument. Here is a test with time (I know it's not the most precise benchmarking tool, but for tenth-of-a-second precision, it works alright):

$ time ./target/release/pgo 1000000000
-0.169075164
-0.169051540

real    1m22.528s
user    1m22.214s
sys     0m0.173s

$ time ./pgo-opt 1000000000
-0.169075164
-0.169051540

real    1m9.810s
user    1m9.687s
sys     0m0.070s

As it turns out, we gain nearly 15% in running time on this program. Other examples could have less impact, but this is encouraging! So, what happened inside our program?

The generated code

I provide assembly dumps of the normal program, generated with cargo build --release, and of the one optimized with PGO. Mostly, the code has been reordered, probably to fit better in cache lines. You can also consult PDF files with call graphs: normal, PGO optimized.

The whole code for this article is available here if you want to reproduce the results or tinker with optimizations yourself.

This is a proof of concept, demonstrating that profile guided optimization could work in Rust. It is probably worthy of integration into rustc, but there's a lot of work before it could be usable. Still, there's a github issue where you can weigh in, if you would like this optimization in your applications.

 

Crypto problems you actually need to solve

To follow up on the small Twitter rant that got people to explain GPG and OTR to me for a whole day, I'll explain the ideas behind this.

There is always a new project around self hosting email and PGP, or building a new encrypted chat app. Let's say it aloud right now: those projects are not trying to improve the situation, they just want to build their own. Otherwise, they would help one of the 150 already existing projects. Worse, a lot of those try to build a business out of it, and end up making a system where you need to completely trust their business (as a side note, building a business is fine, but just be honest about your goals).

There are things that attract new projects, because the concept is easy to get, and this drives developers right into the "I'm smarter than everybody and can do better than existing projects" territory. But there's a reason they fail.

Making an encrypted chat app means solving the same problem as making any chat app (moving people off their existing platform onto a new one), plus writing a safe protocol. Building a whole new infrastructure is not an easy task.

Making email and PGP easier to use is not only a UX issue. There is a complete mental model of how the system works, what its failure modes are, and what the trust levels mean. On top of that, you add email's metadata leaks, the lack of forward secrecy, and key management. Even with the simplest interface, the user has to build that mental model, and understand that other users may have different models. This is pervasive to the way PGP works. It has always been and will always be an expert's tool, and as such unfit for mass deployment. You cannot solve infrastructure problems by teaching people.

You cannot solve infrastructure problems by teaching people

There is a pattern here: people want to build tools that will be the basis of safe communications for the years to come. Those are infrastructure problems, like electricity, running water or Internet access. They cannot be solved by teaching people how to use a tool. The tool has to work. They cannot be solved either by decentralization. At some point, someone has to pay for the infrastructure's maintenance. As much as I like the idea of decentralizing everything, this is not how people solve serious engineering problems. But the difference with other types of business is that infrastructure businesses are dumb pipes. They don't care what's running through them, and you can easily replace one with the other.

There is a strong need for "private by default", authenticated, anonymous communication. This will only come if the basic building blocks are here:

  • multiparty Off-the-Record messaging (or other protocols like Axolotl): forward secret communication currently only works in two-party communication. Adding more members and making the protocols safe against partitions is a real challenge
  • multidevice PGP (or alternate message encryption and authentication system): currently, to use PGP on multiple devices, either you synchronize all your private keys on all your devices, or you accept that some devices will not be able to decrypt messages. This will probably not work unless you have a live server somewhere, or at least some hardware device containing keys
  • redundant key storage: systems where a single key holds everything are very appealing, but they're a nightmare for day-to-day operation. You will lose the key. And the backup. And the other backup. You will end up copying your master key everywhere, encrypted with a password used infrequently. Either it will be too simple and easily crackable, or too complex and you will forget it. How can a friend or family member access the key in case of emergency?
  • private information retrieval: PIR systems are databases that you can query for data without the database learning (within some margin) which piece of data you wanted. You do not need to go wild on homomorphic encryption to build something useful
  • encrypted search: the tradeoffs in search on encrypted data are well known, but there are not enough implementations
  • usable cryptography primitives: there is some work in that space, but people still rely on OpenSSL as their go-to crypto library. Crypto primitives should not even let you make mistakes

Those are worthwhile targets if you know a fair bit of cryptography and security, and know how to build a complete library or product.

But for products, we need better models for infrastructure needs. Something like Tahoe-LAFS is a good model: the user is in control of the data, the provider is oblivious to the content, the user can be the client of multiple providers, and switching from one to another is easy. This does not make for shiny businesses. Infrastructure companies compete mainly on cost, scale, availability, speed and support, since they all provide roughly the same features.

Of course, infrastructure requires standards and interoperability, and this usually does not make good material for startup hype. But everyone wins from this in the end.