Adventures in logging

After working on the Sōzu HTTP reverse proxy for a while, I came up with an interesting approach to logging. Now, why would I come up with my own logger when there are existing solutions in Rust, mainly log and slog? Sōzu's logging library grew out of testing things with log, with requirements changing along the way.

Beginning with log and env_logger

Like a lot of other Rust developers, I started out with log and env_logger:

#[macro_use]
extern crate log;
extern crate env_logger;

fn main() {
  env_logger::init().unwrap();

  info!("starting up");
}

It’s nice and easy: every library that depends on log will use that same set of logging macros (error, warn, info, debug, trace), which dispatch to whatever global logger was defined. Here we use env_logger to define one.

env_logger is useful because it can apply a filter to the log, from an
environment variable:

# show logs at info level and above
RUST_LOG=info ./main

# show logs at info level and above, but also logs at debug level and
# above for the dependency 'dep1'
RUST_LOG=info,dep1=debug ./main

You can also define the filter by module or apply a regular expression.
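For example (hedging a bit here: this is the filter syntax env_logger supported at the time of writing, with the part after a slash interpreted as a regular expression on the log message):

# show logs at debug level and above for the module 'main::handlers'
RUST_LOG=main::handlers=debug ./main

# show logs at debug level and above whose message matches 'hello'
RUST_LOG=debug/hello ./main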

Custom formatter

env_logger allows you to build your own log formatter. This feature is especially important for me, as I like to add metadata to my logs.

Defining a custom formatter with env_logger is quite straightforward:

let format = |record: &LogRecord| {
  format!("{} - {}", record.level(), record.args())
};

let mut builder = LogBuilder::new();
builder.format(format).filter(None, LogLevelFilter::Info);

if env::var("RUST_LOG").is_ok() {
  builder.parse(&env::var("RUST_LOG").unwrap());
}

builder.init().unwrap();

It is easily combined with the filtering and usage of the RUST_LOG environment variable.

Where things get annoying: reducing allocations

If you take a look at env_logger, you’ll see that it allocates a String for every log line written, through the formatting closure.

Let’s get one thing out of the way first: I completely agree with the idea that you should not optimize too eagerly. But I’m working on a networking component that handles a lot of traffic. I have had debugging sessions where I generated tens of gigabytes of logs in a few seconds, and needed almost all of them to debug async IO issues. In those cases, the time spent allocating and deallocating log lines becomes relevant.

So, how would I get a custom log formatter that does not allocate much? As it turns out, when you tell log to use your logger with log::set_logger, it requires something that implements Log. The logger’s log method receives a LogRecord, a structure that’s created on the fly from LogLocation, LogMetadata and Arguments.
The first two are internal to log, so I can’t create them myself. The last one is interesting.

Arguments can be created from the format_args macro. That structure roughly contains the format string split into the substrings that appear between arguments. If you do println!("hello {}!", name), you get a structure that contains "hello ", the content of name and "!". println! and the other formatting macros use this.

You can then use that Arguments with io::Write::write_fmt to write it directly to, say, a file or a socket. And it is implemented so that the individual parts are written one after another instead of allocating one big string.
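As a quick illustration (a minimal sketch, not code from sozu), here is how an Arguments value goes straight to a file without building an intermediate String:

use std::fs::File;
use std::io::Write;

fn main() {
  let mut file = File::create("app.log").unwrap();
  let name = "world";
  // write_fmt writes the pieces of the format string and the arguments
  // one after another, without allocating a String first
  file.write_fmt(format_args!("hello {}!\n", name)).unwrap();
}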

So, how do I use that?

Well, it turns out that, basically, I can’t. If I implement Log, I get a LogRecord which gives me an &Arguments, while write_fmt requires an Arguments by value. So now I have to clone it, which somewhat defeats the purpose.

So let’s write our own then

There was another reason for the custom logging library: using a custom logging backend. Having the option between stdout and stderr is fine, but I might want to send them to a file or a socket.

So I started writing a specific logging library for sozu. First, copying the log filtering from env_logger. That part is mostly straightforward, but that’s still a lot of code to copy around.

The logging macros each specify a logging level, then all call the same common macro:

#[macro_export]
macro_rules! error {
    ($format:expr, $($arg:tt)*) => {
        log!($crate::logging::LogLevel::Error, $format, "ERROR", $($arg)*);
    };
    ($format:expr) => {
        log!($crate::logging::LogLevel::Error, $format, "ERROR");
    };
}

The main logging macro has two interesting parts. First, we define some static metadata (that’s coming from the log crate):

static _META: $crate::logging::LogMetadata = $crate::logging::LogMetadata {
  level:  $lvl,
  target: module_path!(),
};

That object will not be allocated over and over; all the data in it is defined at compile time.

Then we call the logger itself (ignore the line with try_lock for now):

if let Ok(mut logger) = LOGGER.try_lock() {
  logger.log(
    &_META,
    format_args!(
      concat!("{}\t{}\t{}\t{}\t{}\t", $format, '\n'),
        ::time::now_utc().rfc3339(), ::time::precise_time_ns(), *$crate::logging::PID,
        $level_tag, *$crate::logging::TAG));
}

So we give this metadata structure to our logger, then we make an Arguments structure with format_args!. The concat! macro is there to concatenate the formatting string with the custom prefix. That way, I could write debug!("hello {}", name) and have the resulting format string be "{}\t{}\t{}\t{}\t{}\thello {}\n", generated at compile time and transformed through the format_args call.

I added the date in ISO format, along with a monotonic timestamp (handy when multiple workers write logs concurrently), the process identifier, the log level and a process-wide logging tag (to better identify workers).
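A resulting line looks roughly like this (tab-separated, with made-up values):

2017-07-24T10:00:00Z	1500890400000000000	4242	INFO	WRK-01	starting up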

So this starts looking good, right? Now how do we write this to configurable backends? Some backends already implement io::Write, others will need an intermediary buffer:

pub fn log(&mut self, meta: &LogMetadata, args: Arguments) {
    if self.enabled(meta) {
        match self.backend {
            LoggerBackend::Stdout(ref mut stdout) => {
                stdout.write_fmt(args);
            },
            LoggerBackend::Unix(ref mut socket) => {
                socket.send(format(args).as_bytes());
            },
            LoggerBackend::Udp(ref mut socket, ref address) => {
                socket.send_to(format(args).as_bytes(), address);
            }
            LoggerBackend::Tcp(ref mut socket) => {
                socket.write_fmt(args);
            },
        }
    }
}

For Unix sockets and UDP, instead of formatting to a freshly allocated String on the fly, it should probably use a reusable buffer (hey, anyone want to implement that?). Stdout and a TcpStream can be written to directly. Adding buffers might still be a good idea here, depending on what you want, because that write could fail. Would you rather have a logger that sends a partial log line if it can’t write on the socket, or one using a buffer that can fill up?
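A minimal sketch of what that buffering could look like (hypothetical code, not what sozu ships; BufferedUdp and its field names are made up):

use std::fmt::Arguments;
use std::io::Write;
use std::net::{SocketAddr, UdpSocket};

pub struct BufferedUdp {
  socket:  UdpSocket,
  address: SocketAddr,
  buffer:  Vec<u8>,
}

impl BufferedUdp {
  pub fn log(&mut self, args: Arguments) -> std::io::Result<()> {
    // format into the reusable buffer instead of allocating a new String
    self.buffer.clear();
    self.buffer.write_fmt(args)?;
    self.socket.send_to(&self.buffer, &self.address)?;
    Ok(())
  }
}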

So, now, what’s next? Originally, sozu worked as one process with multiple threads, but it evolved into a bunch of single-threaded processes. And that raises an interesting question: how do you write logs concurrently?

Highly concurrent logs

It turns out that problem is not so easy. Most solutions fall into this list:

  • every thread or process writes to stdout or a file at the same time
  • synchronized access to the logging output
  • one common logger everybody sends to
  • every thread or process has its own connection to syslog (or even its own file to write to)

The first solution is easy, but has a few issues. First, writing to stdout is slow, and it can quickly overwhelm your terminal (yes, I know you can redirect to a file). Second, it’s not really synchronized, so you might end up with incoherently interleaved log lines.

So we often move to the second solution, where access to the logging stream is protected by a mutex. Now you get multiple threads or processes that might spend their time waiting on each other to serialize and write logs. Having all threads share one lock can quickly affect performance. It’s generally a better idea to have every thread or process run independently from the others (it’s one of the principles in sozu’s architecture; you can learn more about it in this French talk).
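For reference, the synchronized pattern looks like this (a minimal sketch, assuming the output implements io::Write):

use std::fmt::Arguments;
use std::io::Write;
use std::sync::Mutex;

pub struct SyncLogger<W: Write> {
  output: Mutex<W>,
}

impl<W: Write> SyncLogger<W> {
  pub fn log(&self, args: Arguments) {
    // every thread serializes on this lock before writing
    if let Ok(mut output) = self.output.lock() {
      let _ = output.write_fmt(args);
    }
  }
}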

Alright then, moving on to the third solution: let’s have one of the threads or processes handle the logging, and send it the logs via cross-thread channels or IPC. That will surely be easier than having everybody share a lock, right? This is also interesting because you can offload serialization and filtering to the logging thread. Unfortunately, that means one thread will handle all the logs, and it can get overwhelmed. That also means a lot of data moving between workers (if using processes).

The last solution relies on external services: use the syslog daemon on your system, or a log aggregator somewhere on another machine, that every worker will send the logs to. Let that service interleave logs, and maybe scale up if necessary. Since in a large architecture, you might have such an aggregator, talking to it directly might be a good idea (oh hey BTW I wrote a syslog crate if you need).

With sozu, I ended up with a mixed solution. You can send the logs to various backends. If you choose stdout, all workers will write to it directly without synchronization, which is mostly fine if you don’t have much traffic. But if you want, each worker can open its own connection to the logging service.

Now that concurrency is taken care of, there’s one last issue, one that annoyed me for months: how do you use the logger from everywhere in the code, when it’s obviously one big global mutable object?

The dreaded logging singleton

One thing I like about the macros from the log crate: you can use them anywhere in your code, and it will work. The other approach, used in languages like Haskell, or proposed by slog, is to carry your logger around, in function arguments or structure members. I can understand the idea, but I don’t like it much, because I often need to add a debug call anywhere in the code, and when it’s deep in a series of five method calls, with those methods coming from traits implemented here and there, updating the types quickly gets annoying.

So, even if the idea of that global mutable logger singleton usually looks like a bad pattern, it can still be useful. In log, the log macro calls the log::Log::log method, getting the logger instance from the logger method. That method gets the logger instance from a global pointer to the logger, with an atomic integer used to indicate if the logger was initialized:

static mut LOGGER: *const Log = &NopLogger;
static STATE: AtomicUsize = ATOMIC_USIZE_INIT;

const UNINITIALIZED: usize = 0;
const INITIALIZING: usize = 1;
const INITIALIZED: usize = 2;

[...]

pub fn logger() -> &'static Log {
    unsafe {
        if STATE.load(Ordering::SeqCst) != INITIALIZED {
            static NOP: NopLogger = NopLogger;
            &NOP
        } else {
            &*LOGGER
        }
    }
}

So how can that global mutable pointer be used from any thread? That’s because the Log trait requires Send and Sync. As a reminder, Send means it can be sent to another thread, Sync means it can be shared between threads.

That’s a cool trick, but usually we employ another pattern:

lazy_static! {
  pub static ref LOGGER: Mutex<Logger> = Mutex::new(Logger::new());
  pub static ref PID:    i32           = unsafe { libc::getpid() };
  pub static ref TAG:    String        = LOGGER.lock().unwrap().tag.clone();
}

lazy_static allows you to define static variables that will be initialized at runtime, at first access (using the same pattern as log, with std::sync::Once). Since our logger’s log method needs an &mut, it’s wrapped in a Mutex. That’s where the call to try_lock comes from.

We might think the mutex is costly compared to log’s solution, but remember that the logger instance has to be Sync, so depending on your implementation, there might be some synchronization somewhere anyway. Except that for sozu, it’s not the case! Each worker is single threaded and has its own instance of the logger (each possibly with its own connection to the logging service). Can’t we have a logging system that does not require that mutex everywhere?

Removing the last Mutex

This is a problem that annoyed me for months. It’s not that I really mind the cost of that mutex (since no other thread ever touches it). It’s just that I’d feel better not using one when I don’t need it 🙂

And the solution to that problem got quite interesting. To mess around a bit, here’s a playground of the logging solution based on lazy_static. You’ll see why the code in Foo::modify is important.

There’s a feature you can use to have a global variable available anywhere in a thread: thread_local. It uses thread local storage with an UnsafeCell to initialize and provide a variable specific to each thread:

thread_local!(static FOO: RefCell<u32> = RefCell::new(1));

FOO.with(|f| {
    assert_eq!(*f.borrow(), 1);
    *f.borrow_mut() = 2;
});

So I tried to use this with my logger, but encountered an interesting bug, as you can see in another playground. I replaced my logging macro with this:

#[macro_export]
macro_rules! log {
    ($format:expr, $($arg:tt)+) => ({
        {
            LOGGER.with(|l| {
                l.borrow_mut().log(
                    format_args!(
                        concat!("\t{}\t", $format, '\n'),
                        "a", $($arg)+));
            });
        }
    });
}

And got this error:

error[E0502]: cannot borrow `self` as immutable because `self.opt.0` is also borrowed as mutable
  --> src/main.rs:36:21
   |
36 |         LOGGER.with(|l| {
   |                     ^^^ immutable borrow occurs here
...
54 |         if let Some(ref mut s) = self.opt {
   |                     --------- mutable borrow occurs here
55 |             log!("changing {} to {}", self.bar, new);
   |             -----------------------------------------
   |             |                         |
   |             |                         borrow occurs due to use of `self` in closure
   |             in this macro invocation
56 |         }
   |         - mutable borrow ends here

When implementing this inside sozu’s source, I got about 330 errors like this one… So what happened? The with method takes a closure. Since we use a macro, if we pass self.bar as an argument, it ends up inside the closure. That becomes an issue with anything that has already been mutably borrowed somewhere.

I tried a few things, like calling format_args outside the closure, but I got the error error[E0597]: borrowed value does not live long enough. This is apparently a common problem with format_args.

But the real solution came from tomaka, with some macro trickery, as seen in one last playground:

#[macro_export]
macro_rules! log {
    ($format:expr $(, $args:expr)*) => ({
        log!(__inner__ $format, [], [a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v]
             $(, $args)*)
    });

    (__inner__ $format:expr, [$($transformed_args:ident),*], [$first_ident:ident $(, $other_idents:ident)*], $first_arg:expr $(, $other_args:expr)*) => ({
        let $first_ident = &$first_arg;
        log!(__inner__ $format, [$($transformed_args,)* $first_ident], [$($other_idents),*] $(, $other_args)*);
    });

    (__inner__ $format:expr, [$($final_args:ident),*], [$($idents:ident),*]) => ({
        LOGGER.with(move |l| {
          l.borrow_mut().log(
            format_args!(
              concat!("\t{}\t", $format, '\n'),
              "a" $(, $final_args)*));
        });
    });
}

The basic idea is that we could avoid the borrowing issue by doing an additional borrow. But since some of the arguments might be expressions (like 1+1 or self.size), we store a reference to each of them in a local variable, with let $first_ident = &$first_arg;.

We cannot create variable or function names in macros out of thin air (sadly, because that would be extremely useful), so we instead make recursive macro calls, consuming arguments one after another.
In [$($transformed_args:ident),*], [$first_ident:ident $(, $other_idents:ident)*], $first_arg:expr $(, $other_args:expr)*,
transformed_args accumulates the idents (variable names) in which we stored the data.

[$first_ident:ident $(, $other_idents:ident)*] matches on the list that started as [a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v], to get the first element of that list (storing it in $first_ident) and use it as a variable name. As you might have guessed, that means I won’t be able to write a log line with more than 22 arguments. That’s a limitation I can live with.

The $first_arg:expr $(, $other_args:expr)* part matches on the log call’s arguments, and gets the first in the list as $first_arg. We then use those in the line let $first_ident = &$first_arg; and recursively call log!, adding the variable name $first_ident to the list of transformed arguments, and passing along the rest of the variable names and of the log call’s arguments:

log!(__inner__ $format, [$($transformed_args,)* $first_ident], [$($other_idents),*] $(, $other_args)*);

Once all of the logger’s arguments are consumed, we can call format_args on the list of $transformed_args:

(__inner__ $format:expr, [$($transformed_args:ident),*], [$($idents:ident),*]) => ({
    LOGGER.with(move |l| {
        l.borrow_mut().log(
            format_args!(
                concat!("\t{}\t", $format, '\n'),
                "a" $(, $transformed_args)*));
    });
});

and it works!
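To make the trick concrete, here is roughly what log!("changing {} to {}", self.bar, new) expands to (simplified by hand; the real expansion nests blocks):

// the borrows happen outside the closure...
let a = &self.bar;
let b = &new;
LOGGER.with(move |l| {
  // ...so the closure only captures the references a and b, not self
  l.borrow_mut().log(
    format_args!(
      concat!("\t{}\t", "changing {} to {}", '\n'),
      "a", a, b));
});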

So, that last part may not be completely relevant to your logger implementation, but I thought
it was quite cool 🙂

Despite my issues with the log crate, it’s quite useful and supported by a lot of libraries. It is currently getting a lot better as part of the libz blitz. I’d also encourage you to check out slog. I haven’t felt the need to integrate it in sozu yet, but it can be interesting for new projects, as it comes with an ecosystem of composable libraries to extend it.


How to rewrite your project in Rust

In a previous post, I explained why rewriting existing software in Rust could be a good idea. The main point being that you should not rewrite the whole application, but replace the weaker parts without disturbing most of the code, to strengthen the codebase without disruption.

I also provided pointers to projects where other people and I did it successfully, but without giving too many details. So let’s get a real introduction to Rust rewrites now. This article requires a little bit of knowledge about Rust, but you should be able to follow it even as a beginner.

As a reminder, here are the benefits Rust brings to a rewrite:

  • it can easily call C code
  • it can easily be called by C code (it can export C compatible functions and structures)
  • it does not need a garbage collector
  • if you want, it does not even need to handle allocations
  • the Rust compiler can produce static and dynamic libraries, and even object files
  • the Rust compiler avoids most of the memory vulnerabilities you get in C (yes, I had to mention it)
  • Rust is easier to maintain than C (this is debatable, but not the point of this article)

As it turns out, this is more or less the plan to replace C code with Rust:

  • import C structures and functions in Rust
  • import Rust structures and functions from C
  • reuse the host application’s memory allocations whenever possible
  • write code (yes, we have to do it at some point)
  • produce artefacts that can be linked with the host application
  • integrate with the build system

We’ll see how to apply this with examples from the Rust VLC plugin.

Import C structures and functions in Rust

Rust can easily use C code directly, by writing function and structure definitions. A lot of the techniques you would use for this come from the “unsafe Rust” chapter of “The Rust Programming Language” book. For the following C code:

struct vlc_object_t {
    const char   *object_type;
    char         *header;
    int           flags;
    bool          force;
    libvlc_int_t *libvlc;
    vlc_object_t *parent;
};

You would get the following Rust structure:

extern crate libc;
use libc::{c_char, c_int};

#[repr(C)]
pub struct vlc_object_t {
  pub psz_object_type: *const c_char,
  pub psz_header:      *mut c_char,
  pub i_flags:         c_int,
  pub b_force:         bool,
  pub p_libvlc:        *mut libvlc_int_t,
  pub p_parent:        *mut vlc_object_t,
}

The #[repr(C)] attribute tells the compiler that the structure should have a memory layout similar to the one generated by a C compiler. We import types from the libc crate, like c_char. Those types are platform dependent (with the differences already handled in libc). Here, we use a lot of raw pointers (indicated by *), which means that by using this structure directly, we’re basically writing C, which is no good! A better approach, as we’ll see later, is to write safer wrappers above those C bindings.

Importing C functions is quite straightforward too:

ssize_t  vlc_stream_Peek(stream_t *, const uint8_t **, size_t);
ssize_t  vlc_stream_Read(stream_t *, void *buf, size_t len);
uint64_t vlc_stream_Tell(const stream_t *);

These C function declarations would get translated to:

use libc::{c_void, size_t, ssize_t, uint64_t, uint8_t};

#[link(name = "vlccore")]
extern {
  pub fn vlc_stream_Peek(stream: *mut stream_t, buf: *mut *const uint8_t, size: size_t) -> ssize_t;
  pub fn vlc_stream_Read(stream: *mut stream_t, buf: *mut c_void, size: size_t) -> ssize_t;
  pub fn vlc_stream_Tell(stream: *const stream_t) -> uint64_t;
}

The #[link(name = "vlccore")] attribute indicates which library we are linking to. It is equivalent to passing a -lvlccore argument to the linker. libvlccore is a library all VLC plugins must link to. Those functions are declared like regular Rust functions but, like the previous structures, they mainly work on raw pointers.

bindgen

You can always write all your bindings manually like this, but when the amount of code to import is large, it can be a good idea to employ the awesome bindgen tool, which generates Rust code from C headers.

It can work as a command line tool, but can also work at compile time from a build script. First, add the dependency to your Cargo.toml file:

[build-dependencies.bindgen]
version = "^0.25"

You can then write your build script like this:

extern crate bindgen;
use std::fs::File;
use std::io::Write;
use std::path::Path;

fn main() {
  let include_arg = concat!("-I", env!("INCLUDE_DIR"));
  let vlc_common_path = concat!(env!("INCLUDE_DIR"), "/vlc_common.h");

  let _ = bindgen::builder()
    .clang_arg(include_arg)
    .clang_arg("-include")
    .clang_arg(vlc_common_path)
    .header(concat!(env!("INCLUDE_DIR"), "/vlc_block.h"))
    .hide_type("vlc_object_t")
    .whitelist_recursively(true)
    .whitelisted_type("block_t")
    .whitelisted_function("block_Init") 
    .raw_line("use ffi::common::vlc_object_t;")
    .use_core()
    .generate().unwrap()
    .write_to_file("src/ffi/block.rs");
}

So there’s a lot to unpack here, because bindgen is very flexible:

  • we use clang_arg to pass the include folder path and pre-include a header everywhere (vlc_common.h is included pretty much everywhere in VLC)
  • the header method specifies the header from which we will import definitions
  • hide_type prevents redefinition of elements we already defined (like the ones from the common header)
  • whitelisted_type and whitelisted_function specify types and functions for which bindgen will create definitions
  • raw_line writes its argument at the top of the file. I apply it to reuse definitions from other files
  • write_to_file writes the whole definition to the specified path

You can apply that process to any C header you must import. With the build script, it can run every time the library is compiled, but be careful: generating a lot of headers can take some time. It might be a good idea to pregenerate them, commit the generated files, and update them from time to time.

It is usually a good idea to separate the imported definitions in another crate with the -sys suffix, and write the safe code in the main crate.
As an example, see the crates openssl and openssl-sys.

Writing safe wrappers

Previously, we imported the C function ssize_t vlc_stream_Read(stream_t *, void *buf, size_t len) as the Rust version pub fn vlc_stream_Read(stream: *mut stream_t, buf: *mut c_void, size: size_t) -> ssize_t, but kept an unsafe interface. Since we want to use those functions safely, we can now write a better wrapper:

use ffi;

pub fn stream_Read(stream: *mut stream_t, buf: &mut [u8]) -> ssize_t {
  unsafe {
    ffi::vlc_stream_Read(stream, buf.as_mut_ptr() as *mut c_void, buf.len())
  }
}

Here we replaced the raw pointer to memory and the length with a mutable slice. We still use a raw pointer to the stream_t instance; maybe we can do better:

use ffi;

pub struct Stream(*mut stream_t);

pub fn stream_Read(stream: Stream, buf: &mut [u8]) -> ssize_t {
  unsafe {
    ffi::vlc_stream_Read(stream.0, buf.as_mut_ptr() as *mut c_void, buf.len())
  }
}
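The call site then looks like regular Rust (a hypothetical example; the surrounding demuxer code that provides stream is omitted):

// read the beginning of the stream into a fixed-size buffer
let mut header = [0u8; 16];
let count = stream_Read(stream, &mut header);
if count < 16 {
  // short read: not enough data available yet
}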

Be careful if you plan to implement Drop for this type: is the Rust code supposed to free that object? Is there some reference counting involved? Here is an example of Drop implementation from the openssl crate:

pub struct SslContextBuilder(*mut ffi::SSL_CTX);

impl Drop for SslContextBuilder {
    fn drop(&mut self) {
        unsafe { ffi::SSL_CTX_free(self.as_ptr()) }
    }
}

Remember that it’s likely the host application has a lot of infrastructure to keep track of memory, and as a rule, we should reuse the tools it offers for the code at the interface between Rust and C. See the Rust FFI omnibus for more examples of safe wrappers you can write.

Side note: as of now (2017/07/10) custom allocators are still not stable

Exporting Rust code to be called from C

Since the host application is written in C, it might need to call your code. This is quite easy in Rust: you need to write unsafe wrappers.

As an example, here we will use the inverted index library for mobile apps I wrote for a conference. In this library, we have an Index type that we want to use from Java. Here is its definition:

#[repr(C)]
pub struct Index {
  pub index: HashMap<String, HashSet<i32>>,
}

This type has a few methods we want to expose:

impl Index {
  pub fn new() -> Index {
    Index {
      index: HashMap::new(),
    }
  }

  pub fn insert(&mut self, id: i32, data: &str) {
    [...]
  }

  pub fn search_word(&self, word: &str) -> Option<&HashSet<i32>> {
    self.index.get(word)
  }

  pub fn search(&self, text: &str) -> HashSet<i32> {
    [...]
  }
}

First, we need to write the functions to allocate and deallocate our index. Every instance used from C will be wrapped in a Box:

#[no_mangle]
pub extern "C" fn index_create() -> *mut Index {
  Box::into_raw(Box::new(Index::new()))
}

The Box type represents an owned heap allocation. When the box is dropped, the underlying data is dropped as well and the memory is freed. The following function takes back ownership of its argument, so the index is dropped at the end:

#[no_mangle]
pub extern "C" fn index_free(ptr: *mut Index) {
    let _ = unsafe { Box::from_raw(ptr) };
}

Now that allocation is handled, we can work on a real method. The following function takes an index, an id for a text, and the text itself, as a C string (i.e., terminated by a null character).

Since we’re kinda writing C in Rust here, we first have to check whether the pointers are null. Then we can transform the C string into a slice, and check that it is correctly encoded as UTF-8 before inserting it into our index.

#[no_mangle]
pub extern "C" fn index_insert(index: *mut Index, id: i32, raw_text: *const c_char) {
  if index.is_null() || raw_text.is_null() {
    return;
  }
  let slice = unsafe { CStr::from_ptr(raw_text).to_bytes() };
  if let Ok(text) = str::from_utf8(slice) {
    // the pointer is non-null, but we still trust the caller
    // to pass a valid Index
    unsafe { (*index).insert(id, text) };
  }
}

Most of the code for those kinds of wrappers is just there to transform between C and Rust types and checking that the arguments coming from C code are correct. Even if we have to trust the host application, we should program defensively at the boundary.

There are other methods we could implement for the index, we’ll leave those as exercise for the reader 🙂

Now, we need to write the C definitions to import those functions and types:

typedef struct Index Index;

Index* index_create();
void   index_free(Index* index);
void   index_insert(Index* index, int32_t id, char const* raw_text);

We defined Index as an opaque type here. Since Rust structures can be compatible with C structures, we could export the real type, but since it only contains a Rust specific type, HashMap, it is better to hide it completely and write accessors and wrappers.
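For instance, instead of exposing the HashMap, we can export an accessor that keeps Index opaque (a hypothetical example; index_word_count is not part of the library):

#[no_mangle]
pub extern "C" fn index_word_count(index: *const Index) -> i32 {
  // the C side only ever sees an opaque pointer, never the HashMap
  if index.is_null() {
    return -1;
  }
  unsafe { (*index).index.len() as i32 }
}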

Generating bindings with rusty-cheddar

Writing function imports from C to Rust is tedious, so we have bindgen for that. We also have a great tool to go the other way: rusty-cheddar.

In the same way, it can be used from a build script:

extern crate cheddar;

fn main() {
  cheddar::Cheddar::new().expect("could not read definitions")
    .run_build("include/main.h");
  cheddar::Cheddar::new().expect("could not read definitions")
    .module("index").expect("malformed module path")
    .insert_code("#include \"main.h\"")
    .run_build("include/index.h");
}

Here we run rusty-cheddar a first time without specifying the module: it defaults to generating a header for the definitions in src/lib.rs. The second run specifies a different module, and inserts a file inclusion at the top.

It can be a good idea to commit the generated headers, since you will see immediately if you changed the interface in a breaking way.

Integrating with the build system

As you might know, we can make dynamic libraries and executables with rustc and cargo. But often, the host application will have its own build system, and it might disagree with the way cargo builds its projects. So we have multiple strategies:

  • build Rust code separately, store libraries and headers in Maven or something (don’t laugh, I’ve worked with such a system once, and it was actually great)
  • try to let rustc build dynamic libraries from inside the build system. We tried that for VLC and it was not great at all
  • build a static library from inside or outside the build system, and include it in the list of libraries at link time. This was done in Rusticata
  • build an object file and let the build system link it. This is what we ended up doing with VLC

Building a static library is as easy as specifying crate-type = ["staticlib"] in your Cargo.toml file. To build an object file, use the command cargo rustc --release -- --emit obj. You can see how we added it to the autotools usage in VLC.
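For the static library case, that is a one-line change in Cargo.toml (the library name here is just an example):

[lib]
name = "my_plugin"          # hypothetical library name
crate-type = ["staticlib"]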

Unfortunately, for this part we still do not have automated ways to fix the issues. Maybe with some time, people will write scripts for autotools,
CMake and others to handle Rust and Cargo.

Side note on reproducible builds: if you want to pin the set of Rust dependencies used in your project and make them always available, you can use cargo-vendor to store them in a specific folder.

As you might have guessed, this is the most complex part, for which I have no good generic answer. I’d recommend that you spend the most time on this during the project’s prototyping phase: import very little C code, export very little Rust code, try to make it build entirely from within the host application’s build system. Once this is done, extending the project will get much easier. You really don’t want to discover this task at the end of your project and try to retrofit your code in there.

Going further

While this article just explores the surface of Rust rewrites, I hope it provides a good starting point on the tools and techniques you can apply.
Any rewrite will be a large and complex project, but the result is worth the effort. The code you will write will be stronger, and Rust’s type system will force you to review the assumptions made in the C version. You might even find better ways to write it once you start refactoring your code in a more Rusty way, safely hidden behind your wrappers.

PoC: using LLVM’s profile guided optimization in Rust


What does profile-guided optimization mean?

Some languages have a JIT (Just In Time) compiler available at runtime, that can optimize the executed code depending on current execution patterns. This is, in large part, the cause of the performance of Lua and the JVM. They can start a bit slow, but by accumulating information on actual running code, they make it faster and faster for the current load. PfLua is a great example: the firewall rules are optimized again and again, until the current network traffic is handled as quickly as possible.

When you use other languages, such as C, you usually cannot optimize the application once it is compiled. Except when you use an optimization technique known as Profile-Guided Optimization. From Wikipedia:

Profile-guided optimization (PGO, sometimes pronounced as pogo), also known as profile-directed feedback (PDF), is a compiler optimization technique in computer programming that uses profiling to improve program runtime performance.

It relies on profiling the compiled application while it runs with the expected, real world load (web traffic, calculations, etc.), and feeding this profiling information back to the compiler. On the next build, the compiler will have more information on which parts of the program are less used, which branches are taken more often, the expected values in a range, etc. Instead of guessing how the program will behave in order to choose optimizations, the compiler has true information, and can optimize more precisely. There’s one issue with the process: you need two compilations and a profiling run to generate the final executable. But it gets easier once you automate it, as we can see in the Firefox build process.

PGO in LLVM

While it has been available in other systems for a long time (Visual Studio 2005, the Intel compiler ICC for Itanium), it appeared only recently in LLVM. It has since been applied successfully to Xcode (Objective-C, Swift) and LDC, the D compiler.

LLVM has a great feature: it uses an Intermediate Representation code (IR), which is a kind of high level assembly language. It applies its optimizations and machine code generation to that representation. If you make a compiler for a new language, targeting the LLVM IR will give you these features (nearly) for free.

In practice, compiler frontends choose which features they use, so you may not access everything LLVM has to offer. In particular, the Rust compiler, as of now (April 2016), provides a llvm-args option, but that option filters what you can send to LLVM, so we cannot use PGO here.

PGO in Rust

Still, with rustc, you can directly generate the IR, or its binary encoding, named bitcode:

rustc --emit llvm-bc main.rs
# or, with cargo:
cargo rustc -- --emit llvm-bc

The approach I tried here is to take that bitcode, and manually apply LLVM’s transformations until I get a compiled executable. This is not really usable for now, especially because I chose an example with very few dependencies. With more dependencies, the compilation and linking will get more complex and unmanageable manually.

LLVM comes with a few commands that you can use to build code manually. The first one is opt, and it applies optimizations and instrumentation on the bitcode file (here, the file target/release/pgo.bc):

opt-3.8 -O2 -pgo-instr-gen -instrprof target/release/pgo.bc -o pgo.bc

The new bitcode file contains code to profile the end application (mainly by counting how often we use each code path). We can now convert that bitcode file to an object file, and link it using clang:

llc-3.8 -O2 -filetype=obj pgo.bc
clang-3.8 -O2 -flto -fprofile-instr-generate pgo.o -L/usr/local/lib/rustlib/x86_64-apple-darwin/lib -lstd-ca1c970e -o pgo

Note: I built my own rustc from source, so your libstd file may not have the same hash. Since Rust (as of April 2016) uses LLVM 3.7, we can use LLVM 3.8’s PGO features, since the bitcode format is apparently backward compatible. I use OS X, and Homebrew’s LLVM 3.8 has compilation issues, so I needed to build the compiler runtime from source. It’s a proof of concept, not production code 😉

We will now run the program we just built, preferably with production data and traffic. It will automatically generate a default.profraw file containing the profiling information. This file must be transformed with llvm-profdata into a format that opt will understand:

llvm-profdata-3.8 merge -output=pgo.profdata default.profraw

This .profdata file will now be used in the compilation steps:

opt-3.8 -O2 -pgo-instr-use -pgo-test-profile-file=pgo.profdata target/release/pgo.bc -o pgo-opt.bc
llc-3.8 -O2 -filetype=obj pgo-opt.bc
clang-3.8 -O2 -flto -fprofile-instr-use=pgo.profdata pgo-opt.o -L/usr/local/lib/rustlib/x86_64-apple-darwin/lib -lstd-ca1c970e -o pgo-opt

We now have an executable compiled using profiling information. Is it fast?

The benchmarks

The program I tested is a n-body simulation. It was a great test target since libstd is the only dependency, and the load factor depends on a number given as command line argument. Here is a test with time (I know it’s not the most precise benchmarking tool, but for a tenth of second precision, it works alright):

$ time ./target/release/pgo 1000000000
-0.169075164
-0.169051540

real    1m22.528s
user    1m22.214s
sys     0m0.173s

$ time ./pgo-opt 1000000000
-0.169075164
-0.169051540

real    1m9.810s
user    1m9.687s
sys     0m0.070s

As it turns out, we gain nearly 15% in running time on this program. Other examples could have less impact, but this is encouraging! So, what happened inside our program?

The generated code

I provide assembly dumps of the normal program, generated with cargo build --release, and of the one optimized with PGO. Mostly, the code has been reordered, probably to fit better in cache lines. You can also consult PDF files with call graphs: normal, PGO optimized.

The whole code for this article is available here if you want to reproduce the results or tinker with optimizations yourself.

This is a proof of concept, demonstrating that profile guided optimization could work in Rust. It is probably worthy of integration into rustc, but there’s a lot of work before it could be usable. Still, there’s a GitHub issue where you can weigh in, if you would like this optimization in your applications.


Frustrating communication

I’m getting less and less satisfied with Twitter for exchanging thoughts. The 140-character limit is not the obvious problem, since you can chain messages easily. The issue is that those thoughts are ephemeral. This medium does not optimize for smart discussion with relevant people, but for quick wit from currently available people, before being dumped under a stack of comments on the latest news. Retweeting does not help much, since the primary reasons for retweeting are 1. it’s funny, 2. it’s shocking, 3. it’s inspiring, and last, “maybe it’s interesting”. Those don’t create much discussion.

Until now, I have primarily used this blog for long posts (thus explaining why I don’t write much here). As my friends say “if it’s more than 3 tweets, write a blog post”.

So in the following months, I’ll try to post short, not well researched but spontaneous articles, instead of ranting in 140 characters.

A world without certificate authorities

When networks began to expand and people saw the need for secure communication, they designed complex systems based on public key cryptography, that worked more or less. Problem: how do you trust that the key a server sent you is the right one? How can you make sure that it is not somebody else trying to impersonate that website?

Multiple solutions were proposed, and the most promising was a public directory of domain names and associated public keys, maintained by a peer to peer network named KeyCoin. It looked better than the so-called Web Of Trust solutions, because everybody could agree on what the correct key for a given domain was. As long as nobody held 51% of the network, no change could happen without being validated by a lot of different peers. The network was maintained by 10,000 enthusiast system administrators who took their task very seriously (after all, the security of the whole system depended on their honesty), and nobody had enough computing power to take over the network.

After a while, people began using the system, since it was directly integrated in their browsers, but they did not want to run a node on the network themselves. It was too bothersome, and they could trust the administrators. Also, they had to ask one of them every time they needed a change. The whole process was a bit artisanal.

In the meantime, some people demonstrated the 51% attack on networks of reduced size, and that worried people. They wanted a safe system, one that did not rely only on those sysadmins who could do anything. Who were they anyway? Still, running a node was too complex for non-technical people to do themselves, so they did not worry enough. But some governments found that rewriting the truth of name/key matching was interesting. Maybe to catch pedophiles, terrorists, criminals. Or maybe to censor websites, I do not know, they told me it was for my own good.

Some smart person found a good solution: if controlling the whole system necessitated owning 51% of the system, the easiest way was to have a lot of machines, enough to counteract the sysadmins. That did not seem risky when people designed the system. Nobody could have enough computing power to take over the whole network, and there would be even more nodes every day.

Yet, that person got enough funding to install tens of thousands of machines and make them join the network. They even provided a nice enough interface for people and businesses to input their domain name and public key, as long as they paid some fees. The sysadmins welcomed them at first, since money coming into the system validated their ideas. After a while, they started worrying, since none of them could keep up with the computing power, but that company assured them it would never attain 51% of the network.

Other companies jumped on the bandwagon and started to profit from that new business opportunity. Governments started their own server farms to participate too. Problem: now that everybody (except the sysadmins) had a lot of computing power, nobody had enough to control the network entirely.

So they started making alliances. If a few major players work as a team, they can do whatever they want on the network. If one of them decided to try and replace a key on the ledger, others could help it. Of course, once they begun doing that, others wanted to participate. So they created a few rules to join their club. First, you needed to have enough machines. That was a good rule, because that made a big barrier to entry. You could not start as a small player. The other rules? You had to submit to an audit, performed by the other players. Yet another barrier to entry. And once they deemed you acceptable, you had to follow the requests of governments, which were arbitrarily refusing candidates.

Even with the big barriers to entry, a few hundred players came up, often backed by governments. Of course, all ended up in the same team, doing whatever they wanted, as long as nobody was complaining, because anytime one of them had something shady to do, all of them followed automatically.

Since building those big companies required money, they made their clients pay more and more, and to sweeten the deal, provided “premium” options to show they trusted you more, since they took the time to phone your company and ask a few questions.

Some found that big system too centralized, too obedient to states, and decided to fork it. There are separate public ledgers, but they do not come directly embedded in browsers; you need to integrate them yourself, and that’s bothersome. Also, most of those networks have a few hundred nodes at best.

From a nice, decentralized, home made system, we ended up with a centralized system controlled by corporations and governments.

Now let me tell you about that system I designed. It is based on a concept named certificate, a cryptographically signed file that links a public key to a domain name. Now here’s the catch: a certificate represents a key, and is signed by another key, which is represented by another certificate, and so on and so forth, until a certificate that signs itself. That system is good, because you just have to embed the root certificate that your friend gives you, and you’ll be able to verify the keys of their websites, even if those keys change. And all this without even asking the public ledger, so it is a truly decentralized and more anonymous system! Nothing could go wrong with that, right?

Programming VS Mathematics, and other pointless debates

I do not know who started this argument a few days ago. It feels like something coming from HN. Do you need to know mathematics to be a good programmer?

There are a lot of differing opinions. Maybe programming is a subbranch of mathematics, or programming merely uses mathematics. Or learning programming is closer to learning a new language. For me, saying that programming is about languages is like saying that literature is about languages. Sure, you need words to express concepts, some languages are better suited than others for that, and some concepts are better expressed in other languages. It is more like a hierarchy to me: philosophy formalizes concepts used by authors to write in common languages. Mathematics formalizes concepts used by programmers to create code in common languages.
But this is beside the point.

This debate sparks outrage, since it touches a central point of our education, and one that is often not taught very well. “Look, I do not use geometry while writing a loop, so maths are pointless for me.” A lot of developers will never learn basic algebra or logic and will never need them in their day job. And that’s okay.
Programming is not a single profession anymore. Each and every one of us has a different definition of it. A mechanical engineer working on bridges, another working on metallic parts for cars, and another one on plastic toys all have different needs and different techniques for their jobs, although the fundamental basis (evaluating breaking strength, time of assembly, production costs) is the same. That does not make one of these jobs worth more than the others.

The real problem is that we are still fighting among ourselves to define what our job is. The other pointless debate, about software being engineering, science or craft, is evidence of that. And it will stay hard to define for a long time.
We are in a unique position. Usually, when a new field emerges, either tinkerers are launching it and later, good practices are studied to make it engineering, or scientists create it, then means of production become cheaper and crafters take over.
Computers were started by scientists, but the ease of access gave crafters a good opportunity to take over. But that does not mean research stopped when people started coding at home. So now, in a relatively new field (less than a century), while we are still exploring, we have a very large spectrum of jobs and approaches, from the most scientific to the most artistic kind. And that is okay. More world views will help us get better at our respective jobs.

So, while you are arguing that the other side is misguided, irrealistic or unrigorous, take time to consider where they come from. They do not have the same job, and that job can seem pointless to you, but they can be good at it, so there is probably something good you can learn from their approach. The only thing you should not forgive from the other side is the lack of curiosity.