How to rewrite your project in Rust

In a previous post, I explained why rewriting existing software in Rust can be a good idea. The main point was that you should not rewrite the whole application, but replace its weaker parts without disturbing most of the code, strengthening the codebase without disruption.

I also provided pointers to projects where other people and I did it successfully, but without giving too many details. So let’s get a real introduction to Rust rewrites now. This article requires a little bit of knowledge about Rust, but you should be able to follow it even as a beginner.

As a reminder, here are the benefits Rust brings to a rewrite:

  • it can easily call C code
  • it can easily be called by C code (it can export C compatible functions and structures)
  • it does not need a garbage collector
  • if you want, it does not even need to handle allocations
  • the Rust compiler can produce static and dynamic libraries, and even object files
  • the Rust compiler avoids most of the memory vulnerabilities you get in C (yes, I had to mention it)
  • Rust is easier to maintain than C (this is debatable, but not the point of this article)

As it turns out, this is more or less the plan to replace C code with Rust:

  • import C structures and functions in Rust
  • import Rust structures and functions from C
  • reuse the host application’s memory allocations whenever possible
  • write code (yes, we have to do it at some point)
  • produce artefacts that can be linked with the host application
  • integrate with the build system

We’ll see how to apply this with examples from the Rust VLC plugin.

Import C structures and functions in Rust

Rust can easily use C code directly, by writing function and structure definitions. A lot of the techniques you would use for this come from the “unsafe Rust” chapter of “The Rust Programming Language” book. For the following C code:

struct vlc_object_t {
    const char   *object_type;
    char         *header;
    int           flags;
    bool          force;
    libvlc_int_t *libvlc;
    vlc_object_t *parent;
};

You would get the following Rust structure:

extern crate libc;
use libc::{c_char, c_int};

#[repr(C)]
pub struct vlc_object_t {
  pub psz_object_type: *const c_char,
  pub psz_header:      *mut c_char,
  pub i_flags:         c_int,
  pub b_force:         bool,
  pub p_libvlc:        *mut libvlc_int_t,
  pub p_parent:        *mut vlc_object_t,
}

The #[repr(C)] attribute tells the compiler to lay out the structure in memory the same way a C compiler would. We import types like c_char from the libc crate; those types are platform dependent, and libc handles the per-platform definitions for us. Note that we use a lot of raw pointers (indicated by *): using this structure directly means we are basically writing C in Rust, which is no good! A better approach, as we’ll see later, is to write safe wrappers on top of those C bindings.

Importing C functions is quite straightforward too:

ssize_t  vlc_stream_Peek(stream_t *, const uint8_t **, size_t);
ssize_t  vlc_stream_Read(stream_t *, void *buf, size_t len);
uint64_t vlc_stream_Tell(const stream_t *);

These C function declarations would get translated to:

use libc::{c_void, size_t, ssize_t, uint8_t, uint64_t};

#[link(name = "vlccore")]
extern {
  pub fn vlc_stream_Peek(stream: *mut stream_t, buf: *mut *const uint8_t, size: size_t) -> ssize_t;
  pub fn vlc_stream_Read(stream: *mut stream_t, buf: *mut c_void, size: size_t) -> ssize_t;
  pub fn vlc_stream_Tell(stream: *const stream_t) -> uint64_t;
}

The #[link(name = "vlccore")] attribute indicates which library we are linking against. It is equivalent to passing a -lvlccore argument to the linker. libvlccore is a library all VLC plugins must link to. Those functions are declared like regular Rust functions but, like the previous structures, they mainly work on raw pointers.

bindgen

You can always write all your bindings manually like this, but when the amount of code to import is a bit large, it can be a good idea to employ the awesome bindgen tool, which will generate Rust code from C headers.

It can work as a command line tool, but can also work at compile time from a build script. First, add the dependency to your Cargo.toml file:

[build-dependencies.bindgen]
version = "^0.25"

You can then write your build script like this:

extern crate bindgen;
use std::fs::File;
use std::io::Write;
use std::path::Path;

fn main() {
  let include_arg = concat!("-I", env!("INCLUDE_DIR"));
  let vlc_common_path = concat!(env!("INCLUDE_DIR"), "/vlc_common.h");

  let _ = bindgen::builder()
    .clang_arg(include_arg)
    .clang_arg("-include")
    .clang_arg(vlc_common_path)
    .header(concat!(env!("INCLUDE_DIR"), "/vlc_block.h"))
    .hide_type("vlc_object_t")
    .whitelist_recursively(true)
    .whitelisted_type("block_t")
    .whitelisted_function("block_Init") 
    .raw_line("use ffi::common::vlc_object_t;")
    .use_core()
    .generate().unwrap()
    .write_to_file("src/ffi/block.rs");
}

So there’s a lot to unpack here, because bindgen is very flexible:

  • we use clang_arg to pass the include folder path and pre-include a header everywhere (vlc_common.h is included pretty much everywhere in VLC)
  • the header method specifies the header from which we will import definitions
  • hide_type prevents redefinition of elements we already defined (like the ones from the common header)
  • whitelisted_type and whitelisted_function specify types and functions for which bindgen will create definitions
  • raw_line writes its argument at the top of the file. I use it to reuse definitions from other files
  • write_to_file writes the whole definition to the specified path

You can apply that process to any C header you need to import. With the build script, the generation can run every time the library is compiled, but be careful: generating a lot of bindings can take some time. It might be a good idea to pregenerate them, commit the generated files, and update them from time to time.

It is usually a good idea to separate the imported definitions into another crate with the -sys suffix, and write the safe code in the main crate.
As an example, see the openssl and openssl-sys crates.
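As a rough sketch (the crate name and versions here are hypothetical), the manifest of such a -sys crate could look like this:

# hypothetical manifest for a bindings-only crate; the safe wrappers
# would live in a separate crate depending on this one
[package]
name = "vlccore-sys"
version = "0.1.0"
build = "build.rs"
links = "vlccore"      # tells Cargo which native library these bindings cover

[dependencies]
libc = "0.2"

[build-dependencies]
bindgen = "^0.25"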

Writing safe wrappers

Previously, we imported the C function ssize_t vlc_stream_Read(stream_t *, void *buf, size_t len) as the Rust declaration pub fn vlc_stream_Read(stream: *mut stream_t, buf: *mut c_void, size: size_t) -> ssize_t, but kept an unsafe interface. Since we want to use those functions safely, we can now write a better wrapper:

use ffi;

pub fn stream_Read(stream: *mut stream_t, buf: &mut [u8]) -> ssize_t {
  unsafe {
    ffi::vlc_stream_Read(stream, buf.as_mut_ptr() as *mut c_void, buf.len())
  }
}

Here we replaced the raw pointer to memory and the length with a mutable slice. We still use a raw pointer to the stream_t instance, maybe we can do better:

use ffi;

pub struct Stream(*mut stream_t);

pub fn stream_Read(stream: Stream, buf: &mut [u8]) -> ssize_t {
  unsafe {
    ffi::vlc_stream_Read(stream.0, buf.as_mut_ptr() as *mut c_void, buf.len())
  }
}

Be careful if you plan to implement Drop for this type: is the Rust code supposed to free that object? Is there some reference counting involved? Here is an example of a Drop implementation from the openssl crate:

pub struct SslContextBuilder(*mut ffi::SSL_CTX);

impl Drop for SslContextBuilder {
    fn drop(&mut self) {
        unsafe { ffi::SSL_CTX_free(self.as_ptr()) }
    }
}

Remember that it’s likely the host application has a lot of infrastructure to keep track of memory, and as a rule, we should reuse the tools it offers for the code at the interface between Rust and C. See the Rust FFI omnibus for more examples of safe wrappers you can write.
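As an illustration of that rule, here is a sketch (not code from the actual plugin) of a wrapper that hands a host-allocated buffer back to VLC through the host’s own release function. It assumes a block_Release binding was generated alongside block_Init:

use ffi;

// Sketch: wrap a host-allocated block_t and give it back to the host on Drop,
// instead of letting Rust's allocator touch memory it does not own.
pub struct Block(*mut ffi::block_t);

impl Drop for Block {
  fn drop(&mut self) {
    // assumes the block_Release binding was generated by bindgen along with block_Init
    unsafe { ffi::block_Release(self.0) }
  }
}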

Side note: as of now (2017/07/10) custom allocators are still not stable

Exporting Rust code to be called from C

Since the host application is written in C, it might need to call your code. This is quite easy in Rust: you write small wrapper functions exposed with a C-compatible ABI, marked with extern "C" and #[no_mangle].

As an example, we will use the inverted index library for mobile apps that I wrote for a conference. In this library, we have an Index type that we want to use from Java (through a C interface). Here is its definition:

#[repr(C)]
pub struct Index {
  pub index: HashMap<String, HashSet<i32>>,
}

This type has a few methods we want to expose:

impl Index {
  pub fn new() -> Index {
    Index {
      index: HashMap::new(),
    }
  }

  pub fn insert(&mut self, id: i32, data: &str) {
    [...]
  }

  pub fn search_word(&self, word: &str) -> Option<&HashSet<i32>> {
    self.index.get(word)
  }

  pub fn search(&self, text: &str) -> HashSet<i32> {
    [...]
  }
}

First, we need to write the functions to allocate and deallocate our index. The instance used from C will be wrapped in a Box and handed over as a raw pointer.

#[no_mangle]
pub extern "C" fn index_create() -> *mut Index {
  Box::into_raw(Box::new(Index::new()))
}

The Box type represents (and owns) a heap allocation. When the box is dropped, the underlying data is dropped as well and the memory is freed. The following function rebuilds a Box from the raw pointer and takes ownership of it, so the index is dropped at the end of the function.

#[no_mangle]
pub extern "C" fn index_free(ptr: *mut Index) {
    if ptr.is_null() { return }
    let _ = unsafe { Box::from_raw(ptr) };
}

Now that allocation is handled, we can work on a real method. The following function takes an index, an id for a text, and the text itself, as a C string (ie, terminated by a null character).

Since we’re kinda writing C in Rust here, we first have to check that the pointers are not null. Then we can transform the C string into a byte slice, and check that it is correctly encoded as UTF-8 before inserting it into our index.

#[no_mangle]
pub extern "C" fn index_insert(index: *mut Index, id: i32, raw_text: *const c_char) {
  if index.is_null() || raw_text.is_null() { return }
  let slice = unsafe { CStr::from_ptr(raw_text).to_bytes() };
  if let Ok(text) = str::from_utf8(slice) {
    unsafe { (*index).insert(id, text) };
  }
}

Most of the code in those kinds of wrappers is just there to transform between C and Rust types, and to check that the arguments coming from C code are valid. Even if we have to trust the host application, we should program defensively at the boundary.

There are other methods we could implement for the index; most of them are left as an exercise for the reader 🙂
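As one more illustration of the pattern (this wrapper is hypothetical and not part of the original library), a simple query could be exposed like this:

// hypothetical example: expose a simple "is this word indexed?" query to C
#[no_mangle]
pub extern "C" fn index_contains_word(index: *const Index, raw_word: *const c_char) -> bool {
  if index.is_null() || raw_word.is_null() { return false }
  let slice = unsafe { CStr::from_ptr(raw_word).to_bytes() };
  match str::from_utf8(slice) {
    Ok(word) => unsafe { (*index).search_word(word).is_some() },
    Err(_)   => false,
  }
}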

Now, we need to write the C definitions to import those functions and types:

typedef struct Index Index;

Index* index_create();
void   index_free(Index* index);
void   index_insert(Index* index, int32_t id, char const* raw_text);

We defined Index as an opaque type here. Since Rust structures can be compatible with C structures, we could export the real type, but since it only contains a Rust specific type, HashMap, it is better to hide it completely and write accessors and wrappers.

Generating bindings with rusty-cheddar

Writing function imports from C to Rust is tedious, so we have bindgen for this. We also have a great tool to go the other way: rusty-cheddar.

In the same way, it can be used from a build script:

extern crate cheddar;

fn main() {
  cheddar::Cheddar::new().expect("could not read definitions")
    .run_build("include/main.h");
  cheddar::Cheddar::new().expect("could not read definitions")
    .module("index").expect("malformed module path")
    .insert_code("#include \"main.h\"")
    .run_build("include/index.h");
}

Here we run rusty-cheddar a first time without specifying the module: it defaults to generating a header from the definitions in src/lib.rs.
The second run specifies a different module, and inserts a file inclusion at the top.

It can be a good idea to commit the generated headers, since you will see immediately if you changed the interface in a breaking way.

Integrating with the build system

As you might know, we can make dynamic libraries and executables with rustc and cargo. But often, the host application will have its own build system, and it might disagree with the way cargo builds its projects. So we have multiple strategies:

  • build Rust code separately, store libraries and headers in Maven or something (don’t laugh, I’ve worked with such a system once, and it was actually great)
  • try to let rustc build dynamic libraries from inside the build system. We tried that for VLC and it was not great at all
  • build a static library from inside or outside the build system, and include it with the other libraries at link time. This was done in Rusticata
  • build an object file and let the build system link it. This is what we ended up doing with VLC

Building a static library is as easy as specifying crate-type = ["staticlib"] in your Cargo.toml file. To build an object file, use the command cargo rustc --release -- --emit obj. You can see how we added it to the autotools usage in VLC.
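For reference, the corresponding Cargo.toml section could look like this (the library name is just an example):

[lib]
name = "vlc_rs"               # hypothetical crate name
crate-type = ["staticlib"]    # produces a libvlc_rs.a archive that the host build system can link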

Unfortunately, for this part we still do not have automated ways to fix the issues. Maybe with some time, people will write scripts for autotools,
CMake and others to handle Rust and Cargo.

Side note on reproducible builds: if you want to fix the set of Rust dependencies used in your project and make them always available, you can use cargo-vendor to store them in a specific folder

As you might have guessed, this is the most complex part, for which I have no good generic answer. I’d recommend that you spend the most time on this during the project’s prototyping phase: import very little C code, export very little Rust code, try to make it build entirely from within the host application’s build system. Once this is done, extending the project will get much easier. You really don’t want to discover this task at the end of your project and try to retrofit your code in there.

Going further

While this article just explores the surface of Rust rewrites, I hope it provides a good starting point on the tools and techniques you can apply.
Any rewrite will be a large and complex project, but the result is worth the effort. The code you will write will be stronger, and Rust’s type system will force you to review the assumptions made in the C version. You might even find better ways to write it once you start refactoring your code in a more Rusty way, safely hidden behind your wrappers.


Why you should, actually, rewrite it in Rust

You might have seen those obnoxious “you should rewrite it in Rust” comments here and there.

At every new memory vulnerability in well known software, there’s that one person saying Rust would have avoided the issue. We get it, it’s annoying, and it does not help us grow Rust. This attitude is generally frowned upon in the Rust community. You can’t just show up in someone’s project telling them to rewrite everything.

So, why am I writing this? Why would I try to convince you, now, that you should actually rewrite your software in Rust?

That’s because I have been working on this subject for a long time now:

  • I did multiple talks on it
  • I even co-wrote a paper
  • I did it both as client and personal work

So, I’m committed to this, and yes, I believe you should rewrite some code in Rust. But there’s a right way to do it.

Why rewrite stuff?

Our software systems are built on sand. We got pretty good at maintaining and fixing them over the years, but the cracks are showing. We still have not definitively fixed most of the low level vulnerabilities: stack buffer overflows (yes, those still exist), heap overflows, use after free, double free, off by one; the list goes on. We have some mitigations, like DEP, ASLR, stack canaries, control flow integrity, fuzzing. Large projects with funding, like Chrome, can resort to sandboxing parts of their application. The rest of us can still run those applications inside a virtual machine.

This situation will not improve. There’s a huge amount of old (think 90s), bad quality, barely maintained code that we reuse everywhere endlessly. The good thing with hardware is that at some point, it gets replaced. Software just gets copied again. Worse, with the development of IoT, a lot of the code that ships will never be updated. It’s likely that some of those old libraries will still be there 15 or 20 years from now.

Let’s not shy away from the issue here. Most of those are written in C or C++ (and usually an old version). It is well known that it is hard to write correct, reliable software in those languages. Think of all the security related things you have to keep track of in a C codebase:

  • pointer arithmetic
  • allocations and deallocations
  • data is mutable by default
  • functions return integers to represent pointers or error codes, and errors can be implicitly ignored
  • type casts, overflows and underflows are hard to track
  • buffer bounds in indexing and copying
  • all the undefined behaviours

Of course, some developers can do this work. Of course, there are sanitizers. But it’s an enormous effort to perform everyday for every project.

Those languages are well suited for low level programming, but require extreme care and expertise to avoid most of those issues. And even then, we assume the developers will always be well rested, focused and careful. We’re only humans, after all. Note that in 2017, there are still people claiming that a C developer with sufficient expertise would avoid all those issues. It’s time we put this idea to rest. Yes, some projects can avoid a lot of vulnerabilities, with a team of good developers, frequent code reviews, a restricted set of features, funding, tools, etc. Most projects cannot. And as I said earlier, a lot of the code is not even maintained.

So we have to do something. We must make our software foundations stronger. That means fixing operating systems, drivers, libraries, command line tools, servers, everything. We might not be able to fix most of it today, or the next year, but maybe 10 years from now the situation will have improved.

Unfortunately, we cannot rewrite everything. If you ever attempted to rewrite a project from scratch, you’d know that while you can avoid some of the mistakes you made before, you will probably introduce a lot of regressions and new bugs. It’s also wrong on the human side: if there are maintainers for the projects, they would need to work on the new and old one at the same time. Worse, you would have to teach them the new approach, the new language (which they might not like), and plan for an upgrade to the new project for all users.

This is not doable, and this is the part most people asking for project rewrites in Rust do not understand. What I’m advocating for is much simpler: surgically replace weaker parts but keep most of the project intact.

How

Most of the issues will happen around IO and input data handling, so it makes sense to focus on it. It happens there because that’s where the code manipulates buffers, parsers, and uses a lot of pointer calculations. It is also the least interesting part for software maintainers, since it is usually not where you add useful features, business logic, etc. And this logic is usually working well, so you do not want to replace it. If we could rewrite a small part of an application or library without disrupting the rest of the code, we would get most of the benefits without the issues of a full rewrite. It is the exact same project, with the same interface, same distribution packaging as before, same developer team. We would just make an annoying part of the software stronger and more maintainable.

This is where Rust comes in. It is focused on providing memory safety and thread safety while keeping the code performant and the developer productive. As such, a developer writing basic Rust will generally get safer, more reliable code into production than even a competent, well rested C developer using all the available tools.

Most of the other safe languages have strong requirements, like a runtime and a garbage collector. And usually, they expect to be the host application (how many languages assume they will handle the process’s entry point?). Here, we are guests in someone else’s house. We must integrate nicely and quietly.

Rust is a strong candidate for this because:

  • it can easily call C code
  • it can easily be called by C code (it can export C compatible functions and structures)
  • it does not need a garbage collector
  • if you want, it does not even need to handle allocations
  • the Rust compiler can produce static and dynamic libraries, and even object files
  • the Rust compiler avoids most of the memory vulnerabilities you get in C (yes, I had to mention it)

So you can actually take a piece of C code inside an existing project, import the C structures and functions to access them from Rust, rewrite the code in Rust, export the functions and structures from Rust, compile it and link it with the rest of the project.

If you don’t believe it’s possible, take a look at these two examples:

  • Rusticata integrates Rust parsers written with nom in Suricata, an intrusion detection system
  • a VLC media player plugin to parse FLV files, written entirely in Rust

You get a lot of benefits from this approach. First, Rust has great package management with Cargo and crates.io. That means you can separate some of the work into different libraries. See as an example the list of parsers from the Rusticata project. You can test them independently, and even reuse them in other projects. The FLV parser I wrote for VLC can also work in a Rust GStreamer plugin. You can also make a separate library for the glue with the host application. I’m working on vlc_module exactly for that purpose: making Rust VLC plugins easier to write.

This approach works well for applications with a plugin oriented architecture, but you can also rewrite core parts of an application or library. The biggest issue is the high coupling of C code, but it is usually possible to rewrite it bit by bit by keeping a common interface. Whenever you have rewritten some coupled parts of a project, you can take time to refactor it in a more Rusty way, and leverage the type system to help you. A good example of this is the rewrite of the Zopfli library from C to Rust.

This brings us to another important part of that infrastructure rewrite work: while we can rewrite part of an existing project without being too intrusive, we can also rewrite a library entirely, keeping exactly the same C API. You can have a Rust library, dynamic or static, with the exact same C header, that you could import in a project to replace the C one. This is a huge result. It’s like replacing a load-bearing wall in an existing building. This is not an easy thing to realize, but once it’s done, you can improve a lot of projects at once, provided your distribution’s package manager supports that replacement, or other projects take the time to upgrade.

This is a lot of work, but every time we advance a little, everybody can benefit from it, and it will add up over the years. So we might as well start now.

Currently, I’m focused on VLC. This is a good target because it’s a popular application that’s often part of the basic stack of any computer (browser, office suite, media player). So it’s a big target. But take a look at the list of dependencies in most web applications, or the dependency graph of common distributions. There is a lot of low hanging fruit there.

Now, how would you actually perform those rewrites? You can check out the next post and the paper explaining how we did it in Rusticata and VLC.

The network is the computer, the cluster is the RAM

There is a very weird part of web applications, where all the nice abstractions and syntactic reasoning go wrong: the interface between the code and a database. At best, there is a leaky abstraction of the database with an ORM, and you have to think about which methods to apply to get the underlying SQL query you need; at worst, you write queries and deserialize the results manually.

This happens because at one point, applications needed to manipulate more data than their host’s memory could handle. This required a good abstraction over storage, efficient data walking algorithms and fine tuned caching. This also required thousands of hours of engineering, to get a database that is at least bearable to use. Since so much work was put in those databases, you might as well implement as many features as possible, to reuse all this fine engineering.

To work efficiently with these data warehouses and offload a part of the selection work from the application, query languages inspired by logic programming were invented. Basically, they make it easy to work with relations: entity/attribute/value triplets like RDF, or tabular data. Those query languages are deliberately not Turing complete: they do not include loops, negation or unbounded recursion. This helps a lot in optimizing the queries.

Unfortunately, this query language is the barrier between an application and its data. Instead of reasoning about what is in memory, the code must be transformed to load data from the database through a query, deserialize it, compute, reserialize data and put it in the database. Even worse, for efficiency’s sake, some developers push more and more logic to the database, with even more complex queries, views and stored procedures.

What if we could reason directly on a cluster of data as if it was already in the memory? I do not want to create a structure from a deserialized row, change a value then put that row “where id = $myId”. I want to access a structure that is already there in memory, and change the value directly (or clone it and change my index, but that talk is for another blog post).

“No, you cannot directly access data that is not already in your memory”. Sure I can. We already have powerful tools for that. L1 and L2 caches use that principle to load data from RAM and make it available faster to the CPU. Memory mapped files can be lazily loaded page by page into virtual memory. Imagine loading data lazily from the network, into your address space… Nowadays, we can index data on 64 bits, enough to address the whole world!
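As a small illustration of that mechanism, here is a sketch of lazily mapping a file into the address space in Rust, using the memmap2 crate (any mmap wrapper would do; the file name is a placeholder). The pages are only read when the corresponding addresses are touched:

use std::fs::File;
use memmap2::Mmap;

fn main() -> std::io::Result<()> {
    let file = File::open("data.bin")?;
    // the file is not read here; pages are faulted in lazily on first access
    let map = unsafe { Mmap::map(&file)? };
    // the mapping behaves like a plain byte slice
    println!("length: {}, first byte: {:?}", map.len(), map.get(0));
    Ok(())
}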

“But this totally breaks your security model”. No, it does not. First, most database clusters already assume they’re running on a trusted network. Second, since I see the cluster as a part of my hardware, I think adding an MMU to the lot would work quite well.

“It does not work, because of concurrent accesses”. This already happens in databases, and this is where their powerful query language gets things wrong: if you have a powerful way to access multiple rows at the same time, you have to lock huge parts of the database at once in a transaction to run your mutating query. For virtual memory, we have a lot of interesting tools. Memory pages can be read-write or read-only. Locking through mutexes or Software Transactional Memory could also be implemented at a cluster’s scale. But concurrency is a hard problem, that is often better solved through good data architecture. Immutable data, colocating related data, append-only datastructures, all work as well in memory as on a networked cluster.

This is of course a very big gap to jump, from our traditional databases to a total abstraction in memory, but I think it is an interesting alternative to consider.

There is another model to consider here, one that is currently adopted by large distributed databases: since an application cannot do all the work by just loading data in its memory space, let’s push code onto the data, and run a Turing complete query language on the cluster. This is actually the same kind of model, with worker threads running on your data while you wait for a “work done” message, but you still need to interface your app with the query language. Maybe someday I’ll be able to send a compiled function to run on a cluster.

All in all, the big tools that people built to fight the inefficiencies of yesterday’s technology have to be questioned today. By removing the complex abstractions and their obsolete limitations, we could obtain a powerful and simple model for writing our future code.

Criteria for a crypto app

Following the previous article, people have asked me what I would consider a good secure system, and others asked me to review their app, so I think it will be interesting to explain my process when studying those projects.

Threat modeling

The most important point I look for in a project is the threat model. This is the document that will explain for whom the project was created, who are the adversaries, what they are trying to obtain, and which of these threats you are addressing.

Without that document, I cannot know if you considered all the possible actors, and I must infer it from the protocol, which is relatively easy, but my view of the threat model might not correspond to what you expected.

With a good threat model, I can know right away what is your target market (ex: sexting for teens, or secure reporting for journalists in war environments), see if your users will understand the implications, if it will need training, and more importantly, if your system can be safe for that context.

You cannot create a project and say that it will solve all of the privacy problems with some magical crypto algorithm, against all adversaries, even the state actors. I would prefer a useful tool for a niche with real and well defined needs.

Prior art

As you have probably seen, the secure messaging space is already very crowded. If you come up with a new solution to an already solved problem, you need to justify it. Why didn’t you improve an existing project? Couldn’t you adapt someone else’s code, add a better UI?

The NIH syndrome is at the heart of innovation, so I am not against it. But in the case of crypto applications, it might be a good idea to employ already existing (and already audited) code, instead of writing a whole new protocol or algorithm from scratch.

Otherwise, if you are working on an unsolved problem, or improving on current solutions, be prepared to justify it at length if you employ unusual systems. I am not telling you to avoid fun stuff like Paillier’s cryptosystem, PIR or pairing based cryptography. Just be aware that people will ask you about these.

Publications

That part is fundamental: if you are providing a new protocol or algorithm, you should publish it and ask for review before you start coding and get users. I am not advising you to start up LaTeX and write a paper in ACM format. Just explaining your system on a webpage is fine. The crypto community is full of nice people that will be able to point out if there is any problem (and if you use the academic way of publishing, you might even profit from other people’s funding to get reviews :p).

Some said that the crypto community is full of bitter people eager to hit any new project, following the whole Telegram debacle. That tends to happen when you make a big announcement to get users, telling that it will solve any security problem, and dismiss the opinions of experts, without having asked for review previously.

Note that some of those experts have worked for years on a project before even thinking of communicating about it. As examples, check out Briar, Pond or Cryptosphere: those are quiet but interesting projects. They are not trying to get a lot of users quickly or profit from the post Snowden panic. They have been at it for a long time.

So, publish, ask for review, fix flaws, publish again, fix stuff, and repeat again and again. That is the smartest way to spend your time and money on your project. Once everything is developed and deployed, you will have a hard time trying to plug the holes.

Protocol design

Once we get in the technical stuff, the protocol design is interesting to get a high level view of what you want to achieve. I’ll ask questions like:

  • Is it server centric or P2P? (note: a network of servers introduces routing, but is not P2P)
  • Does it include authentication?
  • Is it encrypted end to end?
  • How do you protect against DoS?
  • Is it versioned? Do you allow for protocol version negotiation? Are the algorithms negotiated?
  • Can you revoke keys or identities?

Often, the protocol shows what you want to achieve with your system, and it often addresses more threats than the crypto algorithms themselves. A good way to present your protocols is to use diagrams and present the message contents.

Do not insist on algorithms at this point: use general words to describe the primitive you need, like authenticated cipher, public key, key derivation function, MAC. You might change the algorithms later, so stating the properties you need will help reviewers understand what you want to achieve.

A specific note on servers vs peer to peer: it is a very understandable feeling for geeks that P2P architectures look better, because they decentralize everything, etc. But they can introduce other problems (like hole punching or sybil attacks), and in some cases, you will not be able to avoid servers (for message routing and retries, for mobile systems, etc). Both types of systems are fine, just be aware of their shortcomings.

Cryptographic constructs

Cryptographic algorithms are not enough, you need to apply them correctly. I will have no pity if you say you use “military grade AES 256 encryption” but do not know what a block cipher mode or Encrypt-Then-MAC is. A lot of ugly details can hide here, so do not try to be clever, use battle tested systems:

  • add a separate authentication layer to Diffie-Hellman key exchanges
  • use an authenticated encryption mode
  • use RSA-OAEP instead of PKCS1 padding
  • know well if you need a nonce, an unpredictable number or a time based ID
  • etc.

This is one of the parts where crypto experts will ask annoying questions, because a lot of bugs come from there. They can also propose better solutions (safer, more performant, etc), so listen to them.

If you are employing an unusual scheme here, be prepared to justify it. It might be ok for you, but if the design looks weird to cryptographers, that will raise alarms. Your scheme could be safe, but if it has never been proven right, you are taking a risk, and your users will take that risk too. Is it worth it? Hint: your weird design should provide a unique property that no other algorithm has.

Choice of algorithms

Yes, I do not worry about algorithms until I am already deep in the system. It is not that hard to make correct choices there. Just keep up with recent attacks (ie, avoid RC4), choose large enough keys, and choose correct elliptic curves.

Every algorithm has parameters that you need to get right, so be sure to document yourself on your algorithm choices:

  • AES-CBC needs an initialization vector, but AES-CTR uses an incremented nonce
  • RSA needs a good exponent
  • Some elliptic curves work better for some operations

Even if you choose dubious algorithms, if your protocol was well designed, you will be able to move to a better algorithm. Be careful with algorithm negotiation, though; a lot of smart people were bitten before.

The implementation

This is probably the part that I will skip, because I do not have the time nor the funding to thoroughly audit the code of every new project. I will often grep a bit through the code, look for some important points, but this is not something that should be done quickly. This is where the protocol review shows its limits.

Even with a good design, a lot of vulnerabilities can be present in a flawed implementation. Crypto projects should undergo a careful audit like the one Least Authority performed recently on Cryptocat. And that is why you should not communicate about your project before it has been reviewed.

There are things you should always look for in your software projects:

  • encrypting data at rest: if you worry about stolen data, know that a mobile phone or laptop can be stolen
  • random number generation: you should use a CSPRNG, with a good source, and probably some user or device specific data
  • data backup: is it possible? is it safe?
  • software updates: are they downloaded from a secure source? Are the updates verified?
  • Do you use public key pinning?
  • How long are the private keys stored as plaintext in memory?

The implementation details are as important as the whole protocol. You can have a good protocol, but a small error in the code could greatly affect your users. Nevertheless, specifying your protocol is useful, because people can provide better implementations, or make it interoperate with other software. Having other implementations is a good thing: you will not control those versions, but they will be able to construct cool stuff around your system, and make a part of your PR.

User interface

This part is more and more important, because we have been able to create safe systems for years, but often at the price of usability. The user experience of crypto apps needs a lot of innovation, and I’ll follow closely any interesting idea in that space: onboarding experience, useful alerts, user decision making, etc. People should be able to understand when there is a security problem.

I’ll state it once more: if you create a new crypto software, you HAVE to make it easy to use and understand. Some complexity is acceptable, but it must be compensated by documentation (with screenshots, etc) or training.

Other criteria

There are two others that I could think of, but they do not matter that much.

The first is the team. I have been accused of making fun of Telegram for waving around their team of PhDs, but the truth is that I was hopeful: a team full of smart people can come up with interesting design and solve complex problems. If they do not deliver on that, I could be less indulgent. That does not mean I will think less of people without big diplomas. I know too many smart people that dropped out of school to make that mistake. Ultimately, the important thing to judge is the design.

The last parameter is attitude. It is normal to be defensive when someone else reviews your work, but that does not justify denial and dishonesty. People are often taking time off of their job to study your system, so they will be quick and get to the point. If you do not answer or refuse to explain your decisions, it will smell fishy. Even more if you did not ask for a review before communicating about your project. But it does not matter that much. If you are humble and quick to answer, people may help you out of good will, but if you anger cryptographers, you may just have won a free thorough audit 😀

 

Telegram, AKA “Stand back, we have Math PhDs!”

Disclaimer: this post is now very old and may not reflect the current state of Telegram’s protocol. There has been other research in the meantime, and this post should not be used for your choice of secure messaging app. That said, on a personal note, I still think Telegram’s cryptosystem is weird, and its justifications are fallacious. If you want a recommendation on secure messaging apps: use a system based on the Axolotl/Signal protocol. It is well designed and has been audited. Signal and WhatsApp are both using that protocol, and there are others.

Here is the second entry in our series about weird encryption apps: Telegram, which got some press recently.

According to their website, Telegram is “cloud based and heavily encrypted”. How secure is it?

Very secure. We are based on a new protocol, MTProto, built by our own specialists, employing time-tested security algorithms. At this moment, the biggest security threat to your Telegram messages is your mother reading over your shoulder. We took care of the rest.

(from their FAQ)

Yup. Very secure, they said it.

So, let’s take a look around.

Available technical information

Their website details the protocol. They could have added some diagrams, instead of text-only, but that’s still readable. There is also an open source Java implementation of their protocol. That’s a good point.

About the team (yes, I know, I said I would not do ad hominem attacks, but they insist on that point):

The team behind Telegram, led by Nikolai Durov, consists of six ACM champions, half of them Ph.Ds in math. It took them about two years to roll out the current version of MTProto. Names and degrees may indeed not mean as much in some fields as they do in others, but this protocol is the result of thoughtful and prolonged work of professionals

(Seen on Hacker News)

They are not cryptographers, but they have some background in maths. Great!

So, what is the system’s architecture? Basically, a few servers everywhere in the world, routing messages between clients. Authentication is only done between the client and the server, not between clients communicating with each other. Encryption happens between the client and the server, but not using TLS (some home made protocol instead). Encryption can happen end to end between clients, but there is no authentication, so the server can perform a MITM attack.

Basically, their threat model is a simple “trust the server”. What goes around the network may be safely encrypted, although we don’t know anything about their server to server communication, nor about their data storage system. But whatever goes through the server is available in clear. By today’s standards, that’s boring, unsafe and careless. For equivalent systems, see Lavabit or iMessage. They will not protect your messages against law enforcement eavesdropping or server compromise. Worse: you cannot detect MITM between you and your peers.

I could stop there, but that would not be fun. The juicy bits are in the crypto design. The ideas are not wrong per se, but the algorithm choices are weird and unsafe, and they take the most complicated route for everything.

Network protocol

The protocol has two phases: the key exchange and the communication.

The key exchange registers a device to the server. They wrote a custom protocol for that, because TLS was too slow and complicated. That’s true, TLS needs two roundtrips between the client and the server to exchange a key. It also needs x509 certificates, and a combination of a public key algorithm like RSA or DSA, and eventually a key exchange algorithm like Diffie-Hellman.

Telegram greatly simplified the exchange by requiring three roundtrips, using RSA, AES-IGE (some weird mode that nobody uses), and Diffie-Hellman, along with a proof of work (the client has to factor a number, probably a DoS protection). Also, they employ some home made function to generate the AES key and IV from nonces generated by the server and the client (server_nonce appears in plaintext during the communication):

  • key = SHA1(new_nonce + server_nonce) + substr (SHA1(server_nonce + new_nonce), 0, 12);
  • IV = substr (SHA1(server_nonce + new_nonce), 12, 8) + SHA1(new_nonce + new_nonce) + substr (new_nonce, 0, 4);

Note that AES-IGE is not an authenticated encryption mode. So they verify the integrity. By using plain SHA1 (nope, not a real MAC) on the plaintext. And encrypting the hash along with the plaintext (yup, pseudoMAC-Then-Encrypt).

The final DH exchange creates the authorization key that will be stored (probably in plaintext) on the client and the server.

I really don’t understand why they needed such a complicated protocol. They could have made something like: the client generates a key pair, encrypts the public key with the server’s public key, sends it to the server with a nonce, and the server sends back the nonce encrypted with the client’s public key. Simple and easy. And this would have provided public keys for the clients, for end-to-end authentication.

About the communication phase: they use some combination of server salt, message id and message sequence number to prevent replay attacks. Interestingly, they have a message key, made of the 128 lower order bits of the SHA1 of the message. That message key transits in plaintext, so if you know the message headers, there is probably some nice info leak there.

The AES key (still in IGE mode) used for message encryption is generated like this:

The algorithm for computing aes_key and aes_iv from auth_key and msg_key is as follows:

  • sha1_a = SHA1 (msg_key + substr (auth_key, x, 32));
  • sha1_b = SHA1 (substr (auth_key, 32+x, 16) + msg_key + substr (auth_key, 48+x, 16));
  • sha1_c = SHA1 (substr (auth_key, 64+x, 32) + msg_key);
  • sha1_d = SHA1 (msg_key + substr (auth_key, 96+x, 32));
  • aes_key = substr (sha1_a, 0, 8) + substr (sha1_b, 8, 12) + substr (sha1_c, 4, 12);
  • aes_iv = substr (sha1_a, 8, 12) + substr (sha1_b, 0, 8) + substr (sha1_c, 16, 4) + substr (sha1_d, 0, 8);

where x = 0 for messages from client to server and x = 8 for those from server to client.
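To make that derivation easier to follow, here is a direct transcription of the quoted formulas in Rust (just a sketch using the sha1 crate’s Digest API, not code from Telegram):

use sha1::{Digest, Sha1};

fn sha1_of(data: &[u8]) -> Vec<u8> {
    Sha1::digest(data).to_vec()
}

// transcription of the quoted formulas; x = 0 for client -> server messages,
// x = 8 for server -> client messages
fn derive_key_iv(auth_key: &[u8], msg_key: &[u8], x: usize) -> (Vec<u8>, Vec<u8>) {
    let sha1_a = sha1_of(&[msg_key, &auth_key[x..x + 32]].concat());
    let sha1_b = sha1_of(&[&auth_key[32 + x..48 + x], msg_key, &auth_key[48 + x..64 + x]].concat());
    let sha1_c = sha1_of(&[&auth_key[64 + x..96 + x], msg_key].concat());
    let sha1_d = sha1_of(&[msg_key, &auth_key[96 + x..128 + x]].concat());

    // aes_key and aes_iv are 32 bytes each, entirely determined by auth_key and msg_key
    let aes_key = [&sha1_a[0..8], &sha1_b[8..20], &sha1_c[4..16]].concat();
    let aes_iv  = [&sha1_a[8..20], &sha1_b[0..8], &sha1_c[16..20], &sha1_d[0..8]].concat();

    (aes_key, aes_iv)
}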

Since the auth_key is permanent, and the message key only depends on the server salt (living 24h), the session (probably permanent, can be forgotten by the server) and the beginning of the message, the message key may be the same for a potentially large number of messages. Yes, a lot of messages will probably share the same AES key and IV.

Edit: Following Telegram’s comment, the AES key and IV will be different for every message. Still, they depend on the content of the message, and that is a very bad design. Keys and initialization vectors should always be generated from a CSPRNG, independent from the encrypted content.

Edit 2: the new protocol diagram makes it clear that the key is generated by a weak KDF from the auth key and some data transmitted as plaintext. There should be some nice statistical analysis to do there.

Edit 3: Well, if you send the same message twice (in a day, since the server salt lives 24h), the key and IV will be the same, and the ciphertext will be the same too. This is a real flaw, that is usually fixed by changing IVs regularly (even broken protocols like WEP do it) and changing keys regularly (cf Forward Secrecy in TLS or OTR). The unencrypted message contains a (time-dependent) message ID and sequence number that are incremented, and the client won’t accept replayed messages, or too old message IDs.

Edit 4: Someone found a flaw in the end to end secret chat. The key generated from the Diffie-Hellman exchange was combined with a server-provided nonce: key = (pow(g_a, b) mod dh_prime) xor nonce. With that, the server can perform a MITM on the connection and generate the same key for both peers by manipulating the nonce, thus defeating the key verification. Telegram has updated their protocol description and will fix the flaw. (That nonce was introduced to fix RNG issues on mobile devices).

Seriously, I have never seen anyone use the MAC to generate the encryption key. Even if I wanted to put a backdoor in a protocol, I would not make it so evident…

To sum it up: avoid at all costs. There are no new ideas, and they add their flawed homegrown mix of RSA, AES-IGE, plain SHA1 integrity verification, MAC-Then-Encrypt, and a custom KDF. Instead of Telegram, you should use well known and audited protocols, like OTR (usable in IRC, Jabber) or the Axolotl key ratcheting of TextSecure.

SafeChat, P2P encrypted messages?

For the first article in the new post series about “let’s pick apart the new kickstarted secure decentralized software of the week”, I chose SafeChat, which started just two days ago. Yes, I like to hunt young prey :p

A note, before we begin: this analysis is based on publicly available information at the time of writing. If the authors of the project give more information, I can update the article to match it. The goal is to assess, with what little we know about the project, whether it is a good idea to give it money. I will only concentrate on the technical parts, not on the team itself (even if, for some of those projects, I think they’re idiots running with scissors in hand).

What is SafeChat?

Open source encryption based instant messaging software

SafeChat is a brilliantly simple deeply secure instant messaging system for mobile phones and computers

SafeChat is an instant messaging software designed by Commercial Free. There is no real indication about who really works there, and where the company is based, except for David Crawford, who created the Kickstarter project and is based in Montreal in Canada.

Note that SafeChat is only a small part of the services they want to provide. Commercial Free will also have plans including an email encryption service (no info about that one) and cloud storage.

Available technical information

There is not much to see. They say they are almost done with the core code, but the only thing they present is some videos of what the interaction with the app could be.

Apparently, it is an instant messaging application with Android and iOS applications and some server components.  Session keys are generated for the communication between users. They will manage the server component, and the service will be available with a yearly subscription.

It seems they don’t want to release much information about the cryptographic components they use. They talk about “peer to peer encryption” (lol) which is open source and standard. If anyone understands what algorithm or protocol they refer to, please enlighten me. They also say they will mix in some proprietary code (so much for open source).

I especially like the part about NIST. They mock NIST, telling that they have thrown “all standard encryption commonly used today out the window”. I am still wondering what “open source and standard peer to peer encryption” means.

Network protocol

The iOS and Android applications will apparently provide direct communication between users. I guess that from their emphasis on P2P, but also from the price they claim: $10 per user per year would be a bit small to pay for server costs if they had to route all the messages.

P2P communication between phones is technically feasible. They would probably need to implement some TCP hole punching in their solution, but it is doable.

Looking at the video, it seems there is a key agreement before communication. I do not really like the interaction they chose to represent key agreement (with the colors and the smileys). There are too many different states, while people only need to know “are we safe now?”

I am not sure if there is a presence protocol. The video does not really show it. If there is no presence system, are messages stored until the person is online? Stored on the server or on the client? Does the server notify the client when the person becomes available?

Cryptography

By bringing together existing theories of cryptography and some proprietary code to bind them together, we are making a deeply encrypted private chatting system that continues to evolve as the field of cryptography does.

Yup, I really feel safe now.

Joke aside, here is what we can guess:

  • session keys for the communication between users. I don’t know if it is a Diffie-Hellman based protocol
  • no rekeying, ie no perfect forward secrecy
  • no info on message authentication or integrity verification
  • I am not sure if the app generates some asymmetric keys for authentication, if there is trust on first use, or whatever else
  • the server might not be very safe, because they really, really want to rely on German laws to protect it. If the crypto was fully managed client side, they would not care about servers taken down, they could just pop another somewhere.

There could be a PKI managed by Commercial Free. That would be consistent with the subscription model (short lived certificates are an easy way of limiting the usage of a service).

Threat model

Now, we can draw the rough threat model they are using:

What we want to do is make it impractical for an organization to snoop your communications as it would become very hard to find them and then harder still to decrypt them.

Pro tip: a system with a central server does not make it hard to find communications.

Attacker types:

  • phone thief: I don’t think they use client side encryption for credentials and logs. Phone thieves and forensics engineers won’t have a real problem there
  • network operator: they can disrupt the communication, but will probably not be able to decrypt or do MITM (I really think the server is managing the authentication part, along with setting up the communication)
  • law enforcement: they want to rely on German laws to protect their system. At the same time, they do not say they will move out to Germany to operate the system. If they stay in Canada, that changes the legal situation. If they use a certificate authority, protecting the server will be useless, because authorities can just ask the company for the key.
  • server attacker: the server will probably be Windows based (see the core developer’s skills). Since that design is really server centric, taking down the server might take down the whole service. And attacking it will reveal lots of interesting metadata, and probably offer MITM capabilities
  • nation state: please, stop joking…

So…

Really, nothing interesting here. I do not see any reason to give money to this project: there is nothing new, it does not solve big problems like anonymous messaging, or staying reliable if one server is down. Worse, it is probably possible to perform a MITM attack if you manage the server. Nowadays, if you create a cryptographic protocol with client side encryption, you must make sure that your security is based on the client, not the server.

Alternatives to this service:

  • Apple iMessage: closed source, only for iOS, encrypted message, MITM is permitted for Apple by the protocol, but “we have not architected the server for this”. Already available.
  • Text Secure by OpenWhisperSystems: open source, available for iOS and Android, uses SMS as a transport protocol, uses OTR (Off the Record protocol) to protect the communication, no server component. Choose Text Secure! It is really easy to use, and OTR is well integrated in the interface.