Telegram, AKA “Stand back, we have Math PhDs!”

Disclaimer: this post is now very old and may not reflect the current state of Telegram’s protocol. There has been other research in the meantime, and this post should not be the basis for your choice of a secure messaging app. That said, on a personal note, I still think Telegram’s cryptosystem is weird, and its justifications are fallacious. If you want a recommendation on secure messaging apps: use a system based on the Axolotl/Signal protocol. It is well designed and has been audited. Signal and WhatsApp both use that protocol, and there are others.

Here is the second entry in our series about weird encryption apps. This one is about Telegram, which got some press recently.

According to their website, Telegram is “cloud based and heavily encrypted”. How secure is it?

Very secure. We are based on a new protocol, MTProto, built by our own specialists, employing time-tested security algorithms. At this moment, the biggest security threat to your Telegram messages is your mother reading over your shoulder. We took care of the rest.

(from their FAQ)

Yup. Very secure, they said it.

So, let’s take a look around.

Available technical information

Their website details the protocol. Some diagrams would have helped, rather than text only, but it is still readable. There is also an open source Java implementation of their protocol. That’s a good point.

About the team (yes, I know, I said I would not do ad hominem attacks, but they insist on that point):

The team behind Telegram, led by Nikolai Durov, consists of six ACM champions, half of them Ph.Ds in math. It took them about two years to roll out the current version of MTProto. Names and degrees may indeed not mean as much in some fields as they do in others, but this protocol is the result of thoughtful and prolonged work of professionals.

(Seen on Hacker News)

They are not cryptographers, but they have some background in maths. Great!

So, what is the system’s architecture? Basically, a few servers everywhere in the world, routing messages between clients. Authentication is only done between the client and the server, not between clients communicating with each other. Encryption happens between the client and the server, but not using TLS (a homemade protocol instead). Encryption can happen end to end between clients, but there is no authentication, so the server can perform a MITM attack.

Basically, their threat model is a simple “trust the server”. What travels over the network may be safely encrypted, although we don’t know anything about their server-to-server communication, nor about their data storage system. But whatever goes through the server is available in the clear. By today’s standards, that’s boring, unsafe and careless. For equivalent systems, see Lavabit or iMessage. They will not protect your messages against law enforcement eavesdropping or server compromise. Worse: you cannot detect a MITM between you and your peers.

I could stop there, but that would not be fun. The juicy bits are in the crypto design. The ideas are not wrong per se, but the algorithm choices are weird and unsafe, and they take the most complicated route for everything.

Network protocol

The protocol has two phases: the key exchange and the communication.

The key exchange registers a device with the server. They wrote a custom protocol for that, because TLS was too slow and complicated. It is true that TLS needs two round trips between the client and the server to exchange a key. It also needs X.509 certificates, and a combination of a public key algorithm like RSA or DSA, and optionally a key exchange algorithm like Diffie-Hellman.

Telegram greatly simplified the exchange by requiring three round trips, using RSA, AES-IGE (some weird mode that nobody uses), and Diffie-Hellman, along with a proof of work (the client has to factor a number, probably as DoS protection). Also, they employ a homemade function to generate the AES key and IV from nonces generated by the server and the client (server_nonce appears in plaintext during the communication):

  • key = SHA1(new_nonce + server_nonce) + substr (SHA1(server_nonce + new_nonce), 0, 12);
  • IV = substr (SHA1(server_nonce + new_nonce), 12, 8) + SHA1(new_nonce + new_nonce) + substr (new_nonce, 0, 4);
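For illustration, here is that derivation transcribed into Python. This is a sketch of the published formulas above, not code taken from any official client:

```python
# Sketch of the documented handshake key/IV derivation (hashlib only).
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def derive_handshake_key_iv(new_nonce: bytes, server_nonce: bytes):
    key = sha1(new_nonce + server_nonce) + sha1(server_nonce + new_nonce)[0:12]
    iv = (sha1(server_nonce + new_nonce)[12:20]
          + sha1(new_nonce + new_nonce)
          + new_nonce[0:4])
    return key, iv  # 32-byte key and 32-byte IV, as AES-IGE expects
```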

Note that AES-IGE is not an authenticated encryption mode. So they verify the integrity. By using plain SHA1 (nope, not a real MAC) on the plaintext. And encrypting the hash along with the plaintext (yup, pseudoMAC-Then-Encrypt).
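In outline, the criticized construction looks like this; encrypt_ige is a placeholder for an AES-IGE encryption routine (which is not in any standard library):

```python
# The "pseudoMAC-Then-Encrypt" construction, in outline. The SHA1 digest is
# not keyed, so it is not a MAC: anyone who knows the plaintext can compute it.
import hashlib

def pseudo_mac_then_encrypt(encrypt_ige, key, iv, plaintext: bytes) -> bytes:
    digest = hashlib.sha1(plaintext).digest()
    return encrypt_ige(key, iv, digest + plaintext)
```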

The final DH exchange creates the authorization key that will be stored (probably in plaintext) on the client and the server.

I really don’t understand why they needed such a complicated protocol. They could have made something like: the client generates a key pair, encrypts the public key with the server’s public key, sends it to the server with a nonce, and the server sends back the nonce encrypted with the client’s public key. Simple and easy. And this would have provided public keys for the clients, for end-to-end authentication.
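To make the comparison concrete, here is a minimal sketch of that simpler flow, under stated assumptions: rsa_encrypt, rsa_decrypt and generate_keypair are hypothetical helpers standing in for any standard RSA(-OAEP) implementation, and the nonce is a fixed 16 bytes:

```python
import os

def client_register(server_pub, generate_keypair, rsa_encrypt):
    client_pub, client_priv = generate_keypair()
    nonce = os.urandom(16)
    # Encrypt our public key and a nonce under the server's public key.
    blob = rsa_encrypt(server_pub, client_pub + nonce)
    return client_priv, nonce, blob          # blob goes to the server

def server_accept(blob, server_priv, rsa_decrypt, rsa_encrypt):
    data = rsa_decrypt(server_priv, blob)
    client_pub, nonce = data[:-16], data[-16:]
    # Echo the nonce under the client's key, proving we decrypted the blob.
    return client_pub, rsa_encrypt(client_pub, nonce)
```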

About the communication phase: they use some combination of server salt, message id and message sequence number to prevent replay attacks. Interestingly, they have a message key, made of the 128 lower order bits of the SHA1 of the message. That message key transits in plaintext, so if you know the message headers, there is probably some nice info leak there.

The AES key (still in IGE mode) used for message encryption is generated like this:

The algorithm for computing aes_key and aes_iv from auth_key and msg_key is as follows:

  • sha1_a = SHA1 (msg_key + substr (auth_key, x, 32));
  • sha1_b = SHA1 (substr (auth_key, 32+x, 16) + msg_key + substr (auth_key, 48+x, 16));
  • sha1_c = SHA1 (substr (auth_key, 64+x, 32) + msg_key);
  • sha1_d = SHA1 (msg_key + substr (auth_key, 96+x, 32));
  • aes_key = substr (sha1_a, 0, 8) + substr (sha1_b, 8, 12) + substr (sha1_c, 4, 12);
  • aes_iv = substr (sha1_a, 8, 12) + substr (sha1_b, 0, 8) + substr (sha1_c, 16, 4) + substr (sha1_d, 0, 8);

where x = 0 for messages from client to server and x = 8 for those from server to client.
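Transcribed into Python for clarity, as a direct sketch of the formulas above:

```python
import hashlib

def substr(data: bytes, start: int, length: int) -> bytes:
    return data[start:start + length]

def derive_message_key_iv(auth_key: bytes, msg_key: bytes, x: int):
    sha1 = lambda d: hashlib.sha1(d).digest()
    sha1_a = sha1(msg_key + substr(auth_key, x, 32))
    sha1_b = sha1(substr(auth_key, 32 + x, 16) + msg_key
                  + substr(auth_key, 48 + x, 16))
    sha1_c = sha1(substr(auth_key, 64 + x, 32) + msg_key)
    sha1_d = sha1(msg_key + substr(auth_key, 96 + x, 32))
    aes_key = (substr(sha1_a, 0, 8) + substr(sha1_b, 8, 12)
               + substr(sha1_c, 4, 12))
    aes_iv = (substr(sha1_a, 8, 12) + substr(sha1_b, 0, 8)
              + substr(sha1_c, 16, 4) + substr(sha1_d, 0, 8))
    return aes_key, aes_iv  # x = 0 client->server, x = 8 server->client
```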

Since the auth_key is permanent, and the message key only depends on the server salt (which lives 24 hours), the session (probably permanent, though it can be forgotten by the server) and the beginning of the message, the message key may be the same for a potentially large number of messages. Yes, a lot of messages will probably share the same AES key and IV.

Edit: Following Telegram’s comment, the AES key and IV will be different for every message. Still, they depend on the content of the message, and that is a very bad design. Keys and initialization vectors should always be generated from a CSPRNG, independent of the encrypted content.

Edit 2: the new protocol diagram makes it clear that the key is generated by a weak KDF from the auth key and some data transmitted as plaintext. There should be some nice statistical analysis to do there.

Edit 3: Well, if you send the same message twice (within a day, since the server salt lives 24 hours), the key and IV will be the same, and the ciphertext will be the same too. This is a real flaw, one that is usually fixed by changing IVs regularly (even broken protocols like WEP do it) and by changing keys regularly (cf. forward secrecy in TLS or OTR). The unencrypted message contains a (time-dependent) message ID and a sequence number that are incremented, and the client won’t accept replayed messages, or message IDs that are too old.

Edit 4: Someone found a flaw in the end-to-end secret chat. The key generated from the Diffie-Hellman exchange was combined with a server-provided nonce: key = (pow(g_a, b) mod dh_prime) xor nonce. With that, the server can perform a MITM on the connection and generate the same key for both peers by manipulating the nonce, thus defeating the key verification. Telegram has updated their protocol description and will fix the flaw. (The nonce had been introduced to work around RNG issues on mobile devices.)

Seriously, I have never seen anyone use the MAC to generate the encryption key. Even if I wanted to put a backdoor in a protocol, I would not make it so evident…

To sum it up: avoid at all costs. There are no new ideas here, just a flawed homegrown mix of RSA, AES-IGE, plain-SHA1 integrity verification, MAC-Then-Encrypt, and a custom KDF. Instead of Telegram, you should use well known and audited protocols, like OTR (usable over IRC or Jabber) or the Axolotl key ratchet of TextSecure.

Theoretical definitions for crypto wannabes

Every week, I hear about a new piece of secure software designed to protect your privacy, thwart the NSA/GCHQ and save kittens. Most of the time, though, they’re started by people who are very enthusiastic yet unskilled.

They tend to concentrate directly on choosing algorithms and writing code, instead of stepping back and thinking a bit about what they want to develop.

Sure, they probably spent some time saying things like:

  • that piece of data should absolutely be encrypted
  • users will all have key pairs to authenticate themselves
  • we should use AES, that’s the safest choice (what are these “modes” you’re talking about?)

That is not how you design a protocol. That is not how you design software that uses encryption. And that is not how you will design the next secure distributed social network.

To design your system, you need three things:

  • a good threat model
  • theoretical tools addressing the threats
  • algorithms implementing these theoretical tools

As you see, most of the projects only have the third item, and that’s insufficient to design a correct system. If you don’t have a good threat model, you don’t have a good mental model of your users and attackers, their means and their objectives. If you don’t have the theoretical tools, you will try to shoehorn your favorite algorithm onto the problem without knowing if it really fits (example: using plain hash algorithms to store passwords :p).

So, in this post, I’ll provide those (simplified) theoretical definitions. You will probably recognize some of them.

High level view

First, you need to forget vague notions like “privacy”. Instead, use these terms to describe the properties you want to achieve:

  • authentication: you can recognize which entity you are communicating with
  • authorization: the entity cannot get access to data it has no permission on (note that it is different from authentication)
  • confidentiality: the data should not be readable by an entity that has no permission on it (it can be protected by crypto, but also by policies in the code)
  • integrity: unauthorized modification of the data can be detected and marked as invalid
  • non-repudiation: an entity cannot deny it has executed an action
  • deniability: an entity _can_ deny it has executed an action

Ok, now that we have some basic properties, let’s apply them: think for a long time about the actors of the system (users, malicious users, admins, sysadmins, random attacker on the network, etc), what authorizations they have, what they should not get access to, what data moves on the network and between whom.

You should now have a very basic threat model and a rough overview of your system or protocol: you know what part of the network communications should be confidential, you know where you would need to authenticate.

You will now need some ideas about the types of attacks that could happen to your system, because you probably did not think of everything. Separate your system into logical parts (like “client”, “server”, etc.), observe them, and observe how they communicate.

Common attacks

Security properties

Here are some security properties that will be useful when you will try to choose algorithms later:

  • Random oracle: a system that deterministically answers every question with a random answer from its answer space. You cannot predict what it will answer, but if you send the same question twice, it will answer the same thing both times.
  • Perfect secrecy: for any two plaintext messages of the same size, an attacker cannot distinguish which plaintext maps to which ciphertext; the adversary learns nothing from the ciphertext alone. Example: the One Time Pad (see the sketch after this list).
  • Semantic security: the computational relaxation of perfect secrecy; whatever the adversary can learn about the plaintext from the ciphertext is negligible.
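A minimal sketch of the one-time pad, to make the perfect secrecy example concrete: the key is random, as long as the message, and used only once.

```python
import os

def otp(key: bytes, data: bytes) -> bytes:
    assert len(key) == len(data)           # key as long as the message
    return bytes(k ^ b for k, b in zip(key, data))

msg = b"attack at dawn"
key = os.urandom(len(msg))                 # used once, then thrown away
ct = otp(key, msg)
assert otp(key, ct) == msg                 # XOR is its own inverse
```

Given only the ciphertext, every plaintext of the same length is equally likely, which is exactly the perfect secrecy property.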

Security tests

These properties can be tested by creating a “game” in which the attacker tries to guess information about the data:

  • IND-CPA (indistinguishability under chosen-plaintext attack): the adversary can generate as many ciphertexts as he wants. Then, he chooses two messages m0 and m1 of the same length, those messages are encrypted, and one of the ciphertexts is sent back to him. The adversary should not be able to guess which message was used to generate that ciphertext (note that this is just one way of testing for CPA; there are many other schemes, some with stronger properties). A toy version of this game is sketched after this list.
  • IND-CCA (indistinguishability under chosen-ciphertext attack): the attacker can get the decryption of arbitrary ciphertexts, but should not, from this, be able to decrypt any other ciphertext.
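Here is a toy version of the IND-CPA game in Python. The encrypt function and the adversary object (with choose_messages and guess methods) are hypothetical placeholders, not a real test harness:

```python
import os, secrets

def ind_cpa_game(encrypt, adversary, key_len=16):
    key = os.urandom(key_len)
    oracle = lambda m: encrypt(key, m)           # free encryption queries
    m0, m1 = adversary.choose_messages(oracle)   # two same-length messages
    assert len(m0) == len(m1)
    b = secrets.randbits(1)
    challenge = oracle(m1 if b else m0)
    guess = adversary.guess(oracle, challenge)   # oracle access continues
    return guess == b   # a secure scheme keeps Pr[win] close to 1/2
```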

Attack patterns

Here are some common attack types that can be applied to crypto protocols. The list is not exhaustive, and covers only crypto attacks: there are many more ways to attack a system.

  • Replay attacks: the attacker has observed some valid (encrypted or not) data going on the wire, and tries to send it again. Obviously, it should not be accepted
  • MITM (Man In The Middle): the attacker can observe and modify live data running between two actors of the system. In the worst case, the attacker should not be able to forge valid data, decrypt data, or impersonate one of the users. In the best case, it should be detectable.
  • Oracle attack: when an algorithm or protocol has a part that can act as an oracle (can be asked something and give answer, like a server), an attacker could exploit flaws in the algorithm to get useful information on the data (then the oracle is not a random oracle). Timing attacks are part of this type of attack. See also padding oracle attacks, or the recent BREACH attack on TLS.
  • Offline attack: the attacker got access to some encrypted data (on the wire, or by accessing a disk somewhere), stored it, and can spend an arbitrary amount of time trying to decrypt it

You should now have a better view of the system: what are the parts of the system that need protection, what attacks they must resist, and what properties they should have.

That means we can go to the next part: choosing the tools to implement the solution.

General cryptographic functions

No, we will not choose algorithms right now. That would be too easy 😀

We will choose from a list of cryptographic constructions that implement some of the security properties of the system, and combine them to meet all the needed properties:

  • Secure Pseudo Random Function: a function from spaces K (keys) and X (messages) to Y (another message space). The basic definition is that if you choose one of these functions at random (equivalent to choosing a random k from K), its output will appear totally random (testable with IND-CPA).
  • Pseudo Random Permutation: a PRF where X and Y are the same space. It is bijective (every y from Y maps to exactly one x from X, and every y is an output of the function), and there exists an efficient inversion function for it (from Y to X). Example: AES (in ECB mode, which is unsafe for common use)
  • Message Authentication Code: defines a pair of algorithms. One of them takes a key k and a message m and outputs a code c. The other takes k, m, c and outputs True or False. An attacker should not be able to forge a valid c without knowing k. A PRF could be used to construct a MAC system, and so could a hash function. Example: HMAC (sketched after this list)
  • Authenticated encryption: an encryption function (with semantic security under the CPA) where the attacker cannot forge new ciphertexts that decrypt correctly. Example: AES-GCM.
  • Hash function: collision resistant (you cannot find distinct messages m1, m2 such that H(m1) == H(m2); there are several levels of collision resistance). Not easily invertible. Usually fast. Example: SHA2.
  • Trapdoor function: a function that is easy to compute, but for which finding the inverse is hard unless you have some specific secret information (with proper padding, such schemes can reach IND-CCA security). Example: RSA.
  • Zero knowledge proof: a way to prove something to the other party in a communication without giving her any information beyond the proof itself.
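To make two of these constructions concrete, here is a short Python sketch: HMAC from the standard library, and AES-GCM assuming the third-party pyca/cryptography package is installed:

```python
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# MAC: only someone holding the key can produce or verify the tag.
mac_key = os.urandom(32)
msg = b"attack at dawn"
tag = hmac.new(mac_key, msg, hashlib.sha256).digest()
# Verification must use a constant-time comparison to avoid timing oracles.
assert hmac.compare_digest(tag, hmac.new(mac_key, msg, hashlib.sha256).digest())

# Authenticated encryption: tampered ciphertexts fail to decrypt.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)          # never reuse a nonce with the same key
ct = AESGCM(key).encrypt(nonce, msg, None)
assert AESGCM(key).decrypt(nonce, ct, None) == msg
```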

There are a lot of other constructions, depending on your needs, from low-level algorithms like the Diffie-Hellman key exchange to higher-level protocols like OTR. Again, the constructions will depend on the security properties.

Choosing the algorithms

Can we do it now? YES! But there are rules. You must not choose an algorithm because it’s hyped or because someone said so in an old book. Basically, you choose algorithms that implement the properties you need (like authenticated encryption), and you choose the parameters of those algorithms (key size, exponent, elliptic curve) depending on the strength you need. The key size, for example, roughly determines how long encrypted data should remain impossible to decrypt. Those parameters also define the performance of the algorithm. Don’t choose them without consulting experts, or you will face problems similar to those encountered by the projects that used low RSA exponents (it looks good from a performance standpoint, but it introduces very bad security).

Am I done now?

Nope. We have only defined some very high-level parts. Creating a protocol requires a lot of thought about:

  • how you establish the communications
  • protocol negotiation (version, algorithms, etc)
  • key exchange
  • authentication
  • nonce usage
  • storing sessions
  • handling lost connections
  • renegotiation
  • closing the connection
  • etc.

As you can see, designing a protocol involves a lot more than choosing a few algorithms. Note that this was only a very rough overview of what you would need to create a safe system. And we did not even start coding!

So, if you want to build the next privacy protecting system, please talk to experts. They don’t necessarily want to make you feel bad. They just have a lot of formal tools and the experience needed to see what will not work.

Warning your users about a vulnerability

Somebody just told you about a vulnerability in your code. Moreover, they published a paper about it. Even worse, people have been very vocal about it.
What can you do now? The usual (and natural) reaction is to downplay the vulnerability, and try to keep it confidential. After all, publishing a vuln will make your users unsafe, and it is bad publicity for your project, right?

WRONG!

The best course of action is to communicate a lot. Here’s why:

Communicate with the security researcher

Yes, I know, we security people tend to be harsh, quickly flagging a vulnerable project as “broken”, “dangerous for your customers” or even “EPIC FAIL”. That is, unfortunately, part of the folklore. But behind that, security researchers are often happy to help you fix the problem, so take advantage of that!

If you try to silence the researcher, you will not get any more bug reports, but your software will still be vulnerable. Worse, someone else will undoubtedly find the vulnerability, and may sell it to governments and/or criminals.

Vulnerabilities are inevitable, like bugs. Even if you’re very careful, people will still find ways to break your software. Be humble and accept them; they’re one of the costs of writing software. The only thing that should be on your mind when you receive a security report is protecting your users. The rest is just ego and fear of failure.

So, be open, talk to security researchers, and make it easy for them to contact you (tips: create a page dedicated to security on your website, and provide a specific email for that).

Damage control of a public vulnerability

What do you do when you discover the weakness on the news/twitter/whatever?

Communicate. Now.

Contact the researchers, contact the journalists writing about it, the users whining about the issue, write on your blog, on Twitter, on Facebook, on IRC, and tell them this: “we know about the issue, we’re looking into it, we’re doing everything we can to fix it, and we’ll update you as soon as it’s fixed”.

This will buy you a few hours or days to fix the issue. You can’t say anything else, because you probably don’t know enough about the problem to formulate the right opinion. What you should not say:

  • “we’ve seen no report of this exploited in the wild”, yet
  • “as far as we know, the bug is not exploitable”, but soon it will be
  • “the issue happens on a test server, not in production” = “we assume that the researcher didn’t also test on prod servers”
  • “no worries, there’s a warning on our website saying that it’s beta software”. Beta means people are using it.

Anything more you say during those few days might harm your project. Take it as an opportunity to cool your head, work on the vulnerability, test for regressions, find other mitigations, and plan the new release.

Once it is done, publish the security report:

Warning your users

So, now you have fixed the vulnerability. You have to tell your users about it. As time passes, more and more people will learn about the issue, so they might as well get it from you.

You should have a specific webpage for security issues. You may fear that criminals might use it to attack your software or website. Don’t worry, they don’t need it, they have other ways to know about software weaknesses. This webpage is for your users. It has multiple purposes:

  • showing that you handle security issues
  • telling which versions someone should not use
  • explaining how to fix some vulnerabilities if people cannot update

For each vulnerability, here is what you should display:

  • the title (probably chosen by the researcher)
  • the security researcher’s name
  • the CVE identifier
  • the chain of events:
    • day of report
    • dates of back and forth emails with the researcher
    • day of fix
    • day of release (for software)
    • day of deploy (for web apps)
  • the affected versions (for software, not web apps)
  • the affected components (maybe the issue only concerns specific parts of a library)
  • how it can be exploited. You don’t need to get into the details of the exploit.
  • The target of the exploit. Be very exhaustive about the consequences. Does the attacker get access to my contacts list? Can he run code on my computer? etc.
  • The mitigations. Telling users to update to the latest version is not enough. Some users will not update right away, because they have their own constraints. Maybe they have a huge user base and can’t update quickly. Maybe they rely on an old feature removed in the latest version. Maybe their users refuse to update because it could introduce regressions. What you should tell them:
    • which version is safe (new versions with the fix, but also old versions if the issue was introduced recently)
    • if it’s open source, which commit introduced the fix. That way, people managing their own internal versions of software can fix it quickly
    • how to fix the issue without updating (example for a recent Skype bug). You can provide a quick fix that will buy time for sysadmins and downstream developers to test, fix and deploy the new version.

Now that you have a good security report, contact the security researchers, journalists and users again, provide them with the report, and emphasize what users risked and how they can fix it. You can now comment publicly about the issue, write a blog post, and send emails to your users to tell them how much you worked to protect them. And don’t forget the time-honored formula: “we take security very seriously and have taken measures to prevent this issue from happening ever again”.

Do you need help fixing your vulnerabilities? Feel free to contact me!

Worst Case Messaging Protocol

We already have lots of tools to communicate safely, and in some cases, anonymously. These tools work well for their purpose, but spying and law enforcement tools have also improved.

People are forced to decrypt their hard drives in airports. Rootkits are developed by police forces. DPI systems are installed at the country level. Using the graph of who communicates with whom, it is possible to map the whole hierarchy of an organisation, even if all communications are encrypted. How can we defend against that?

I believe that the analysis of organisations is one of the biggest threats right now. The ability to map an organization is the ability to destroy it by seizing a few members. That may be useful against terrorists, but these tools are too powerful, and they may end up in the wrong hands. Terrorists already know how to defend against such systems, but what about ordinary people? Any company, association or union could be broken anytime.

I think we need tools to build resilient organizations. A compromised node shouldn’t bring down the whole system. Intercepted messages shouldn’t give too much information about the different parties. People can still find ways to send their messages safely (PGP, dead drops, handing USB keys from person to person), but we need a framework to build decentralized human systems.

Here is the solution I propose: a messaging protocol designed to protect identities and route around interception. This protocol is designed to work in the worst case (hence its name), when any communication could be intercepted, and even when we can only use offline communication.

Every message is encrypted, signed and nearly anonymized (if you already know the emitter or receiver, you can recognize their presence, but this can be mitigated). To introduce redundant routes in the network, a simple broadcast is used (the protocol is not designed for IP networks, but for human networks, with limited P2P communication). Any node can be used to transmit all the messages. The fundamental idea is that if you’re compromised, the whole organization can still work safely without you.

Prerequisites

Assumptions

  • Every communication can be intercepted
  • If a node is compromised, every message it can read or write is compromised
  • People can still create a P2P (IRL) network to communicate (e.g. handing a USB key to someone else)

Goals

  • Confidentiality
  • Anonymity
  • Authentication
  • Deniability? (I am just a messenger)
  • Untraceability
  • Can work offline (worst case)

For this protocol, we need an RNG, an HMAC function, a symmetric cipher and an asymmetric cipher. I have not yet decided which algorithms must be used. They could be specified in the message format, to permit future changes of algorithms.

Message format

  • nonce1
  • HMAC( sender’s public key, nonce1 )
  • expiration date
  • symcrypt( symmetric key, message )
  • nonce2
  • For each receiver:
    • asymcrypt( receiver1’s public key, symmetric key )
    • HMAC( receiver1’s public key, nonce2 )
    • asymcrypt( receiver2’s public key, symmetric key )
    • HMAC( receiver2’s public key, nonce2 )
  • asymsign( sender’s private key, whole message )

Here is how it works:
An identity is represented by one or more public keys (you can have a different public key for each of your contacts, it makes messages difficult to link to one person). They are not publicly linked to a name or email address, but the software layer could handle the complexity with a good enough UI. I do not cover key exchange: users must design their own way to meet or establish communications.

Every user is a hub: all messages are broadcast by all users. To prevent data explosion, expiration dates are used. Users recognize the messages they receive and the identity of the emitter using the HMACs of the public keys. All communications are encrypted, and I borrow ideas from convergent encryption to support multi-recipient messages.

Handling the messages

Parsing

  • verify if one of my public keys is in the receivers by calculating the HMAC with nonce2
  • validate the signature (if I already know the emitter’s public key)
  • decrypt the symmetric key that was encrypted with my public key
  • decrypt the message
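A sketch of the receiver check from the steps above, assuming the message’s receiver section has already been split into (encrypted key, tag) pairs; hmac_tag is a placeholder for whatever HMAC function the spec settles on:

```python
def find_my_slot(receiver_entries, my_public_keys, nonce2, hmac_tag):
    # receiver_entries: list of (encrypted_sym_key, tag) pairs
    for encrypted_sym_key, tag in receiver_entries:
        for pub in my_public_keys:
            if tag == hmac_tag(pub, nonce2):
                return pub, encrypted_sym_key   # addressed to us
    return None                                 # not for us: just relay it
```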

Creating messages

  • write the message
  • choose the emitter’s identity and public key
  • choose the receivers and the public keys used
  • choose an expiration date
  • generate two random nonces
  • generate a symmetric key
  • insert nonce 1 and the HMAC of nonce 1 and my key
  • insert the expiration date
  • encrypt the message with the symmetric key and insert it
  • insert nonce 2
  • for each receiver:
    • encrypt the symmetric key with the receiver’s public key and insert the result
    • insert the HMAC of nonce 2 and the receiver’s public key
  • sign the whole message with my private key and append the signature
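The same steps as a Python sketch; hmac_tag, sym_encrypt, asym_encrypt and asym_sign are placeholders for the yet-to-be-chosen algorithms, and field framing/serialization is left out:

```python
import os

def create_message(plaintext, expiry, sender_priv, sender_pub, receiver_pubs,
                   hmac_tag, sym_encrypt, asym_encrypt, asym_sign):
    # expiry: the expiration date, already encoded as bytes
    nonce1, nonce2 = os.urandom(16), os.urandom(16)
    sym_key = os.urandom(32)                   # fresh key for every message
    parts = [nonce1,
             hmac_tag(sender_pub, nonce1),     # anonymized sender marker
             expiry,
             sym_encrypt(sym_key, plaintext),
             nonce2]
    for pub in receiver_pubs:
        parts.append(asym_encrypt(pub, sym_key))  # key wrapped per receiver
        parts.append(hmac_tag(pub, nonce2))       # anonymized receiver marker
    body = b"".join(parts)
    return body + asym_sign(sender_priv, body)
```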

Sending messages

  • Take all the messages you received
  • filter them according to some rules:
    • remove expired messages
    • remove messages too large?
    • etc
  • add your messages to the list
  • give the message list to the next node in the network
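A relay step might look like this; the message objects (with expiry and raw fields) and the size limit are hypothetical:

```python
import time

MAX_SIZE = 64 * 1024   # example limit, not part of the protocol

def relay(received, own_messages, now=None):
    now = time.time() if now is None else now
    keep = [m for m in received
            if m.expiry > now and len(m.raw) <= MAX_SIZE]
    return keep + own_messages   # hand this list to the next node
```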

This protocol is voluntarily simple and extensible. New techniques can be devised on top of it, like adding dummy receivers to a message or sending dummy messages to complicate analysis. It is independent of the type of data transmitted, so any type of client could use this protocol (it sits at the same level as IP). Most of the logic will live in the client and the contact manager, which should present an easy-to-use interface.

I truly hope we will never need this to communicate, but recent events have convinced me that I need to release it, just in case.

I am open to any critique or suggestion about this protocol. I will soon open a repository to host some code implementing it, and it will of course be open to contributions 🙂