Programming VS Mathematics, and other pointless debates

I do not know who started this argument a few days ago. It feels like something coming from HN. Do you need to know mathematics to be a good programmer?

There are a lot of differing opinions. Maybe programming is a sub-branch of mathematics, or programming merely uses mathematics. Or maybe learning programming is closer to learning a new language. For me, saying that programming is about languages is like saying that literature is about languages. Sure, you need words to express concepts, some languages are better suited than others for that, and some concepts are better expressed in other languages. It is more like a hierarchy to me: philosophy formalizes concepts used by authors to write in common languages, and mathematics formalizes concepts used by programmers to create code in common languages.
But this is beside the point.

This debate sparks outrage, since it touches a central point of our education, and one that is often not taught very well. “Look, I do not use geometry while writing a loop, so maths are pointless for me”. A lot of developers will never learn basic algebra or logic and will never need it in their day job. And that’s okay.
Programming is not a single profession anymore; each and every one of us has a different definition of it. A mechanical engineer working on bridges, another working on metallic parts for cars, and a third working on plastic toys all have different needs and different techniques, although the fundamentals of the job (evaluating breaking strength, assembly time, production costs) are the same. That does not make one of these jobs worth more than the others.

The real problem is that we are still fighting among ourselves to define what our job is. The other pointless debate, about software being engineering, science or craft, is evidence of that. And it will stay hard to define for a long time.
We are in a unique position. Usually, when a new field emerges, either tinkerers launch it and good practices are later studied to turn it into engineering, or scientists create it, then the means of production become cheaper and crafters take over.
Computers were started by scientists, but ease of access gave crafters a good opportunity to take over. But that does not mean research stopped when people started coding at home. So now, in a relatively new field (less than a century old), while we are still exploring, we have a very large spectrum of jobs and approaches, from the most scientific to the most artistic. And that is okay: more world views will help us get better at our respective jobs.

So, while you are arguing that the other side is misguided, unrealistic or lacking rigor, take time to consider where they come from. They do not have the same job as you, and that job can seem pointless to you, but they can be good at it, so there is probably something you can learn from their approach. The only thing you should not forgive the other side is a lack of curiosity.

The network is the computer, the cluster is the RAM

There is a very weird part of web applications, where all the nice abstractions and clean reasoning go wrong: the interface between the code and a database. At best, there is a leaky abstraction of the database through an ORM, and you have to think about which methods to apply to get the underlying SQL query you need; at worst, you write queries and deserialize the results manually.

This happened because, at one point, applications needed to manipulate more data than their host’s memory could handle. That required a good abstraction over storage, efficient data-walking algorithms and fine-tuned caching. It also required thousands of hours of engineering to get a database that is at least bearable to use. And since so much work was put into those databases, you might as well implement as many features as possible there, to reuse all that fine engineering.

To work efficiently with these data warehouses and offload part of the selection work from the application, query languages inspired by logic programming were invented. Basically, they make it easy to work with relations: entity/attribute/value triplets like RDF, or tabular data. Those query languages are deliberately not Turing complete: they do not include loops, negation or unbounded recursion, which helps a lot when optimizing queries.

Unfortunately, the query language is the barrier between an application and its data. Instead of reasoning about what is in memory, the code must be transformed to load data from the database through a query, deserialize it, compute, reserialize the result and put it back in the database. Even worse, for efficiency’s sake, some developers push more and more logic into the database, with ever more complex queries, views and stored procedures.
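
To make that round trip concrete, here is a minimal sketch (Python with SQLite, and a hypothetical users table) of the load, deserialize, compute, reserialize cycle:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO users VALUES (1, 100)")

    # 1. load and deserialize: the row becomes a tuple in our memory
    row = conn.execute("SELECT balance FROM users WHERE id = ?", (1,)).fetchone()

    # 2. compute on the deserialized copy
    new_balance = row[0] + 50

    # 3. reserialize: push the result back through another query
    conn.execute("UPDATE users SET balance = ? WHERE id = ?", (new_balance, 1))
    conn.commit()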

What if we could reason directly about a cluster of data as if it were already in memory? I do not want to create a structure from a deserialized row, change a value, then put that row back “where id = $myId”. I want to access a structure that is already there in memory, and change the value directly (or clone it and change my index, but that is for another blog post).

“No, you cannot directly access data that is not already in your memory.” Sure I can. We already have powerful tools for that. L1 and L2 caches use that principle to load data from RAM and make it available faster to the CPU. Memory-mapped files can be lazily loaded, page by page, into virtual memory. Imagine loading data lazily from the network, straight into your address space… Nowadays we have 64-bit addresses, enough to address the whole world!
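
Memory-mapped files already give a taste of this. A minimal sketch in Python (the file name is illustrative): the OS faults pages in only when they are touched, so the program treats a file on disk as if it were a byte array in its address space.

    import mmap

    with open("huge_dataset.bin", "rb") as f:
        data = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        # no read() call: each access faults in only the page it needs
        header = data[0:16]
        record = data[1_000_000:1_000_016]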

“But this totally breaks your security model.” No, it does not. First, most database clusters already assume they are running on a trusted network. Second, since I see the cluster as part of my hardware, I think adding an MMU to the lot would work quite well.

“It does not work, because of concurrent accesses.” This already happens in databases, and this is where their powerful query languages get things wrong: if you have a powerful way to access multiple rows at the same time, you have to lock huge parts of the database at once in a transaction to run a mutating query. For virtual memory, we have a lot of interesting tools. Memory pages can be read-write or read-only. Locking through mutexes or software transactional memory could also be implemented at a cluster’s scale. But concurrency is a hard problem that is often better solved through good data architecture. Immutable data, colocation of related data, and append-only data structures work as well in memory as on a networked cluster.

This is of course a very big gap to jump, from our traditional databases to a total abstraction in memory, but I think it is an interesting alternative to consider.

There is another model to consider here, one that is currently adopted by large distributed databases: since an application cannot do all the work by just loading data in its memory space, let’s push code onto the data, and run a Turing complete query language on the cluster. This is actually the same kind of model, with worker threads running on your data while you wait for a “work done” message, but you still need to interface your app with the query language. Maybe someday I’ll be able to send a compiled function to run on a cluster.
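
A toy sketch of that model, with local worker processes standing in for cluster nodes (the sharding is made up for illustration):

    from multiprocessing import Pool

    # two in-process "shards" standing in for data living on two nodes
    SHARDS = [list(range(0, 1000)), list(range(1000, 2000))]

    def work(shard):
        # the function travels to the worker holding the data,
        # instead of the data travelling to the application
        return sum(x for x in shard if x % 7 == 0)

    if __name__ == "__main__":
        with Pool(processes=len(SHARDS)) as pool:
            partials = pool.map(work, SHARDS)  # wait for the "work done" messages
        print(sum(partials))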

All in all, the big tools that people built to fight the inefficiencies of yesterday’s technology have to be questioned today. By removing the complex abstractions and their obsolete limitations, we could obtain powerful and simple models for writing our future code.

Revisiting Zooko’s triangle

In 2001, Zooko Wilcox-O’Hearn conjectured that a key-value system, in the way its keys address those values, must make a compromise between three properties:

  • distributed: there is no central authority in the system (and the other nodes of the system are potentially untrusted)
  • secure: a name lookup cannot return an incorrect value, even in the presence of an attacker
  • human usable keys: keys that a human will be able to remember and write without errors

The conjecture stated that you may have at most two out of these three properties in the system.

Examples of those properties at work:

  • the DNS is secure (one possible record for a given name) and human usable (domain names can be learned), but not distributed (the server chain is centralized)
  • the PGP mapping between email and public keys is distributed (no central authority defining which email has which key) and human usable (public keys are indexed by emails) but not secure (depending on your view of the web of trust, identities could be falsified)
  • Tor .onion addresses are secure (the address is the hash of the public key) and distributed (nobody can decide that an address redirects to the wrong server) but not human meaningful (have you seen those addresses?)

Some systems, like NameCoin, have emerged lately that exhibit all three properties at the same time. So is the conjecture disproved? Not really.

When considering that triangle, people often make a mistake: assuming that you have to choose two of these properties and will never approach the third. These properties must not be seen as absolute.

Human usability is easily addressed with petname systems, where a software assistant helps you attach human-meaningful names to secure and distributed keys (at the risk of collisions, but not system-wide ones).
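
A petname table can be as simple as a local, user-owned mapping from memorable names to secure keys; a toy sketch (the addresses are made up):

    # local petname table: human-meaningful names -> secure, unmemorable keys
    petnames = {
        "alice": "dpqmmcxxdkwew3z6.onion",       # illustrative address
        "work-server": "ck2cmbnpmvqoq7fq.onion",
    }

    def resolve(name):
        # a collision here only affects this user's table, never the whole system
        return petnames[name]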

You can add some distributed features to a centralized system like the SSL certificate authorities by accepting multiple authorities.

NameCoin is distributed and human meaningful, and creates a secure view of the system through the blockchain.

And so on.

The three properties should be redefined with what we know ten years later.

Human usable names

We now understand that a human meaningful name is a complex concept. We see people typing “facebook” in a search engine because they would not remember “facebook.com”. Phone numbers are difficult to remember. Passwords, even when chosen through easy to remember methods, will be repeated at some point. And we could also talk about typosquatting…

Here is a proposal: a human-usable key is an identifier that a human can remember easily, in large numbers, and distinguish just as easily from other identifiers. I say identifier because it could be a string, a picture, anything that stands out enough to be remembered and whose differences can easily be spotted.

That definition is useful because it is quantifiable. We can measure how many 10-digit phone numbers a human can remember. We can measure how many differences a human can spot between two strings or pictures.

Decentralized and/or secure names

The original text defines distributed as “there is no central authority which can control the namespace, which is the same as saying that the namespace spans trust boundaries”. The trust boundary is defined as the space in which nodes may be vulnerable to each other (because they trust each other). In a fully distributed system, no node should be able to force a different view of the system onto any other node.

Secure means that there is only one correct answer for a name lookup, whoever does the lookup. This is the same as saying that there is a consensus (at least in a distributed system).

We can see that the decentralized and secure properties are at odds with the consensus problem in a distributed system. This is where systems like NameCoin make their compromise. They trade an absolutely secure view of the system for eventual consistency, and fully distributed trust boundaries for Byzantine fault problems. They guarantee that, at some point, and unless there are too many traitors in the system, there will be a universal and unique mapping for a human-readable name.

So, we could replace the “secure” property with: the whole system recognizes that there is only one good answer for a record at some point in the future. Again, this is measurable, and it also accounts for the oddity in DNS that records have to propagate (so views can temporarily differ). Systems where identifiers do not give unique responses will not satisfy that definition.

The last property, “decentralized”, can be formulated like this: a node’s view of the names cannot be influenced by a group of unauthorized nodes. In a centralized system, where there is essentially one node, this definition cannot apply. In a hierarchy of nodes, like DNS, we can easily see that a node’s view can be abused, since there is only one node you need to compromise to influence it: its first nameserver (this is exploited, for good and for bad reasons, in a lot of companies). In a distributed system, this definition becomes quantifiable and joins the Byzantine problem: how many traitor nodes do you need to modify a node’s view of the system?
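
To give that question a reference point (a classical result from the Byzantine generals literature, not something from Zooko’s original text): agreement among n nodes, f of which are traitors, is only guaranteed when n >= 3f + 1. A fully distributed naming system of 100 nodes can therefore tolerate at most 33 traitors before a node’s view of the names can be rewritten.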

Here, we have three new definitions that are more nuanced, that allow some compromise and modulation of the needs, and whose limits can be measured or calculated. These quantified limits can be useful to describe and compare future proposals for naming systems.

How to choose your secure messaging app

Since WhatsApp’s acquisition was announced, a lot of people have started switching to alternatives, trying to escape from Facebook. Some of them then discovered my article about Telegram, and a common answer was “hey, at least it is better than WhatsApp, because it is open source, faster and it has encryption”.

This is a very bad way to decide which application you should use. If you choose a secure messaging app, it must be because you need it, not just because you want to avoid Facebook.

Those are not good enough requirements:

  • independent from Facebook
  • fast
  • multi-platform
  • open source

Yes, even open source, because it does not magically make software safe.

So, what are good requirements? Well, I already have a list of requirements a secure messaging app should meet to even be considered. If an app does not meet those requirements, it may not be a good idea to use it.

But it still does not mean the app will fit your use case. So you must define your use case:

  • Why do you need it?
  • With whom will you communicate?
  • Who is the adversary?
  • What will happen if some of your information is revealed to the adversary?
  • Does it need to be always available?
  • For how long will it be used?

This is part of what I mean when I insist on having a threat model: you cannot choose correctly if you do not know the risks.

Here are a few examples that you could consider.

The activist in a protest

The activist must be able to communicate quickly within the crowd. Identifying information might not be the most important concern, because she can use burner phones (phones that will be abandoned after the protest). The most important feature is that the app should always be available. Phone networks have often been disrupted to hinder activist communication, so a way to send messages over WiFi or Bluetooth might be useful. The messages may be sent to a lot of different people, so being able to identify them might be important; but if the group is large enough to be infiltrated easily, then having no way to identify people becomes crucial.

Being able to send photos is important, because they might be the only proof of what happened in the protest. Here, I have in mind the excellent ObscuraCam app, which is able to quickly hide the faces of people in photos before sending them.

The application should not keep logs, or provide a way to quickly delete them, or encrypt them by default, because once someone is caught, the police will look through the phone.

The crypto algorithms and protocols should be safe and proven for that use case, because the adversaries will have the resources to exploit any flaw.

No need for a good update system if the devices will be destroyed after use.

The employee of a company with confidential projects

The adversaries here are other companies, or even other countries. The most important practice here is “need to know”: reduce the number of people who know the confidential information. That means the set of people communicating is small, and you can expect them to have a means of exchanging information securely (for example, to verify a public key).

Identifying who talks with whom is not really dangerous, because it is easy to track the different groups in a company anyway. You may be confident enough that this reduced group will not be infiltrated by the adversary. Messages should be stored, and ideally be searchable. File exchange should be supported.

There could be some kind of escrow system, to reveal information if you have a certain access level. Authentication is a crucial point.

The crypto may be more interesting in this case, because the needed flexibility can be provided by schemes like identity-based encryption. Enterprise policies might also be able to force regular updates of the system, so that everybody has the same protocol version at the same time, and any eventual flaw will be patched quickly.

The common user

It is you, me, anyone wanting to exchange private messages with friends or family. Here, trying to protect against the NSA is futile, because most of your contacts will not have the training needed. Trying to hide the contact list from Facebook is futile too: even if you protect that information, one of your contacts may not. The adversaries you should consider here are crooks, pirates, anyone that could exploit the private messages for criminal purposes (stealing bank info, blackmailing, sending malware, etc.).

An application fitting this use case should encrypt messages, preferably end to end, to limit the damage when the exchange server is compromised. The service might not provide any expectation of anonymity. Messages should be stored, but encrypting them is a good option, in case the device is lost or stolen.

The crypto does not need to be very advanced, but it should use common, well known designs.

There should be a good update system and a way to negotiate protocol versions (and forbid unsafe ones), because you will never be sure that everybody has performed all the needed updates.

Your use case here

Those were some common situations for which solutions exist, but there are many more possible use cases. If you are not sure about yours, do not hesitate to ask for help defining your threat model, and do not jump on a solution just because the marketing material says it is safe.

A good security solution will not only tell you what is protected, and how, but also what is not protected, and the security margins you have. It will also teach you the discipline you need to apply to get the most out of it.

The problem with meritocracy

Every time people discuss the hacker community and its diversity, I see someone waving the “meritocracy” argument: “It is not our fault those minorities are not well represented; if they knew more stuff or did more stuff, they would have a better status”.

It is easy to see how that argument is flawed: meritocracy is a power structure, and whenever a power structure is created, after some time it tends to reinforce its own community. But that is not my point right now.

I realized that the idea of meritocracy is so deeply ingrained in the hacker mindset that we have lost sight of what is important. I can see how the idea is appealing: once you prove you know stuff, people will recognize you, and that will be enough to motivate you to learn. Except it is not. Meritocracy is just another way to exclude people. Once you measure someone’s status by how much you perceive they know, things go downhill.

Some are good at faking knowledge. Some know their craft but do not talk that well. Some are not experts but have good ideas. Some would like to learn without being judged. Every time you dismiss someone’s opinion because of their apparent lack of knowledge, and every time you favor someone’s opinion because of their apparent knowledge, you are being unscientific and unwelcoming. You are not a hacker, you are just a jerk.

Somewhere along the way, people got too hung up on meritocracy, and forgot that you hack for knowledge and for fun, not for status. It is all about testing stuff, learning, sharing what you learned, discussing ideas and helping others do the same, whatever their skills or their experience. Status and power structures should have nothing to do with that.

Guess what? I pointed out that bad behaviour, but I am guilty of it too. I have to constantly keep myself in check, to avoid judging people instead of judging ideas. That’s alright. Doing the right thing always requires some effort.

Criteria for a crypto app

Following the previous article, people have asked me what I would consider a good secure system, and others have asked me to review their apps, so I think it will be interesting to explain my process when studying those projects.

Threat modeling

The most important thing I look for in a project is its threat model. This is the document that explains for whom the project was created, who the adversaries are, what they are trying to obtain, and which of these threats you are addressing.

Without that document, I cannot know if you considered all the possible actors, and I must infer the threat model from the protocol. That is relatively easy, but my view of the threat model might not correspond to what you intended.

With a good threat model, I can know right away what your target market is (e.g. sexting for teens, or secure reporting for journalists in war zones), see whether your users will understand the implications, whether they will need training, and, more importantly, whether your system can be safe in that context.

You cannot create a project and claim that it will solve all privacy problems with some magical crypto algorithm, against all adversaries, even state actors. I would prefer a useful tool for a niche with real, well-defined needs.

Prior art

As you have probably seen, the secure messaging space is already very crowded. If you come up with a new solution to an already solved problem, you need to justify it. Why didn’t you improve an existing project? Couldn’t you adapt someone else’s code, add a better UI?

The NIH syndrome is at the heart of innovation, so I am not against it. But in the case of crypto applications, it might be a good idea to use existing (and already audited) code, instead of writing a whole new protocol or algorithm from scratch.

Otherwise, if you are working on an unsolved problem, or improving on current solutions, be prepared to justify your choices at length if you employ unusual systems. I am not telling you to avoid fun stuff like Paillier’s cryptosystem, PIR or pairing-based cryptography. Just be aware that people will ask you about them.

Publications

That part is fundamental: if you are providing a new protocol or algorithm, you should publish it and ask for reviews before you start coding and acquiring users. I am not advising you to fire up LaTeX and write a paper in ACM format; just explaining your system on a webpage is fine. The crypto community is full of nice people who will point out any problems (and if you go the academic publishing route, you might even profit from other people’s funding to get reviews :p).

Some have said that the crypto community is full of bitter people eager to attack any new project, following the whole Telegram debacle. That tends to happen when you make a big announcement to get users, claim that it will solve every security problem, and dismiss the opinions of experts, all without having asked for review first.

Note that some of those experts have worked for years on a project before even thinking of communicating about it. As examples, check out Briar, Pond or Cryptosphere: quiet but interesting projects. They are not trying to get a lot of users quickly or to profit from the post-Snowden panic. They have been at it for a long time.

So, publish, ask for review, fix flaws, publish again, fix stuff, and repeat again and again. That is the smartest way to spend your time and money on your project. Once everything is developed and deployed, you will have a hard time trying to plug the holes.

Protocol design

Once we get into the technical stuff, the protocol design gives a high-level view of what you want to achieve. I’ll ask questions like:

  • Is it server-centric or P2P? (note: a network of servers introduces routing, but is not P2P)
  • Does it include authentication?
  • Is it encrypted end to end?
  • How do you protect against DoS?
  • Is it versioned? Do you allow for protocol version negotiation? Are the algorithms negotiated?
  • Can you revoke keys or identities?

Often, the protocol shows what you want to achieve with your system, and it addresses more threats than the crypto algorithms themselves. A good way to present your protocol is to use diagrams and to describe the message contents.

Do not insist on algorithms at this point: use general words to describe the primitives you need, like authenticated cipher, public key, key derivation function, MAC. You might change the algorithms later, so stating the properties you need will help reviewers understand what you want to achieve.
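
For example, a message format can be sketched at the level of primitives without naming a single algorithm (a hypothetical layout, in Python, for illustration only):

    from dataclasses import dataclass

    @dataclass
    class Envelope:
        sender_key_id: bytes  # identifies a long-term public key
        nonce: bytes          # fresh output of a CSPRNG, never reused
        ciphertext: bytes     # output of an authenticated cipher
        # no separate MAC field: the authenticated cipher provides integrity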

A specific note on servers versus peer-to-peer: it is an understandable feeling among geeks that P2P architectures look better, because they decentralize everything. But they can introduce other problems (like hole punching or Sybil attacks), and in some cases you will not be able to avoid servers (for message routing and retries, for mobile systems, etc.). Both types of systems are fine; just be aware of their shortcomings.

Cryptographic constructs

Cryptographic algorithms are not enough: you need to apply them correctly. I will have no pity if you say you use “military grade AES 256 encryption” but do not know what a block cipher mode is, or what Encrypt-then-MAC means. A lot of ugly details can hide here, so do not try to be clever; use battle-tested constructions:

  • add a separate authentication layer to Diffie-Hellman key exchanges
  • use an authenticated encryption mode (a sketch follows this list)
  • use RSA-OAEP instead of PKCS#1 v1.5 padding
  • know whether you need a nonce, an unpredictable number or a time-based ID
  • etc.
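
As promised in the list above, a minimal sketch of an authenticated encryption mode done with a battle-tested construction (AES-GCM through Python’s cryptography package), rather than a homemade composition of cipher and hash:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)

    nonce = os.urandom(12)  # 96-bit nonce, never reused with the same key
    ciphertext = aead.encrypt(nonce, b"the message", b"associated data")
    # decryption raises InvalidTag if anything was tampered with
    plaintext = aead.decrypt(nonce, ciphertext, b"associated data")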

This is one of the parts where crypto experts will ask annoying questions, because a lot of bugs come from there. They can also propose better solutions (safer, more performant, etc), so listen to them.

If you are employing an unusual scheme here, be prepared to justify it. It might be ok for you, but if the design looks weird to cryptographers, that will raise alarms. Your scheme could be safe, but if it has never been proven right, you are taking a risk, and your users will take that risk too. Is it worth it? Hint: your weird design should provide a unique property that no other algorithm has.

Choice of algorithms

Yes, I do not worry about algorithms until I am already deep into the system. It is not that hard to make correct choices there: keep up with recent attacks (i.e., avoid RC4), choose large enough keys, and choose correct elliptic curves.

Every algorithm has parameters that you need to get right, so be sure to document yourself on your algorithm choices:

  • AES-CBC needs an unpredictable initialization vector, but AES-CTR uses an incremented nonce (a sketch follows this list)
  • RSA needs a good exponent
  • Some elliptic curves work better for some operations
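
The sketch promised above, using Python’s cryptography package (all values are illustrative):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = os.urandom(32)

    # AES-CBC: a fresh, unpredictable 16-byte IV for every message
    cbc = Cipher(algorithms.AES(key), modes.CBC(os.urandom(16)))

    # AES-CTR: a unique initial counter block, never reused with the same key
    ctr = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16)))

    # RSA: 65537 is the commonly recommended public exponent
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)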

Even if you choose dubious algorithms, if your protocol was well designed, you will be able to move to better algorithms. Be careful with algorithm negotiation, though: a lot of smart people have been bitten by it before.

The implementation

This is probably the part I will skip, because I have neither the time nor the funding to thoroughly audit the code of every new project. I will often grep a bit through the code and look for some important points, but this is not something that should be done quickly. This is where protocol review shows its limits.

Even with a good design, a lot of vulnerabilities can be present in a flawed implementation. Crypto projects should undergo a careful audit like the one Least Authority performed recently on Cryptocat. And that is why you should not communicate about your project before it has been reviewed.

There are things you should always look for in your software projects:

  • encrypting data at rest: if you worry about stolen data, know that a mobile phone or laptop can be stolen
  • random number generation: you should use a CSPRNG, with a good source, and probably some user- or device-specific data (a sketch follows this list)
  • data backup: is it possible? is it safe?
  • software updates: are they downloaded from a secure source? Are the updates verified?
  • Do you use public key pinning?
  • How long are private keys stored as plaintext in memory?
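
The sketch promised above: in Python, the secrets module exposes the OS CSPRNG, while the default random module must never be used for anything secret:

    import random
    import secrets

    key = secrets.token_bytes(32)      # OS CSPRNG: fine for keys and nonces
    token = secrets.token_urlsafe(16)  # fine for session tokens

    bad_key = random.getrandbits(256)  # Mersenne Twister: predictable, not for secrets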

The implementation details are as important as the protocol itself. You can have a good protocol, but a small error in the code can greatly harm your users. Nevertheless, specifying your protocol is useful, because people can provide better implementations, or make it interoperate with other software. Having other implementations is a good thing: you will not control those versions, but their authors will be able to build cool stuff around your system and become part of your PR.

User interface

This part is more and more important, because we have been able to create safe systems for years, but often at the price of usability. The user experience of crypto apps needs a lot of innovation, and I’ll follow closely any interesting idea in that space: onboarding experience, useful alerts, user decision making, etc. People should be able to understand when there is a security problem.

I’ll state it once more: if you create a new crypto software, you HAVE to make it easy to use and understand. Some complexity is acceptable, but it must be compensated by documentation (with screenshots, etc) or training.

Other criteria

There are two other criteria I can think of, but they do not matter as much.

The first is the team. I have been accused of making fun of Telegram for waving around their team of PhDs, but the truth is that I was hopeful: a team full of smart people can come up with interesting designs and solve complex problems. If they do not deliver on that, I will be less indulgent. That does not mean I think less of people without big diplomas; I know too many smart people who dropped out of school to make that mistake. Ultimately, the important thing to judge is the design.

The last criterion is attitude. It is normal to be defensive when someone reviews your work, but that does not justify denial and dishonesty. People often take time off from their jobs to study your system, so they will be quick and to the point. If you do not answer, or refuse to explain your decisions, it will smell fishy. Even more so if you did not ask for a review before communicating about your project. But it does not matter that much: if you are humble and quick to answer, people may help you out of goodwill, whereas if you anger cryptographers, you may just have earned a free thorough audit 😀

Telegram, AKA “Stand back, we have Math PhDs!”

Disclaimer: this post is now very old and may not reflect the current state of Telegram’s protocol. There has been further research in the meantime, and this post should not be used to choose a secure messaging app. That said, on a personal note, I still think Telegram’s cryptosystem is weird, and its justifications are fallacious. If you want a recommendation on secure messaging apps: use a system based on the Axolotl/Signal protocol. It is well designed and has been audited. Signal and WhatsApp both use that protocol, and there are others.

Here is the second entry in our series about weird encryption apps: Telegram, which got some press recently.

According to their website, Telegram is “cloud based and heavily encrypted”. How secure is it?

Very secure. We are based on a new protocol, MTProto, built by our own specialists, employing time-tested security algorithms. At this moment, the biggest security threat to your Telegram messages is your mother reading over your shoulder. We took care of the rest.

(from their FAQ)

Yup. Very secure, they said it.

So, let’s take a look around.

Available technical information

Their website details the protocol. They could have added some diagrams, instead of text-only, but that’s still readable. There is also an open source Java implementation of their protocol. That’s a good point.

About the team (yes, I know, I said I would not do ad hominem attacks, but they insist on that point):

The team behind Telegram, led by Nikolai Durov, consists of six ACM champions, half of them Ph.Ds in math. It took them about two years to roll out the current version of MTProto. Names and degrees may indeed not mean as much in some fields as they do in others, but this protocol is the result of thougtful and prolonged work of professionals

(Seen on Hacker News)

They are not cryptographers, but they have some background in maths. Great!

So, what is the system’s architecture? Basically, a few servers around the world, routing messages between clients. Authentication is only done between a client and the server, not between communicating clients. Encryption happens between the client and the server, but not using TLS (they use a homemade protocol instead). Encryption can happen end to end between clients, but there is no authentication, so the server can perform a man-in-the-middle attack.

Basically, their threat model is a simple “trust the server”. What goes over the network may be safely encrypted, although we do not know anything about their server-to-server communication or their data storage. But whatever goes through the server is available in the clear. By today’s standards, that is boring, unsafe and careless. For equivalent systems, see Lavabit or iMessage: they will not protect your messages against law enforcement eavesdropping or server compromise. Worse: you cannot detect a MITM between you and your peers.

I could stop there, but that would not be fun. The juicy bits are in the crypto design. The ideas are not wrong per se, but the algorithm choices are weird and unsafe, and they take the most complicated route for everything.

Network protocol

The protocol has two phases: the key exchange and the communication.

The key exchange registers a device with the server. They wrote a custom protocol for that, because TLS was too slow and complicated. That is true: TLS needs two round trips between the client and the server to exchange a key. It also needs X.509 certificates, and a combination of a public key algorithm like RSA or DSA, possibly with a key exchange algorithm like Diffie-Hellman.

Telegram greatly simplified the exchange by requiring three round trips, using RSA, AES-IGE (a weird mode that nobody uses), and Diffie-Hellman, along with a proof of work (the client has to factor a number, probably as DoS protection). They also employ a homemade function to generate the AES key and IV from nonces generated by the server and the client (server_nonce appears in plaintext during the communication):

  • key = SHA1(new_nonce + server_nonce) + substr (SHA1(server_nonce + new_nonce), 0, 12);
  • IV = substr (SHA1(server_nonce + new_nonce), 12, 8) + SHA1(new_nonce + new_nonce) + substr (new_nonce, 0, 4);

Note that AES-IGE is not an authenticated encryption mode. So they verify the integrity. By using plain SHA1 (nope, not a real MAC) on the plaintext. And encrypting the hash along with the plaintext (yup, pseudoMAC-Then-Encrypt).

The final DH exchange creates the authorization key that will be stored (probably in plaintext) on the client and the server.

I really do not understand why they needed such a complicated protocol. They could have done something like this: the client generates a key pair, encrypts the public key with the server’s public key, and sends it to the server with a nonce; the server sends back the nonce encrypted with the client’s public key. Simple and easy. And it would have provided public keys for the clients, enabling end-to-end authentication.

About the communication phase: they use some combination of server salt, message ID and message sequence number to prevent replay attacks. Interestingly, they have a message key, made of the 128 lower-order bits of the SHA1 of the message. That message key transits in plaintext, so if you know the message headers, there is probably a nice info leak there.

The AES key (still in IGE mode) used for message encryption is generated like this:

The algorithm for computing aes_key and aes_iv from auth_key and msg_key is as follows:

  • sha1_a = SHA1 (msg_key + substr (auth_key, x, 32));
  • sha1_b = SHA1 (substr (auth_key, 32+x, 16) + msg_key + substr (auth_key, 48+x, 16));
  • sha1_c = SHA1 (substr (auth_key, 64+x, 32) + msg_key);
  • sha1_d = SHA1 (msg_key + substr (auth_key, 96+x, 32));
  • aes_key = substr (sha1_a, 0, 8) + substr (sha1_b, 8, 12) + substr (sha1_c, 4, 12);
  • aes_iv = substr (sha1_a, 8, 12) + substr (sha1_b, 0, 8) + substr (sha1_c, 16, 4) + substr (sha1_d, 0, 8);

where x = 0 for messages from client to server and x = 8 for those from server to client.
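
For readability, here is a direct transcription of that derivation into Python (hashlib only; auth_key and msg_key are byte strings):

    import hashlib

    def sha1(data):
        return hashlib.sha1(data).digest()

    def substr(s, start, length):
        return s[start:start + length]

    def derive_key_iv(auth_key, msg_key, x):
        # x = 0 for client -> server, x = 8 for server -> client
        sha1_a = sha1(msg_key + substr(auth_key, x, 32))
        sha1_b = sha1(substr(auth_key, 32 + x, 16) + msg_key
                      + substr(auth_key, 48 + x, 16))
        sha1_c = sha1(substr(auth_key, 64 + x, 32) + msg_key)
        sha1_d = sha1(msg_key + substr(auth_key, 96 + x, 32))
        # 32-byte key and 32-byte IV (IGE uses a double-length IV),
        # both deterministic functions of auth_key and msg_key alone
        aes_key = (substr(sha1_a, 0, 8) + substr(sha1_b, 8, 12)
                   + substr(sha1_c, 4, 12))
        aes_iv = (substr(sha1_a, 8, 12) + substr(sha1_b, 0, 8)
                  + substr(sha1_c, 16, 4) + substr(sha1_d, 0, 8))
        return aes_key, aes_iv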

Since the auth_key is permanent, and the message key depends only on the server salt (which lives for 24 hours), the session (probably permanent, though it can be forgotten by the server) and the beginning of the message, the message key may be the same for a potentially large number of messages. Yes, a lot of messages will probably share the same AES key and IV.

Edit: Following Telegram’s comment, the AES key and IV will be different for every message. Still, they depend on the content of the message, and that is very bad design. Keys and initialization vectors should always be generated from a CSPRNG, independently of the encrypted content.

Edit 2: the new protocol diagram makes it clear that the key is generated by a weak KDF from the auth key and some data transmitted as plaintext. There should be some nice statistical analysis to do there.

Edit 3: Well, if you send the same message twice (within a day, since the server salt lives for 24 hours), the key and IV will be the same, and the ciphertext will be the same too. This is a real flaw, one that is usually fixed by changing IVs regularly (even broken protocols like WEP do it) and changing keys regularly (cf. forward secrecy in TLS or OTR). The unencrypted message contains a (time-dependent) message ID and a sequence number that are incremented, and the client will not accept replayed messages or message IDs that are too old.
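
To see why content-derived keys are a problem, here is a toy demonstration (AES-CBC standing in for IGE, and a simplified stand-in KDF): identical plaintexts produce identical ciphertexts, which leaks message equality to anyone watching the wire.

    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    auth_key = b"\x01" * 256  # permanent shared key, illustrative value

    def encrypt(message):
        msg_key = hashlib.sha1(message).digest()[-16:]  # 128 low-order bits
        # stand-in KDF: any deterministic function of (auth_key, msg_key)
        # exhibits the same flaw
        digest = hashlib.sha1(msg_key + auth_key).digest()
        key, iv = hashlib.sha256(digest).digest(), digest[:16]
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        padded = message + b"\x00" * (-len(message) % 16)
        return enc.update(padded) + enc.finalize()

    assert encrypt(b"attack at dawn") == encrypt(b"attack at dawn")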

Edit 4: Someone found a flaw in the end to end secret chat. The key generated from the Diffie-Hellman exchange was combined with a server-provided nonce: key = (pow(g_a, b) mod dh_prime) xor nonce. With that, the server can perform a MITM on the connection and generate the same key for both peers by manipulating the nonce, thus defeating the key verification. Telegram has updated their protocol description and will fix the flaw. (That nonce was introduced to fix RNG issues on mobile devices).

Seriously, I have never seen anyone use the MAC to generate the encryption key. Even if I wanted to put a backdoor in a protocol, I would not make it so evident…

To sum it up: avoid at all costs. There are no new ideas here, just a flawed homegrown mix of RSA, AES-IGE, plain-SHA1 integrity verification, MAC-then-Encrypt, and a custom KDF. Instead of Telegram, you should use well-known and audited protocols, like OTR (usable over IRC or Jabber) or the Axolotl key ratchet of TextSecure.