Telegram, AKA “Stand back, we have Math PhDs!”

Disclaimer: this post is now very old and may not reflect the current state of Telegram’s protocol. There has been other research in the meantime, and this post should not be used for your choice of secure messaging app. That said, on a personal note, I still think Telegram’s cryptosystem is weird, and its justifications are fallacious. If you want a recommendation on secure messaging apps: use a system based on the Axolotl/Signal protocol. It is well designed and has been audited. Signal and WhatsApp are both using that protocol, and there are others.

Here is the second entry in our series about weird encryption apps: Telegram, which got some press recently.

According to their website, Telegram is “cloud based and heavily encrypted”. How secure is it?

Very secure. We are based on a new protocol, MTProto, built by our own specialists, employing time-tested security algorithms. At this moment, the biggest security threat to your Telegram messages is your mother reading over your shoulder. We took care of the rest.

(from their FAQ)

Yup. Very secure, they said it.

So, let’s take a look around.

Available technical information

Their website details the protocol. They could have added some diagrams, instead of text-only, but that’s still readable. There is also an open source Java implementation of their protocol. That’s a good point.

About the team (yes, I know, I said I would not do ad hominem attacks, but they insist on that point):

The team behind Telegram, led by Nikolai Durov, consists of six ACM champions, half of them Ph.Ds in math. It took them about two years to roll out the current version of MTProto. Names and degrees may indeed not mean as much in some fields as they do in others, but this protocol is the result of thoughtful and prolonged work of professionals.

(Seen on Hacker News)

They are not cryptographers, but they have some background in maths. Great!

So, what is the system’s architecture? Basically, a few servers everywhere in the world, routing messages between clients. Authentication is only done between the client and the server, not between clients communicating with each other. Encryption happens between the client and the server, but not using TLS (some homemade protocol instead). Encryption can happen end to end between clients, but there is no authentication, so the server can perform a MITM attack.

Basically, their threat model is a simple “trust the server”. What goes around the network may be safely encrypted, although we don’t know anything about their server to server communication, nor about their data storage system. But whatever goes through the server is available in clear. By today’s standards, that’s boring, unsafe and careless. For equivalent systems, see Lavabit or iMessage. They will not protect your messages against law enforcement eavesdropping or server compromise. Worse: you cannot detect MITM between you and your peers.

I could stop there, but that would not be fun. The juicy bits are in the crypto design. The ideas are not wrong per se, but the algorithm choices are weird and unsafe, and they take the most complicated route for everything.

Network protocol

The protocol has two phases: the key exchange and the communication.

The key exchange registers a device to the server. They wrote a custom protocol for that, because TLS was too slow and complicated. That’s true: TLS needs two roundtrips between the client and the server to exchange a key. It also needs X.509 certificates, and a combination of a public key algorithm like RSA or DSA, and possibly a key exchange algorithm like Diffie-Hellman.

Telegram greatly simplified the exchange by requiring three roundtrips, using RSA, AES-IGE (some weird mode that nobody uses), and Diffie-Hellman, along with a proof of work (the client has to factor a number, probably a DoS protection). Also, they employ a homemade function to generate the AES key and IV from nonces generated by the server and the client (server_nonce appears in plaintext during the communication):

  • key = SHA1(new_nonce + server_nonce) + substr (SHA1(server_nonce + new_nonce), 0, 12);
  • IV = substr (SHA1(server_nonce + new_nonce), 12, 8) + SHA1(new_nonce + new_nonce) + substr (new_nonce, 0, 4);
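A minimal sketch of that derivation in Python, assuming both nonces are raw byte strings (the function and variable names are mine, not Telegram’s):

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def derive_key_iv(new_nonce: bytes, server_nonce: bytes):
    """Sketch of the ad hoc key/IV derivation quoted above.

    SHA1 outputs 20 bytes, so the concatenations below yield a
    32-byte AES key and a 32-byte IV (AES-IGE uses a double-length IV).
    """
    key = sha1(new_nonce + server_nonce) + sha1(server_nonce + new_nonce)[:12]
    iv = (sha1(server_nonce + new_nonce)[12:20]
          + sha1(new_nonce + new_nonce)
          + new_nonce[:4])
    return key, iv
```

Note how both outputs are deterministic functions of the two nonces, one of which travels in plaintext.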

Note that AES-IGE is not an authenticated encryption mode. So they verify the integrity. By using plain SHA1 (nope, not a real MAC) on the plaintext. And encrypting the hash along with the plaintext (yup, pseudoMAC-Then-Encrypt).

The final DH exchange creates the authorization key that will be stored (probably in plaintext) on the client and the server.

I really don’t understand why they needed such a complicated protocol. They could have made something like: the client generates a key pair, encrypts the public key with the server’s public key, sends it to the server with a nonce, and the server sends back the nonce encrypted with the client’s public key. Simple and easy. And this would have provided public keys for the clients, for end-to-end authentication.
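A toy model of the simpler flow I have in mind, with symbolic stand-ins for the public key operations (no real cryptography here, just the message structure):

```python
import os

# Toy stand-ins: enc(owner, data) just records who is allowed to read it.
def enc(owner, data):
    return ("enc", owner, data)

def dec(owner, box):
    tag, for_owner, data = box
    assert tag == "enc" and for_owner == owner
    return data

# Client: generate a key pair, send the public key + a nonce under the server's key.
client_nonce = os.urandom(16)
client_pub = "client_pub"           # stand-in for a freshly generated public key
msg1 = enc("server", (client_pub, client_nonce))

# Server: decrypt, answer with the nonce under the client's public key.
pub, nonce = dec("server", msg1)
msg2 = enc(pub, nonce)

# Client checks the nonce round-tripped: the server holds its private key.
assert dec(client_pub, msg2) == client_nonce
```

Two messages, and the client ends up with a public key the peer could later use for end-to-end authentication.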

About the communication phase: they use some combination of server salt, message id and message sequence number to prevent replay attacks. Interestingly, they have a message key, made of the 128 lower order bits of the SHA1 of the message. That message key transits in plaintext, so if you know the message headers, there is probably some nice info leak there.

The AES key (still in IGE mode) used for message encryption is generated like this:

The algorithm for computing aes_key and aes_iv from auth_key and msg_key is as follows:

  • sha1_a = SHA1 (msg_key + substr (auth_key, x, 32));
  • sha1_b = SHA1 (substr (auth_key, 32+x, 16) + msg_key + substr (auth_key, 48+x, 16));
  • sha1_c = SHA1 (substr (auth_key, 64+x, 32) + msg_key);
  • sha1_d = SHA1 (msg_key + substr (auth_key, 96+x, 32));
  • aes_key = substr (sha1_a, 0, 8) + substr (sha1_b, 8, 12) + substr (sha1_c, 4, 12);
  • aes_iv = substr (sha1_a, 8, 12) + substr (sha1_b, 0, 8) + substr (sha1_c, 16, 4) + substr (sha1_d, 0, 8);

where x = 0 for messages from client to server and x = 8 for those from server to client.
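The derivation above can be transcribed directly into Python; this sketch assumes auth_key is the 256-byte authorization key and msg_key the 16-byte message key:

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def mtproto_key_iv(auth_key: bytes, msg_key: bytes, client_to_server: bool = True):
    """Sketch of the quoted derivation; x selects different auth_key
    slices depending on the direction of the message."""
    x = 0 if client_to_server else 8
    sha1_a = sha1(msg_key + auth_key[x:x + 32])
    sha1_b = sha1(auth_key[32 + x:48 + x] + msg_key + auth_key[48 + x:64 + x])
    sha1_c = sha1(auth_key[64 + x:96 + x] + msg_key)
    sha1_d = sha1(msg_key + auth_key[96 + x:128 + x])
    aes_key = sha1_a[:8] + sha1_b[8:20] + sha1_c[4:16]
    aes_iv = sha1_a[8:20] + sha1_b[:8] + sha1_c[16:20] + sha1_d[:8]
    return aes_key, aes_iv
```

Everything fed into the KDF besides auth_key is msg_key, which transits in plaintext.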

Since the auth_key is permanent, and the message key only depends on the server salt (living 24h), the session (probably permanent, can be forgotten by the server) and the beginning of the message, the message key may be the same for a potentially large number of messages. Yes, a lot of messages will probably share the same AES key and IV.

Edit: Following Telegram’s comment, the AES key and IV will be different for every message. Still, they depend on the content of the message, and that is a very bad design. Keys and initialization vectors should always be generated from a CSPRNG, independent from the encrypted content.

Edit 2: the new protocol diagram makes it clear that the key is generated by a weak KDF from the auth key and some data transmitted as plaintext. There should be some nice statistical analysis to do there.

Edit 3: Well, if you send the same message twice (in a day, since the server salt lives 24h), the key and IV will be the same, and the ciphertext will be the same too. This is a real flaw, that is usually fixed by changing IVs regularly (even broken protocols like WEP do it) and changing keys regularly (cf Forward Secrecy in TLS or OTR). The unencrypted message contains a (time-dependent) message ID and sequence number that are incremented, and the client won’t accept replayed messages, or too old message IDs.

Edit 4: Someone found a flaw in the end to end secret chat. The key generated from the Diffie-Hellman exchange was combined with a server-provided nonce: key = (pow(g_a, b) mod dh_prime) xor nonce. With that, the server can perform a MITM on the connection and generate the same key for both peers by manipulating the nonce, thus defeating the key verification. Telegram has updated their protocol description and will fix the flaw. (That nonce was introduced to fix RNG issues on mobile devices).
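The attack is easy to demonstrate with a toy Diffie-Hellman group (the group and the numbers below are illustrative only): because the final key is the DH result xored with a server-chosen nonce, a server running two separate DH exchanges can pick the nonces so that both peers end up with the same key.

```python
# Toy DH group, for illustration only (2**127 - 1 is a Mersenne prime).
p, g = 2**127 - 1, 5

def dh(secret, peer_pub):
    return pow(peer_pub, secret, p)

a, b = 1234567, 7654321        # Alice's and Bob's secrets
s1, s2 = 1111, 2222            # server's secrets towards Alice and Bob

# The server answers each peer itself, so it knows both raw DH keys.
raw_alice = dh(a, pow(g, s1, p))   # what Alice computes
raw_bob   = dh(b, pow(g, s2, p))   # what Bob computes
assert raw_alice == dh(s1, pow(g, a, p))
assert raw_bob   == dh(s2, pow(g, b, p))

# key = DH_result xor server_nonce lets the server force any key K:
K = 0xDEADBEEF
nonce_alice = raw_alice ^ K
nonce_bob   = raw_bob ^ K

assert raw_alice ^ nonce_alice == raw_bob ^ nonce_bob == K
# Both peers now "verify" the same key fingerprint, yet the server reads everything.
```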

Seriously, I have never seen anyone use the MAC to generate the encryption key. Even if I wanted to put a backdoor in a protocol, I would not make it so evident…

To sum it up: avoid at all costs. There are no new ideas, and they add their flawed homegrown mix of RSA, AES-IGE, plain SHA1 integrity verification, MAC-Then-Encrypt, and a custom KDF. Instead of Telegram, you should use well known and audited protocols, like OTR (usable in IRC, Jabber) or the Axolotl key ratcheting of TextSecure.


139 thoughts on “Telegram, AKA “Stand back, we have Math PhDs!””

  1. Thank you for the feedback, but your article contains serious mistakes, most likely based on misunderstanding the setup. Kindly take a look at this detailed scheme:

    > the server can perform a MITM attack.
    > you cannot detect MITM between you and your peers.

    NOT true. You can compare key visualization in the clients.

    > Yes, a lot of messages will probably share the same AES key and IV.

    NOT true. AES key and IV are different for each message because they depend on the message contents (msg_id and sequence number are always different).

    As a result, no real threats to the Telegram protocol have been identified yet.

    • I used this document (and others on the website) as a base for this article, but did not install the app. How do you “compare the key visualization”? Is it mandatory? Could you provide a screenshot?

      Also, making the key and IV dependent on the message is a really bad design. You may think that it is not a real threat because nobody bothered breaking it for now, but it is still an unsafe construct.

      • I mostly meant this diagram:

        It’s a recent addition and it is clear that you didn’t use it when you wrote this article. Otherwise it would have been easy to avoid the obvious mistakes.

        Comparing key visualization is not mandatory, but is an obvious action for anybody not trusting the server.

        The rest looks like matters of taste as opposed to objective reasoning. Can you name an actual attack?

        More mistakes:

        > But whatever goes through the server is available in clear.

        Not true. Telegram’s secret chats are not “clear” to the server.

        > They will not protect your messages against law enforcement eavesdropping

        Not true. Telegram servers don’t have the encryption key for secret chats (these are created on the clients via DH)

      • About eavesdropping: it still seems there is no mandatory authentication of the peer in your spec. If there is, please elaborate, and explain how it is done.

        The diagram was not available at the time I wrote the article. You added it afterward. And with that diagram, it is clear that you are doing a MAC-Then-Encrypt, which is a flawed construct, with SHA1, which has not been designed for that, and that hash appears in the clear along with the message and is included in the key.


        Was it really that hard to use an authenticated cipher mode like GCM? Or if you’re worried about performance on ARM, use AES-CTR, with HMAC-SHA1? The message can then be (with Encrypt-Then-MAC) {enc(key1, message); HMAC(key2, enc(key1, message))} with key1 and key2 safely derived from the auth key. That is a correct way to do it (not the only one), and it is simple. Anyone could write a safer protocol than the one you came up with.
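A sketch of that Encrypt-Then-MAC shape in Python. To stay self-contained it uses a toy SHA1-based keystream where real AES-CTR would go; the composition, not the cipher, is the point:

```python
import hashlib
import hmac

def ctr_stream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy CTR keystream (SHA1-based stand-in for AES-CTR, illustration only)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha1(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key1: bytes, key2: bytes, nonce: bytes, msg: bytes) -> bytes:
    """Encrypt with key1, then MAC the ciphertext with key2."""
    ct = bytes(a ^ b for a, b in zip(msg, ctr_stream(key1, nonce, len(msg))))
    tag = hmac.new(key2, nonce + ct, hashlib.sha1).digest()
    return nonce + ct + tag

def open_(key1: bytes, key2: bytes, blob: bytes):
    nonce, ct, tag = blob[:16], blob[16:-20], blob[-20:]
    # Verify before decrypting: bogus messages are dropped without any cipher work.
    if not hmac.compare_digest(tag, hmac.new(key2, nonce + ct, hashlib.sha1).digest()):
        return None
    return bytes(a ^ b for a, b in zip(ct, ctr_stream(key1, nonce, len(ct))))
```

open_ checks the tag before touching the ciphertext, so invalid messages are rejected without doing any decryption work.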

      • > flawed homegrown mix of RSA, AES-IGE, plain SHA1 integrity verification, MAC-Then-Encrypt

        Another false point in your review. We do not use MAC-Then-Encrypt.

        > The diagram was not available at the time I wrote the article.

        It is true. We grew tired of “experts” too lazy to read the full documentation. But you still have to have a look there to fully understand Telegram’s encryption mechanism.

        >> And with that diagram, it is clear that you are doing a MAC-Then-Encrypt, which is a flawed construct

        Again, we do not use MAC-then-encrypt. Our scheme is closer to MAC-and-encrypt with some essential modifications. These modifications solve the same problem that you solve by the GCM mode for AES, but they are much faster on mobile devices.

        Sorry, your review and comments are full of emotions and wishful thinking, but lack the understanding of what we do.

      • Alright, after a good night of sleep, I’m not emotional anymore 🙂

        I read your full documentation and looked at your diagram, and it still looks like a MAC-Then-Encrypt. If it is not, then please explain what those essential modifications are and the rationale behind them. Also, you cannot be faster than Encrypt-Then-MAC at rejecting invalid messages, even on mobiles, so I still don’t see your justification.

        Other questions still unanswered:
        Why SHA1 instead of a real MAC?
        Why did you use that custom KDF instead of a well known one? (well known KDFs all have parameters that you can adapt, even for mobile devices)
        What led to the design of the auth key exchange? It is overly complex (although I quite like the proof of work for the client, is that a DoS protection?)
        You still did not provide examples of how the key verification is done between clients. Is it mandatory?
        Is the key generated used for long term communication between clients? Or does it change from one conversation to the next?
        Did you consider adding some forward secrecy?

        I am willing to update the article with all the corrections you want, as long as your answers are more elaborate than a simple “you are wrong, we know better”, like you did repeatedly here, on Hacker News and on Twitter yesterday.

      • > Still looks like MAC-Then-Encrypt

        It is not. Our setup is rather an improvement on MAC-And-Encrypt in that the encryption key (sic!) and iv are MAC-dependent.

        > Why SHA1 instead of a real MAC?

        It is faster, when it comes to sending large photos and videos. And since this means still requiring at least 2^128 operations (instead of 2^256 with, say, SHA-2) to even begin trying to break this scheme, this trade-off seems fair.

        > Why did you use that custom KDF instead of a well known one? (well known KDFs all have parameters that you can adapt, even for mobile devices)

        We will consider this, thank you.

        > What led to the design of the auth key exchange? It is overly complex (although I quite like the proof of work for the client, is that a DoS protection?)

        Yes, this is a DoS protection. And in case a DoS is detected, p and q may even be increased.

        > You still did not provide examples of how the key verification is done between clients. Is it mandatory?

        It is not mandatory: since we are a mass market messenger, this could be a serious barrier for everyday users. That said, the interface offers a way of comparing visualizations of the key in the form of identicons:

        We may add an option to forbid Secret Chat initialization, unless the user has confirmed the key (using a QR code, NFC, etc.) for very advanced users.

        > Is the key generated used for long term communication between clients? Or does it change from one conversation to the next?

        Each secret chat (a user can create an unlimited amount of secret chats) will have a new key. Secret chats expire upon logout.

        > Did you consider adding some forward secrecy?

        Yes. Unfortunately, our clients do not support this yet — we are working on our apps to change this soon. For the moment you can achieve this by deleting secret chats and creating new ones, or logging out periodically.

        It is supported in the protocol — the primitive p_q_inner_data_temp can be used to generate temporary keys with limited TTL to achieve PFS.

      • Using SHA1 in that setup might be fine, because it is too difficult to build a length extension attack in a MAC-Then-Encrypt setup.
        I don’t get why you did not go with an Encrypt-Then-MAC setup (also, I just remembered that it’s specifically a MAC-and-Encrypt that you are using). Even ignoring the security properties, since you want to avoid doing too many calculations on mobiles, it is a better solution, because you can directly ignore invalid messages by verifying the MAC. With your system, the client has to decrypt the whole message before validating it. There is nothing to prevent a malicious user from sending a lot of long and invalid messages to a user.

        About the secret chats, you’re telling me that users have to verify a visual fingerprint of the encryption key, but that the key will be deleted on logout, right? What happens when one of the users logs out, but not the other? On the next communication, they will need to verify the key again, right? Did you consider having long term user identities based on public keys?

        Could you explain the reasons that motivated the design decisions in the auth key exchange protocol?

      • > With your system, the client has to decrypt the whole message before validating it. There is nothing to prevent a malicious user from sending a lot of long and invalid messages to a user.

        When it comes to ordinary chats, these would be stopped by the server. While this is true for secret chats, when you compare network capacity to processing power (in terms of decrypting AES) for an ordinary smartphone, the network will likely give out first.

        > What happens when one of the users logs out, but not the other? On the next communication, they will need to verify the key again, right? Did you consider having long term user identities based on public keys?

        Upon logout of one of the parties, the secret chat becomes ‘cancelled’ and no new messages can be sent to it (same happens if one of the participants simply decides to delete the secret chat). A new secret chat can then be established between the users (an unlimited amount of secret chats, actually). As for long term user identities based on public keys we may go for this eventually as an advanced feature.

        > Could you explain the reasons that motivated the design decisions in the auth key exchange protocol?

        In part, because we wanted to avoid having to rely on merely the server or merely the client for random number generation. This decision from 2012 helped us stay more robust compared to many other solutions, as random generation bugs on Android were revealed in August 2013, for example.

      • How exactly should I compare the key with my partner, considering that my chat partner is 500 km away?
        Actually, I did such a test with a partner next to my shoulder, and we started a secret chat, but the visualization key was not the same. It looked similar but was not the same “picture”. Does this mean that the chat was less secure or encrypted? Thanks for any answer.

      • There’s a large discussion right now on ways to verify key fingerprints. Pictures are not the best way, because human brains tend to gloss over the details. And people do not know what to do if the pictures are different (which means the communication is not secured, and someone may be actively trying to exploit it).

  2. > and the message key only depends on the server salt (living 24h)

    Still not true, even after your edit.

    > the message key may be the same for a potentially large number of messages

    Still not true. It’s impossible: msg_key equals the SHA1 of the message body, which includes the time (with precision to 2^-32), the sequence number, and a secure random session_id generated by the client.

    > Keys and initialization vectors should always be generated from a CSPRNG, independent from the encrypted content.

    Did you notice that 128 bytes of pregenerated entropy are added to the AES key?

    • That’s correct, I’ll edit again.

      You have 128 bytes of pregenerated entropy that are used over and over, employed in a KDF with a piece of data transmitted in plaintext. With a KDF as quick as yours, I would really worry about some clever statistical analysis being done on that algorithm.

  3. I’m actually glad this discussion ensued. This provides valuable feedback to Telegram, and Telegram seems to take it to heart.

    This is what you get and _want_ when open sourcing your code. Sure, there are flaws. That’s why they solicit feedback by OSing it.

    I would still rather use this than Whatsapp for example of which I know _nothing_ except that they had breaches in the past.

      • > Feedback is indeed valuable. Although so far the biggest lesson is that we need to improve our documentation.

        To be honest, the biggest lesson/problem is your arrogance.

        The security is based on roll-your-own, new, unproven, obscure primitives. The idea of having both a normal chat and a secret chat is hilarious (few ppl will use secret chat), the idea of using QR codes to verify something is stupid (almost no-one will use it), and the self destruct feature is a lie. This trash is not secure, not in practice, and probably not in theory either.

        Uninstalled upon watching Steve Gibson's podcast and reading this blog.

  4. I did a little investigation of the code myself, looking specifically at the CLI client’s code because it’s useful to know if they can actually write decent code, and prose and PR can make a bad system appear good. – I’d suggest just staying clear. The implementation is terrible and has so many examples of lazy and ill-thought out writing that even if the principle of the crypto is right, there are so many holes in how they’ve done it that taking out the code even in normal use is probably easy, never mind when someone is trying to attack it. But hey, what do I know…

    • Thank you for your scrutiny.

      Telegram is an open platform and everybody is free to use our API. The CLI in question was created by an independent developer, who built this interface for personal purposes and was kind enough to share his work on GitHub.

      Try contacting him there, he will most likely be grateful for your comments on his code culture.

      • I’ve taken a look at the Android code, not the CLI code. Man that makes my eyes bleed.

        Nested if statements that are completely unnecessary. Deserialized objects & the content of those objects are implicitly trusted (I can spot several cases where you have uncaught NumberFormatException possibilities for example). It also appears you have the possibility of killing my battery while you wait to close connections – in an attempt to save resource usage I imagine.

        I’d also speculate that you have a vulnerability where someone could cause a client to think it’s expecting a connection to close, oh, and you use Thread.sleep() on an Android.

        Thanks, but I’ll steer clear.

  5. > Well, if you send the same message twice (in a day, since the server salt lives 24h), the key and IV will be the same, and the ciphertext will be the same too.

    Not true. The key and IV are NEVER the same. Please see above: each message ALWAYS includes the time and sequence number. Strict monotonicity of time and sequence number is enforced. Sending a message with the same time and sequence number would be a classic replay attack, which we are protected against.

    • Fair enough, I’ll fix it. That was not clear from the documentation (I see you added the message layout in the meantime, thanks).

      What about the other questions? Also, do you have a clear threat model?

      • Just for me as an uninitiate to crypto, what do you mean by threat model? All the possible ways to breach security, right?

      • A threat model is a thorough assessment of a system’s assets (usernames, passwords, files, credit card numbers, etc.), a list of possible attackers with different capabilities and access levels (passive eavesdropping, active MITM, malicious user, etc.), and a list of possible points of attack, even seemingly innocuous ones.

        With such a model, you can quickly see if by combining some attacks, one of the possible attackers can get the upper level access, or read/write some data it should not.

        It is usually the first thing you do when you design a secure system or protocol, because it is useless to develop it without knowing against whom you must defend. And you use it throughout the life of the project, because all the features should be checked through that model’s lens.

        So far, most of the secure messaging systems I have seen did not even bother establishing their threat model, though.

      • Aha! Splendid, thanks for elucidating that to me.

        The model must be a living document, right? I’m a developer and I find it nigh impossible to design a system beforehand.

  6. Hi, Géal

    Telegram backer, Pavel Durov, will give $200,000 in BTC to the first person to break the Telegram encrypted protocol. Starting today, each day Paul (+79112317383) will be sending a message containing a secret email address to Nick (+79218944725). In order to prove that Telegram crypto was indeed deciphered and claim your prize, send an email to the secret email address from Paul’s message.

      • Good move and well done — but what about Telegram servers’ (potential) MITM capability? This is more important than it seems, we know NSA and other listeners are possibly able to get into such set-ups. This has to be 100% secure from an internal MITM, too!

      • That is the problem with the contest: it does not take into account a real MITM. For an interesting contest, we would need to be able to intercept and tamper with messages, and send our own, instead of just looking at the valid messages being exchanged.

    • This doesn’t prove anything. Hackers in dorm rooms might not be able to attack the system because they have no control over the server and network hardware, but governments, the NSA, and the KGB do. And they will not claim the $200K, but make it appear as if everything is fine. Never mind Telegram itself.

  7. Since this inaccurate review is still being referenced from time to time, we took the liberty of re-reading the current edit. Sadly, it still contains multiple mistakes.

    > Basically, their threat model is a simple “trust the server”.

    We have created secret chats just for this reason. People who don’t trust the server can use secret chats and compare their keys in order to make sure no MITM is possible.

    > But whatever goes through the server is available in clear.

    This sounds confusing, since messages in secret chats are only available to the server in their encrypted form, and the server does not have the key.

    > They will not protect your messages against law enforcement eavesdropping or server compromise.

    See above. If you’ve compared the key visualization with your partner, you can be quite sure that no MITM happened and the server does not have access to your key. Therefore, when it comes to secret chats, Telegram could only provide hackers and the government with unencrypted garbage.

    > Worse: you cannot detect MITM between you and your peers.

    Wrong. You can compare your keys in order to make sure no MITM is possible.

    > Note that AES-IGE is not an authenticated encryption mode. So they verify the integrity. By using plain SHA1 (nope, not a real MAC) on the plaintext. And encrypting the hash along with the plaintext (yup, pseudoMAC-Then-Encrypt).

    While this is true, the emotional coloring behind the passage implies that this is somehow broken. This is not true, see our other comments above or the tech FAQ for details:

    > That message key transits in plaintext, so if you know the message headers, there is probably some nice info leak there.

    This is not a real attack, although it may look like one. The headers contain enough random information (128 bits), so that even if you could restore a message from its SHA1 (which, by the way, has not been achieved in the 20 years of its existence), you would still need to repeat the process 2^128 times and then find the one real message.

    > Still, they depend on the content of the message, and that is a very bad design. Keys and initialization vectors should always be generated from a CSPRNG, independent from the encrypted content.

    You should back this up.

    > the new protocol diagram makes it clear that the key is generated by a weak KDF

    Unfounded opinion.

    > Seriously, I have never seen anyone use the MAC to generate the encryption key. Even if I wanted to put a backdoor in a protocol, I would not make it so evident…

    You make it sound as if just the MAC is used in this case, which is wrong. What we do is described here:

    • A lot of what you label as mistakes are things that I could fix if you cared to answer some of my questions.

      About the threat model, from what you said in the comments, the secret chat key is regenerated every time one of the peers logs out, without permanent user authentication independent from the server. Moreover, you said that secret chats are not mandatory. For me, that is trusting the server too much, because the threat model has to consider the largest access level possible.

      About the keys that should be generated by a PRNG, AES-IGE, the weak KDF, or employing the MAC key in the key generation, the burden of proof is on you. I point out what looks like dubious algorithm choices. If you think it is safe, you have to provide a mathematical proof. I won’t do your job in your place.

      • Although I may not grasp all aspects of the debate here, throwing out unexplained and opinionated criticism seems wrong. I doubt it is Telegram’s responsibility to provide proof against these criticisms.

        If the design is so flawed, why not say precisely how? I too can declare that such or such an algo is weak. Wouldn’t make me any more right about it.

        I’m no Telegram evangelist, but from an external point of view, I don’t see them being at fault. Mainly because I found answers to almost every criticism here.

        Sorry if this seems a tad aggressive, it really isn’t. Just very interested in knowing how or where their app may have weak spots.

      • The problem is that they do not use well known systems. If they had employed well known and audited algorithms, like AES-GCM instead of AES_IGE, or an Encrypt-Then-MAC instead of MAC-And-Encrypt, it would be a lot easier to say that it is safe, because the properties are very well known (until the next flaw found in these systems).
        If they use uncommon constructs without explaining precisely why they choose it (like saying that using AES-IGE is “a matter of taste”) nor proving why they are good choices, it is a lot more difficult to verify that the system is as secure as they pretend.
        The only thing we can say is that on these particular areas, we do not know if it is secure. And that incertitude is really dangerous in cryptosystems. Maybe it is safe. Maybe not. Maybe someone will win the contest. Maybe someone else will start working and silently attack the system in 6 months.
        Right now, the only ones able to prove that it is safe are the developers from Telegram, and even if they pointed out some errors in the article (which I fixed), they left a lot unanswered.

    • I’m not an expert in cryptography, just a user interested in security and privacy. You seem to put a lot of emphasis on the properties of “secret chats”. But using your app does not promote the use of secret chats at all. By default, all chats are “public”, so most people would not bother to find out what a “secret chat” is. If they do, they would find a notice informing them that those chats have two very desirable properties:
      – use end to end encryption (good!)
      – leave no trace in your servers (great! why would anyone want ANY chat leaving a trace in your servers?)

      and two features that invite to avoid them:
      – have a self destruct timer (WTF! self destruct? no, I want to keep my chats….)
      – do not allow forwarding (oh, no, I want to forward cat pictures!…)

      so I wouldn’t say the app exactly advocates for secret chats. Also, I don’t see the need to accept these drawbacks to be able to have end-to-end encryption, which is the basic thing to ask of a messaging system.

      • Note that self-destructing messages are actually an important feature. Chat history is a liability. If you need to encrypt your messages, i.e., you assume that the adversary is powerful enough to read the network, then you should also consider an adversary that will have access to your data at rest. That adversary could be someone who steals your phone, or a law enforcement officer who seizes your phone.
        If what you wish to protect is sensitive, keeping logs is dangerous. The best thing is to delete them (but you can never be sure that the person you talk with deletes them as well); otherwise, encrypt them on your disk (with a personal key that will never be used elsewhere), or, as a last resort (as is done with GPG and email), keep only the encrypted messages (and then the LE officer can compare the messages on your disk with the traffic they saw going over the network).

      • This is hardly the point of this whole discussion, but I just can’t seem to leave it alone:

        > If what you wish to protect is sensitive, keeping logs is dangerous. The best thing is to delete them …

        Why not just avoid keeping logs at all?

      • It depends on your threat model. Whenever someone transmits your messages (like Telegram’s servers), you should assume that they keep everything. That’s why encryption should be done client to client, not client to server.

        When I say that keeping logs is dangerous, I refer to the existing experience of hackers and activists who kept logs and were caught by law enforcement. Whenever they kept logs, their friends were caught as well.

        So, whenever you communicate with someone on encrypted channels, either you do not keep logs, or you prepare yourself to never give away the key to your encrypted logs, because they’re a liability.

        A computer is never safe. The next best safe is your brain. The best safe is your brain when you do not know anything.

  8. News – December 22nd, 2013 | cipherpal

  9. Fighting DISHFIRE: The State of Mobile, Cross-Platform, Encrypted Messaging | MissingM

  10. Hey there, I read (most of) the discussion and now I’m even more confused than before 😉 Are the “normal chats” (not “secret chats”) stored on the server? And are they stored in an encrypted form on the mobile? I was just wondering if the Threema app is dealing with these issues in a more professional way, but you probably cannot tell me because of the closed source code… Anyway, thanks a lot for the critical discussion!

    • It is mainly a question of security model. They can say that the messages are not stored, and that could be true, but you will never be sure, and it could change in the future (for example, due to police forcing them to). So, in the security model, you have to assume that whatever transited through their server can be stored, and that their server is basically your adversary, which is why people need verifiable end-to-end encryption.

      I do not know about Threema because there is not that much information available.

      • Which is really the crux of this whole conversation. WhatsApp, Threema and others don’t offer you a peek into their code/algorithms, and as a result have the benefit of *perceived* security.

        This is exactly why I like telegram… open for scrutiny, apparently open for discussion (and suggestions).

        That said, I’m not a huge fan of the cloud-storage aspects (my first question is: which cloud, and where?). The Swiss location of Threema is appealing to me.

        PS: Encryption was never a major issue in WhatsApp (pre-NSA docs). Ease of use was.

      • As a protocol geek, I see another problem that is worrying: their protocol does not include versioning and version negotiation. That means that any protocol modification will break a lot of stuff.

        So even if they fix vulnerabilities, people will have no way of knowing if the people they are talking to are using an unsafe version of the system.

  11. Great discussion about Telegram’s security.

    But now that WhatsApp has been bought by Facebook, and many are looking for an alternative, this blog post will be increasingly referenced and searched for. That’s how I arrived here, from a Twitter message.

    And after reading the whole discussion, the question remains: what’s the best alternative to WhatsApp, preferably an open source one?
    As a mass-market messaging app, is Telegram the right compromise between ease of use and security and privacy?

    • There is no silver bullet. Right now, there is no easy way to secure communication between lots of users, because security takes practice and discipline.

      There are solutions for message privacy, and in that regard, they should provide end-to-end encryption, authentication of the message recipient, and forward secrecy. Telegram’s end-to-end encryption is a needlessly weird protocol, the authentication (visual key fingerprint) did not prevent MITM at the beginning, and they do not have forward secrecy.

      Personally, I like to use TextSecure, but it works over SMS, and this puts off some users. Also, learning how key verification works is not obvious for some people.

    • I wrote that post before they published their contest, and that contest is rigged to ignore most exploit avenues. Also, I do not have much time to spend working on a broken solution with no assurance of being paid :p

      • It would be a poor sign of their trustworthiness if they did not pay you. If I saw a way to break their security model, I would try it. With my salary, I’d need to work several years to earn that much…

      • They have already paid $100,000 to a guy who found a server-side nonce-related weakness. It wasn’t what they were after with the $200k contest, but they decided to compensate the guy because it was still a vulnerability.

  12. You should try something instead of throwing words around… I am not well versed in security mechanisms, but it’s still 1000 times better than WA… and I would rather give my private data to Telegram than to Facebook / WhatsApp…

    It’s better to give different data to two companies than everything to one 😉 And Threema is not a choice for me, because they support only iOS and Android; this will break their neck…

    This open-source app supports more operating systems and devices… That’s why Telegram is my first choice! 🙂

    • So, your choice depends on the platform support, and on whether it is owned by Facebook or not. Well, keep thinking that way if it is okay for you, but when you have a real threat model, get back to me, and I’ll help you find a real solution.

      • At least it’s better than WhatsApp. And with all the positive news about WhatsApp users going to Telegram, the Telegram makers should know that the reason is that people don’t trust large data-mining and data-hungry companies anymore. If for some reason Telegram goes the same way, users will also drop it like a brick.
        So maybe it’s not 100% secure for now (and surely nothing is 100% secure in the digital world), but I think they will keep updating wherever possible to improve security.
        And like a user said before me: divide and conquer!
        Better not to have everything in one place.

      • Saying that using Telegram is better than WhatsApp because it has encryption shows no regard for the threat model. If you used WhatsApp in the first place, you did not care about security.
        Jumping on a new solution that is not safe, and hoping that the number of people joining will push the Telegram makers into securing everything, is a mistake. They will work on scaling, and they will add features to please users, which is fine. But once you have hundreds of thousands of users, it is very hard to update a crypto protocol, especially when there are multiple third-party apps implementing it.

  13. I think most people know that WhatsApp is not a secure messaging app. It is the whole thought of Facebook taking over WhatsApp and the assumption that the data will be used for their ads.

    This blog is quite informative on the security of Telegram and it is great to see Telegram’s reaction. It shows how dedicated they are to their product.

    Threat model or no threat model, in the end people use the app that is easiest to use, just like WhatsApp.

  14. It’s still not clear to me if there is always a MITM weakness (even if it’s only at the beginning). Say I wrote my own MTProto client that allowed only e2e encryption, and my peer and I both solely used this client and checked each other’s public keys via a side channel (any network other than Telegram’s).

    1. Can a secure authenticated connection be established without trusting Telegram at any point?

    2. Do we have to keep sharing new public keys via side-channels before each new secure chat with each other or are our keys more permanent like ssh/pgp so that we don’t keep verifying it with each chat?

    Yesterday I looked at webogram (which is unofficial and still in a very experimental phase) and it connected to Telegram’s servers (not even over SSL) to start generating keys, and I’m still wondering how much I have to trust Telegram even if I write my own client that uses their protocol and only uses their servers for transport.

    • End to end encryption with verification in a side channel would be safe now, but at the beginning, Telegram’s protocol was not safe against a MITM attack, because the key fingerprint could be manipulated by the server.

      I do not remember if you have to verify fingerprints at each new communication. But I know that it does not provide forward secrecy (i.e., session keys changing regularly to limit the damage from the compromise of one key).
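The idea of session keys “changing regularly” can be illustrated with a minimal symmetric hash ratchet. This is my own toy sketch, not Telegram’s or OTR’s actual mechanism: each message key is derived from a chain key, the chain key is then advanced through a one-way hash, and the old one is discarded, so compromising today’s chain key reveals nothing about keys already used.

```python
# Minimal symmetric hash ratchet, to illustrate forward secrecy.
import hashlib

def advance(chain_key: bytes):
    # Derive a key for one message, then advance the chain one-way.
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain_key = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain_key

ck = b"\x00" * 32  # initial shared secret (from a DH handshake in practice)
keys = []
for _ in range(3):
    mk, ck = advance(ck)  # use mk for exactly one message, then forget it
    keys.append(mk)

assert len(set(keys)) == 3  # every message gets a distinct key
```

Since SHA-256 cannot be reversed, an attacker who steals the current chain key cannot walk the chain backwards to recover earlier message keys; that is the property Telegram’s long-lived session keys do not give you.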

      • “End to end encryption with verification in a side channel would be safe now, but at the beginning, Telegram’s protocol was not safe against a MITM attack, because the key fingerprint could be manipulated by the server.”

        With “at the beginning” do you mean when you wrote the original article two months ago, and by now it’s fixed? Or do you mean that at the start of using your own Telegram-independent client software you still have a small window where you need to trust Telegram, so there’s always a way to attack clients from within the Telegram network, no matter what software they use and whether they only use side channels for key verification?

  15. You’re judging really aggressively an app that is Open Source. It’s not like we know anything about what’s behind WhatsApp.

    Telegram is faster than WhatsApp, it’s cheaper than WhatsApp, it at least provides security options (if you really want to be that secretive, just stop talking over social networks and sh*t at all), and the biggest advantage to common users (e.g. the masses, not that one naggy computer nerd 😉 ) is that it can be used on multiple devices. FINALLY.

    Sure, if you’re focusing solely on its security, Telegram probably has some flaws, but when you compare it to its biggest partner-in-business it has a lot more pros. In my opinion, anyway.

    • Being Open Source does not make something safe. Neither does being faster. Being present on multiple platforms greatly increases the attack surface.

      If Telegram had presented their system as yet another messaging app, I would not have cared. But when their main selling point is security (“Telegram keeps your messages safe from hacker attacks” on their front page), I care. Because people will ask me if it is safe. Because people may be put in dangerous situations by trusting an unsafe app. Because there are already a lot of other solutions that do that, do it better, and have been doing it for long enough to get lots of security audits.

      If the Telegram developers really cared that much about Open Source, they would have contributed to existing solutions.

      • Well, TextSecure is an app sending encrypted messages over SMS, ChatSecure is an IM app for OTR over XMPP, Cryptocat is an easy to use IM app working as a browser extension.

        There are a lot of projects in that space, not only the ones that appeared following Snowden’s revelations.

  16. very interesting article and even more interesting discussion, thanks for that!

    Personally, I switched to Threema; although it is not Open Source and I cannot be sure if it is really as secure as advertised, the app makes a more serious impression on me.

    • I cannot form an opinion on Threema either, since there is not that much info available. But their attitude and what we can already infer from their system look good.

    • End to end encryption (when you establish a key to directly encrypt messages with your recipient) can thwart that kind of attack. It is meant to protect against insecure transport.

      Generating random numbers is indeed a challenge, but one that is actively studied and for which there are countermeasures (more entropy sources, not choosing an unsafe PRNG, etc.), and there are algorithms that can avoid these problems entirely. Example: using deterministic DSA instead of plain DSA.

    • I do not think much of it since no info was released. The time when they are designing the system is the time they should talk to experts, publish stuff, get advice.

      As it is now, I am not informed enough to recommend it.

  17. Sorry, but I think it is silly to say about Threema that

    “But their attitude and what we can already infer from their system look good.”

    you’re praising their “attitude” and the information they give to the public – but if Telegram weren’t open source, you could have said the very same thing about Telegram.

    So I’m not sure why you seem to give Threema the benefit of the doubt although you don’t have any insight into how exactly they implement their system, but when it comes to Telegram you’re as critical as possible.

    Also when asked about alternatives you mention

    “TextSecure is an app sending encrypted messages over SMS, ChatSecure is an IM app for OTR over XMPP, Cryptocat is an easy to use IM app working as a browser extension.”

    So actually, you can’t mention any real alternative at all. Because people, millions of them, want to use software like WhatsApp / Telegram. Those millions of people will not use an “IM app working as a browser extension” or an “IM app for OTR over XMPP” any time soon.

    So here we have a free open-source app which tries to be an alternative to FB / WA, but you say, well, better use nothing at all. Or perhaps Threema, because their website looks trustworthy.

    • I am very sorry if I made you think that I recommend Threema. If you look a bit more in the comments, on Twitter or in the blog posts, you will see that I cannot recommend it, simply because I do not have enough information on their system.

      But regarding the attitude, they are completely different from Telegram, which launched with a lot of marketing, touting itself as the most secure system, dismissing opinions from crypto experts by waving the math PhDs on their team, and then proposing a security contest rigged to ignore all the common vulnerabilities present in that kind of application. So, I am allowed to say they have a better attitude than Telegram’s people.

      About the alternatives, I mentioned TextSecure, which I have studied and used for a long time, ChatSecure, which is based on proven technologies, and Cryptocat, which started unsafe but took the time to patch its vulnerabilities. Those are good to use right now.

      But they do not fit the use case? Well, tough luck. If there are no safe solutions for now, taking a “less unsafe” one will not make you safe. You’re going to sea in a boat full of holes. You’re like that CEO saying “I accept the risk” when I point out a remote code execution flaw in their web app.

      A lot of people switched to Telegram because they wanted to avoid Facebook. Well, it’s their choice. But it is clear they do not care about security and do not know how to evaluate a risk. The minimum I could do was to try to inform them, but I will not prevent them from willingly shooting themselves in the foot.

  18. What’s up?
    Even with WhatsApp, nobody(!) cared about this stuff… So, it’s quite cool to see one company actually caring about secure messaging 🙂

    • Well… I wrote that post months before people started to flee from WhatsApp. I cared then, I cared before, and a lot of other projects cared about secure messaging before Telegram came.

      But those other projects took the time to verify, break, fix, audit their systems, again and again before even starting to imagine they could talk publicly about it. I could cite some of them that almost nobody heard of that I would trust a lot more than Telegram.

  19. Telegram Messenger, le WhatsApp sécurisé | PowerJPM

  20. As far as I’m concerned, Telegram seems to follow at least some good open source values.

    Maybe you can unite. If you prove any flaw, maybe you can help Telegram improve.

    At least the idea seems good to me: an open source messaging service to replace the proprietary WhatsApp.

    Maybe it’s just an opinion from a strong open source believer 🙂

    • Open source is good, but not nearly enough when talking about security. And I am already working on other projects, helping here and there.

      There were already a lot of interesting projects before Telegram came.

    • Well, in the comments, they pointed out flaws, which I fixed in the article. But they repeatedly ignored important issues, attributing their use of MAC-and-Encrypt to a matter of taste, and saying that forward secrecy is present, you just have to disconnect and reconnect from the app…

      Hint: if a lot of cryptographers and security people were very skeptical, it is probably good to avoid it until the suspicious stuff is cleared up.

      • Sorry to ask, but who are these cryptographers and security people? I mean, did they really come out and say something? I just can’t really find them, as most of the arguments against Telegram are based on your writing.

      • Yes, it was extensively discussed on Hacker News by Moxie Marlinspike (sslsniff author, cryptographer at WhisperSystems) and Thomas Ptacek (CEO of Matasano Security), and their crypto challenge was also criticized by Moxie and by Taylor Hornby.

        I could also add to the list Matthew Green, Tony Arcieri and Jean Philippe Aumasson who shared their doubts on Twitter.

      • Thank you very much for the sources. I will look into them. But already now I can tell you that I really like the way all of this is going. A very lively discussion. I’m just wondering if Facebook and WhatsApp ever anticipated that this was coming.

  21. Great conversation. I believe it’s even part of the Open Source philosophy to talk about this stuff, in an open and controversial manner. I need to take a closer look at threat models. Even the release of the server code interests me, because what happens on the servers leaves me in doubt.

    • For me, threat modeling is the most important part. It determines if an app will do the job for you.

      Note that releasing the server’s code is no assurance that the deployed server corresponds to it. When you use end-to-end encryption, it is because you do not fully trust the server.

  22. You, Sir, did well! I have read the article and all the comments now, and I have to say I greatly appreciate your efforts in publishing the flaws and unclear points in those “crypto-messengers”. It’s sad, however, to see that the developer himself replies so harshly and somewhat aggressively to your article. They should be happy to see that there are people who take a deeper look and try to help them with their work. But well, people handle criticism differently.
    As for me, if I ever produce work in the crypto field, I would be happy to have people like you who dismantle my work and tell me how some things could be done better!
    So thanks for your work, Géal!

    • Thank you for your support! Working in cryptography is harsh, because it means whatever you do could be destroyed at any moment (even worse when people are already relying on it). But the criticism process is necessary to get better systems.

    • Thanks to Géal, the Telegram staff and everyone else for this quite useful discussion. I’d like to add my perspective:

      Let’s assume for a moment that no one ever claims the $200k and both developers and users agree that the system is secure. We still have the following problems:

      – A large intelligence organization (take your pick) would never claim the prize, but would simply silently collect the data, which would be worth immensely more. So I’m not sure there is any point in the prize at all. Also, do we have any assurance that they actually have the cash?
      – Telegram uses a persistent ID, your phone number, which in many countries is linked directly to your national ID.
      – Telegram stores your phone book server side, hence knowing your social graph. Combine this with knowing when your messages are sent, and you have very robust metadata.
      – Unless tools like Tor are used in combination with Telegram, the server will know your IP and hence your location, and I’m guessing 99% of users won’t be using Tor.
      – Secret chats are not used by default and I doubt most users will actually use them. While not a direct problem, everyone should consider unsafe default setups to be a major problem, especially for a system catering to even the greenest of users.
      – The system is centralized and would be hard to federate.
      – It’s an open question how Telegram would handle requests from law enforcement for user data. I find it unlikely that they would not comply.

      In total: in the best case they get your contacts (aka your social graph), location information and chat timing, and you get an unproven protocol on a centralized system where you have no control over your data other than assurances.

      Alternatively we have an app such as ChatSecure (Android/iOS):

      – Use any XMPP server, run your own if you wish. System is federated.
      – Can use Tor by default
      – Comes with OTR built in, enables end-to-end encryption automatically if possible
      – Open source (As are many XMPP servers)

      In total: fully open, time-tested protocols/crypto, and you don’t have to give up your data to anyone unless you explicitly choose to.

      So why would I use Telegram when better alternatives exist?

      • Great summary! This app should not be used by people who have real security needs. ChatSecure may still be hard to use for some, but it has come a long way and is really nice now.

  23. Hi Géal, how would OTR work in an asynchronous environment like mobile messaging, where there is no guarantee that your contact is actually online?
    I’m working on a messaging app, and I really appreciate the security that OTR is designed to give, but I don’t yet see how it would work if my contact’s app is not currently running…

    • This can be done with prekeys: user A pregenerates a number of DH keys and publishes them. Then, when A is offline, B chooses one of them, does the handshake to obtain a shared key, and sends A his part of the handshake followed by the encrypted message. A can later perform the handshake with the part B just sent, obtain the key, and use it to decrypt the message.

      It works, but there are significant challenges with that approach:

      • B has to keep the key for a potentially long time
      • B is not sure that the exchange is correct until A answers (beware of MITM)
      • if the prekeys are meant to be used only once, you need to keep track of their use and refresh them regularly
      • it is possible to have only one prekey, but then you have to make sure that the key sent by B will not put you in a small subgroup

      Such an asynchronous protocol has been studied and implemented by WhisperSystems in their Axolotl protocol.

      If you need help in implementing this, I could provide you with more pointers.
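The prekey flow described above can be sketched in a few lines. This is a deliberately simplified illustration with a toy finite-field DH group (tiny, insecure parameters chosen by me for readability), not the Axolotl protocol itself:

```python
# Toy prekey handshake: A publishes DH prekeys, B completes the
# handshake while A is offline. Parameters are NOT safe for real use.
import hashlib
import secrets

p = (1 << 127) - 1  # toy prime modulus, far too small for real crypto
g = 5

# 1. A pregenerates prekeys and "publishes" the public halves to the server.
a_secrets = [secrets.randbelow(p - 2) + 1 for _ in range(3)]
a_prekeys = [pow(g, x, p) for x in a_secrets]

# 2. While A is offline, B fetches one prekey and completes the handshake.
idx = 1
b_secret = secrets.randbelow(p - 2) + 1
b_public = pow(g, b_secret, p)
b_shared = hashlib.sha256(str(pow(a_prekeys[idx], b_secret, p)).encode()).digest()
# B sends (idx, b_public) along with the message encrypted under b_shared.

# 3. When A comes back online, it derives the same key from B's half.
a_shared = hashlib.sha256(str(pow(b_public, a_secrets[idx], p)).encode()).digest()
assert a_shared == b_shared
```

Note how the sketch exposes the caveats listed above: nothing here authenticates B, and the prekey at `idx` must be marked used and replaced, otherwise it becomes a long-lived key.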

      • Thank you for the quick response! I will definitely have a look at the Axolotl protocol as you proposed; an initial glance at that link looks promising!

        At the moment, I’m working on a messaging platform for Android and iOS, with an AMQP broker for the communication, and a Django+REST platform for user/group management. My vision is a platform where NO messaging content is stored in the backend, other than the messages in flight, queued in the broker.
        Some key concepts for me are:
        * Multi user chat
        * NO communication stored in the backend
        * End to end encryption
        * Trust (Am I sure that Alice is who she claims to be?)
        * Good balance between security requirements and usability

        I’m currently working on the base protocol between clients and brokers; that’s why I’m investigating different security scenarios (like OTR).

        If you have any good ideas or remarks for me, I would be very thankful! Furthermore, I’d be glad to learn from your security experience, to make this a rock solid application!

      • I’m sorry, you will find this answer disappointing, but: what problem are you solving that is not already solved in other systems? Most of the messaging systems out there will not store the messages, or at least not for long, simply because it is not very scalable.

        Also, the security models where you need end to end encryption are the ones where you cannot trust the server to destroy the messages. In that model, you assume that someone will watch everything passing through the server, or store them for later analysis.

        So making a system that does not store anything may not be that innovative.

        If you want to make something new, do not concentrate on the transport, but on user discovery. Most of the interesting problems are there right now:

        • how do you exchange public keys safely?
        • how do you verify the identity of the person you communicate with?
        • do you copy the user’s contacts list, or will they add other users manually?
        • in a group chat setup, how do you know the exact list of participants in a conversation? (that one is especially tricky, I’m thinking a lot about it)
        • how do you exchange a key efficiently in a multi user setup?
        • how do you handle byzantine adversaries?

        There are a lot of interesting problems to tackle 🙂
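To make the transcript-agreement problem above concrete, here is a small sketch of one common mitigation (my own illustration, not a complete solution): each participant keeps a running hash of the transcript as they saw it, and periodically comparing these digests over the secure pairwise channels exposes a server that showed different messages to different users.

```python
# Transcript digests: a cheap way to detect a forked conversation.
import hashlib

def transcript_digest(messages):
    # Length-prefix each field so distinct transcripts cannot collide
    # by shifting bytes between sender and body.
    h = hashlib.sha256()
    for sender, body in messages:
        h.update(len(sender).to_bytes(2, "big") + sender.encode())
        h.update(len(body).to_bytes(4, "big") + body.encode())
    return h.hexdigest()

alice_view = [("alice", "hi"), ("bob", "hello")]
bob_view   = [("alice", "hi"), ("bob", "hello")]
carol_view = [("alice", "hi")]  # the server dropped a message for Carol

assert transcript_digest(alice_view) == transcript_digest(bob_view)
assert transcript_digest(alice_view) != transcript_digest(carol_view)
```

This only detects divergence after the fact; deciding what to do next (and distinguishing a lossy network from an active adversary) is exactly the hard part mentioned above.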

      • Hi Géal, thank you for your reply! I’m not disappointed at all, you really got me thinking.

        Having read your comments, I’ve realised I should not focus on whether or not I store messages in the backend.

        Something “unusual” about the system I’m designing is that I would like to be able to have a mix of known users and anonymous* users inside a conversation.
        This raises the issue of not being able to verify the identity of every person in a conversation, but that should not be too much of a problem, since I can show every participant of the conversation whether there are anonymous* users participating in it. So in my scenario, there will be a trade-off between identity verification and functionality (being able to communicate with anonymous* users).

        A simplified technical view of the communication would be this:

        [USER A]——[USER B]
            |          |
            +—> [SYSTEM] <—+

        The HTTPS channel will be used to register a user account (and as you suggested, also to store the public keys for a user).
        As soon as the account is registered, the user will be able to connect via AMQP (over SSL) and further communication will happen through there.

        Can I even trust the AMQP SSL and HTTPS implementations? Shouldn’t my greatest concern be MITM attacks?
        How would you suggest I should design safe public key exchange?

        You asked how to know the exact list of participants in a conversation in a group chat setup… isn’t that covered by the fact that all user pairs in a conversation will have their own virtual end-to-end encrypted channel? e.g.:

        In a conversation with 3 users:
        [USER A] —— [USER B] (shared key 1)
        [USER A] —— [USER C] (shared key 2)
        [USER B] —— [USER C] (shared key 3)

        So even if a 4th user were to sneak in via the backend and start listening to the messages, he would not be able to decrypt them? Or am I missing something?

        Concerning the Byzantine adversaries, I’ve snooped a bit in this book, but I did not find a solution there that I could understand.
        Do you have any suggestions or countermeasures I could take in this situation?

        Thank you for your input!

        *Anonymous user: User B is not a friend of User A, so if B participates in a conversation with A, B will only see A’s username (and no further info) and A will only see B’s username (and no further info)

      • If you look at OTR’s design, there is a very interesting property: after the initial authentication (by SMP, fingerprint check, or fingerprint signature), there is nothing left to identify people. All the keys used depend only on the session. So having anonymous (or nearly anonymous) users is possible. Beware, though: the server has more info on anonymous users (mainly their IP).

        As I said previously, in an end to end security setup, you consider that the transport is unsafe (but oddly, that it is reliable enough to transmit some messages). So MITM attacks are a great problem, because they can happen between any user and the server, or in the server itself.

        To design a safe public key exchange, there are multiple solutions. You could have Trust On First Use with fingerprint verification, like SSH, or a public key infrastructure (like SSL certificates), or a directory of keys (GPG or DANE)… There are lots of ways to do it, and the significant challenges are discovery (how do I verify the identity of someone I never met?) and usability.
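The Trust On First Use option is simple enough to sketch in full. This is a minimal illustration of the SSH-style pinning idea (the function names and the in-memory store are mine; a real client would persist the pins, like SSH’s known_hosts file):

```python
# Trust On First Use (TOFU) pinning, sketched: the first key we see for
# a peer is pinned; any later change is treated as a possible MITM.
import hashlib

pins = {}  # peer id -> pinned fingerprint (persist this in practice)

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()[:16]

def check_peer(peer_id: str, public_key: bytes) -> str:
    fp = fingerprint(public_key)
    if peer_id not in pins:
        pins[peer_id] = fp  # first use: trust and pin
        return "pinned"
    if pins[peer_id] != fp:
        # Like SSH's "REMOTE HOST IDENTIFICATION HAS CHANGED" warning.
        raise RuntimeError("key changed: possible MITM")
    return "ok"
```

TOFU costs nothing in usability, but the weakness is visible in the code: the very first exchange is taken on faith, which is why out-of-band fingerprint verification still matters.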

        Knowing the exact list of participants in a multi-user setup is hard. You could have a system with 4 users A, B, C and D, where C and D do not know that the other is in the conversation. Everything will depend on how you exchange your keys. But doing it safely and efficiently is hard 🙂

        There is also the problem of the Byzantine adversary, who can send different messages to different users. So now you need a message transcript that users must agree on. There is also the adversary that can drop some messages from some users, or make someone appear disconnected to the others. There is a fine line between an unreliable network and an adversary actively messing with the transport.

        This is a really interesting subject, one that I am currently researching a bit. The main problem I have identified is that there are so many ways to attack a distributed system that it is hard to test the mitigations against all the different adversaries.

      • Thank you for the response! It seems there are a lot of pitfalls! Also, it sounds virtually impossible to protect against every type of attack.

        For now, I need to make some quick decisions to be able to move forward with my project (even more so because I’m the only developer on this project). That of course means making some concessions.
        As I understand it, secure key exchange is the “holy grail” in everything related to security, so I’ll start with that.

        For the key verification, I’ll provide an optional visual confirmation; “power users” will have the option to visually approve/decline keys, but regular users will get an auto-accept (otherwise, the impact on usability would be too big). Furthermore, I will have a look at Axolotl (OpenWhisperSystems), since their approach gives me a comfortable feeling.

        I “think” I've covered the most common dangerous scenarios that way, while still being able to provide a good user experience.
        What do you think? Would you consider such an approach a reasonably secure system? (I wouldn't want to get burned to the ground if I eventually launch this application.)

  24. Man, if this isn't one of the best conversations about how secure an app is, then I don't know ANYTHING about ANYTHING. And I know how to write, for sure.

    Putting aside your differences, it is very valuable that the crew from Telegram is taking the time to read your comments, reply, and even (if I'm not wrong) accept suggestions.

    Please keep up the good work. I'm not even good at programming, though I understand a little, and I'm certainly not a cryptographer, but your conversation is really educational.

    Thanks for all, folks!

  25. Just wanted to give a big THANKS for the post and the whole follow up.
    It is good to have all this discussion available for review to the general public.

    Géal, I understand your point, and thank you especially not only for your thoughts, but also for the follow-up with Telegram in the prolonged Q&A.

    And although it may not be the best attitude, I think that Telegram at least participated in the whole thread, making it richer.

  26. Hello Géal,
    I also thank you very much for this discussion. I know it takes a lot of time to read and answer with high-quality, understandable explanations; it is worth all the more.
    I never installed WhatsApp, and I used Facebook some years ago only to see what's going on there.
    About half a year ago I installed the app, but it is very hard to get other users to install it too, even though they could try it in the browser and I preinstalled their accounts.
    I think keeping our privacy will be a never-ending fight, fought by only a handful of users.

  27. I just installed the app.
    First, I have to say I use Mobiwol to revoke the app's permission to connect to the internet via WLAN or 3G at installation.
    Telegram is untrustworthy because of the permissions it requests. These are:
    Position via GPS
    Position via WiFi
    Read my contacts
    Change my contacts
    Read the phone status and identity
    Read the Google service account
    Search for accounts on the device
    Add or remove accounts

    Am I right that a high-security app should be able to store (encrypted) its own contacts (only the ones I create for use with this app)? And it should also be able to store all the necessary data without creating an account on the phone, so it would not need to search, read, add, or remove accounts.

    Why my position? Why read my short messages (SMS)?
    If I am not able to read the Telegram code sent via SMS and type it into the app myself, I don't understand security on smartphones at all! Should I delete the code Telegram sends via SMS immediately?

    Next thing:
    If I leave the app, it stays open and there is no way to end it. I can't find it under active processes, nor under processes in the cache. Is this a security feature?? I have to delete the cache and the data to end the app.

    Questions over questions.

    • Accessing the contacts is annoying for most. That's part of the reason why applications like Snapchat worked so well: you add people directly, and there is no obvious link with your contacts.

  28. Hi everyone,

    As the discussion is quite nice here, I would like to contribute a question:

    You say a Man In The Middle (MiM) attack is not possible, because the two chat partners can visually compare their keys. I use Telegram and I like the feature. So here you make the claim that it is impossible, or really unlikely, for the MiM to generate the same key for both sides. I would now like to show a scenario where the MiM is able to generate the same key for both chat partners, so the visual comparison fails. (I use the same notation as the German Wikipedia article on Diffie–Hellman, so you might want to look it up if something is unclear.)

    1. The second partner (Bob) initiates a Diffie–Hellman key exchange with the MiM. They agree on p (prime) and g (a primitive root of p).
    2. The MiM does a Diffie–Hellman (DH) exchange with the first partner (Alice), and this generates a key K_a.
    3. Now the key for Bob is created by the equation K_b = B^z mod p, where B = g^b mod p (b is randomly generated by Bob, and B is sent to the MiM by Bob) and z is a number the MiM can choose freely (normally random, but this is an attack).
    To trick your visual key comparison system, the MiM has to choose z so that it fulfills the equation:
    K_a = B^z mod p.
    At first look this is the discrete logarithm problem, which assures the security of DH, but not exactly: for the standard problem, B has to be a primitive root of p, and that is not guaranteed here! So which cases can come up:
    a) B is a primitive root of p: no problem, the equation is nearly impossible to fulfill.
    b) B^z mod p never generates the number K_a (especially if K_a >= p): even better, because the two keys will be different.
    c) B^z mod p generates only a small subset of the numbers from 0 to p-1, including K_a: in this case it could be possible to choose z such that K_a = K_b. Then both keys would be equal, and one couldn't detect a MiM attack with your visual comparison method.

    As I am not a math PhD and don't have too much time at the moment, I do not know the likelihood of those 3 scenarios. I would guess that c) is pretty unlikely, but it is just a guess.

    Please don't get me wrong. I use Telegram myself and like it, because it is open source and discussions like this can happen. I am also not a professional, and would be really glad to get some answers explaining why I am wrong or why c) is not likely. I simply thought this could be an interesting point to talk about.

    Have a nice day and greetings from Germany 🙂
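    As a side note on scenario c) above: an element B of the group always generates a subgroup whose order divides p-1 (Lagrange's theorem), and implementations typically defend against small subgroups by using a safe prime p = 2q+1 and validating the received value. A toy Python sketch (tiny numbers purely for illustration; real DH uses primes of 2048+ bits):

```python
def order(B, p):
    """Multiplicative order of B mod p (brute force; toy sizes only)."""
    x, n = B % p, 1
    while x != 1:
        x = (x * B) % p
        n += 1
    return n

p = 23                     # toy prime: p - 1 = 22 = 2 * 11
assert order(5, p) == 22   # 5 is a primitive root: full group, case a)
assert order(22, p) == 2   # p - 1 always has order 2: tiny subgroup, case c)
assert order(2, p) == 11   # every order divides p - 1

def accept_peer_value(B, p, q):
    """With a safe prime p = 2q + 1, an honest B = g^b lies in the
    order-q subgroup; reject 0, 1, p - 1 and anything outside it."""
    return B not in (0, 1, p - 1) and pow(B, q, p) == 1
```

    With that check, a B confined to a tiny subgroup (like 22 above, which only ever produces the values 1 and 22) is rejected before any key is derived, which closes off scenario c).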

      • Thanks for the fast answer and the information provided. I will read through it and leave another comment if there are more questions :)

  29. For your Edit 4: Wasn't that the problem which was noticed at the end of December and already fixed, with a nice and fluffy 100,000 bankroll for the “reminder”?

  30. Géal, for me, what's definitely “less trustworthy” than Telegram (who opened their code for everybody to see) is you, who spent a lot of time “analyzing” the code, or pretending to have analyzed it, and who, when asked to prove your statements and offered $200,000 for it, just went: “Nah, I have more interesting things to do”…

    Look how much time you spent on this blog alone; if you had dedicated that time to actually proving your statements, you would have a much better argument than just vague critique based solely on the luxury Telegram gave you by allowing you to look into their code, and one that you can't even back up by finding concrete problems or any security holes.

    On the other hand, you seem to have a soft spot for other (commercial) apps that didn't even allow you to look into their code, which seems a bit fishy coming from somebody who pretends to be “suspicious” about everything, even when he has the code in hand. “I cannot make an opinion on Threema either, since there is not that much info available. But their attitude and what we can already infer from their system look good.” lol. Look good?!

    • You do not trust me. Fair enough, I do not need your approval. But consider some things:

      • even if a flaw was found and Telegram needed to update their protocol, they would fail at it: they have too many users now (they concentrate only on scaling), so the update would take a long time; their protocol does not even address versioning or negotiation; and there are too many third-party apps to update
      • their contest is rigged to ignore the most common attacks, like oracle attacks or fuzzing the service. This has been discussed at length before. There is no point in participating in that contest
      • What I said about Threema was in comparison with Telegram: Telegram launched with lots of noise, touting their project as “very secure”, waving their math PhDs instead of justifying their weird crypto, and attacking cryptographers by saying they have commercial interests. In contrast, Threema has been very quiet, and apparently uses known and safe crypto. That's just it. If you bother to read the article and comments a bit, you will see that I said repeatedly that I cannot recommend them because I do not have enough info

      And when I said I have better things to do? Yes, it is true. I do not see why I would spend some time on a project I do not believe in, while there are hard problems to tackle (like multiparty OTR) and more interesting projects to follow (like TextSecure or ChatSecure).

      • – The attitude you're taking against Telegram is what makes me very suspicious about you (let alone the fact that you haven't even discovered one real problem yet). Open source projects are meant so everybody can help. You seem only interested in destroying the project, rather than helping it grow and improve. You keep coming up with new theories like “they can't update it even if they wanted to” and stuff like that, and it seems to me like you're only intending to create more and more uncertainty about it, rather than actually proving with a reasonable degree of certainty that there is an actual problem with it.

        – Telegram is now used by a lot of users. You don’t even have to rely on the contest. Telegram is already live and used by many users. Try to find holes in the system and hack into it. Show me how you can do it, instead of theorizing about something you haven’t even read about that well (which is evident from the number of things you modified in your article after you read more about Telegram).

        – You're telling me that Threema is better than Telegram because they made less noise when they launched?! Are you for real? Yes, you stopped short of literally recommending it, but you always had something good to say about them, even though you know pretty much nothing about their system. “Looks good”, “apparently safe”, “they made less noise” (what does that even have to do with what we're talking about)… etc. And by the way, I have never heard the Telegram guys talk about their PhDs. This is the first time I hear about it. Is this now becoming a personal issue for you with them?

        I didn’t ask you to help develop Telegram. I merely asked you to back up your theories with actual facts. With real problems you find. With real holes you can exploit, so the millions who are already using telegram know about them. If you can’t, then don’t be surprised if people didn’t take you seriously.

      • It seems I will not convince you of anything. As I said, since that initial analysis, which showed a number of problems that were never addressed (secret chats not present by default, verification not mandatory, no forward secrecy, weak key derivation function), I have not spent time on Telegram because I do not see the point. Those problems are enough to convince a lot of people to avoid them.

        What you want is a flashy vulnerability like “anyone can decrypt any message”. It does not work like that in cryptography. What could happen is someone finding a small info leak, very small, but enough to create an oracle attack. Or a replay attack in a MITM setup. But that would not be enough to convince you or those millions of users.

        People can use it, that is their decision. The goal of that article was to inform people about the weird crypto design, which was a red flag for me and a lot of other people (cf ). When someone comes up with a new scheme, the burden of proof is on them.

        About Threema: stop misrepresenting what I said. You want to know what looks safe from my point of view? They use NaCl, which is a well known and audited crypto library ( cf ), instead of an old block cipher mode and a weird MAC construct.

        If you still do not trust what I said, look at other critics:

        After that, I have nothing else to say, since nothing will convince you.

  31. I see. So the strike-outs of some edits are from Dec 17th or before. Btw, looking at the TextSecure atm.

    • Grrr, shoot me or my keyboard. Thanks. For the typos my keyboard hopefully leaves behind: I meant the new TextSecure 2.

  32. Hi Geal,
    two questions:
    1.) What is your professional training?
    2.) What is the name of the hacker that hacked the MTProto of Telegram?
    I think it is not necessary to develop a new encryption algorithm for more security.

    • I was trained as an engineer with various skills (CS, but also mechanical engineering, thermodynamics, electronics, etc). For some time, I worked under a cryptographer specialized in electronic money and smartcards. Then I spent some time designing and implementing strong authentication systems.

      I do not know the name of the one who exploited the Diffie-Hellman implementation of Telegram, but you may be able to contact him here:

  33. inSecurity 21 – You’ve Gotten a Telegram |

  34. The Telegram App and Its Fake Security | رووت جريس

  35. The Telegram App and Its Fake Security » طارق الجاسر

  36. If Telegram and WhatsApp (after the acquisition by Facebook) are the only two choices, which one would be the better one, even if they are both insecure? With the most important factor being: not saving your messages. We don't like someone creating a profile with a history of all our messages.

    We can't trust WhatsApp not to store any messages on servers anymore since the Facebook acquisition, I guess (and maybe even before that).

    • This is a bad setup. You are asking for a secure solution in a situation where it is impossible. To begin with, you can never trust any service to delete messages passing through it, so you need good end-to-end encryption and forward secrecy. Neither Telegram nor WhatsApp has it.

      So when you choose to use them despite the security flaws, you are not making a compromise or choosing the less bad solution. You are taking a risk, and that risk is not addressed by those solutions.

      Basically, if you have a real need for security, if you risk being killed or put in jail for what you say and whom you talk to, stay away from those apps, and test less user friendly but more secure solutions.
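      To illustrate what forward secrecy means in practice, here is a toy symmetric ratchet in the spirit of TextSecure's design (a simplified sketch under my own naming, not the actual protocol): each message key is derived from a chain key that is then discarded, so compromising today's state does not reveal yesterday's messages.

```python
import hashlib

def ratchet(chain_key: bytes):
    """Derive a one-time message key and the next chain key.
    The caller discards the old chain key after use: with only
    the current state, past message keys cannot be recomputed."""
    message_key = hashlib.sha256(chain_key + b"msg").digest()
    next_chain = hashlib.sha256(chain_key + b"chain").digest()
    return message_key, next_chain

state = b"\x01" * 32        # initial shared secret (toy value)
k1, state = ratchet(state)  # key for message 1, then step forward
k2, state = ratchet(state)  # key for message 2
assert k1 != k2             # every message gets a fresh key
```

      The security comes from the one-way hash: walking the chain forward is easy, walking it backward is not, which is exactly the property Telegram's long-lived keys lack.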

      • Man, Géal, it's good that you point out that anything that can't be proven secure, and where authentication is sloppy, is not secure enough for serious fucking trouble/business.
        Most people will not be able to take the effort that is associated with not trusting the transport.
        It's good that you persistently state your point.
        But actually many people seek a tradeoff.
        For example, they could assume:
        The NSA or other limitless secret entities will read it anyway, but they do not expect to be interesting to them.
        They just want a reasonable hint that not everything they put in will be used for data mining by the very operator of the network.
        The reason that I switched to Telegram for insecure to slightly more private messages is:
        People are too fucking stupid or lazy or careless to actually cope with security. They judge by the attitude of the operator.
        Such meta-facts about a company are actually usable as a weak indicator.
        Because every company until now that got sold to Facebook, or didn't give a shit about data, somehow let that show through in their attitude.
        It's a gut feeling thing.
        Could you imagine how this mechanism in humans can be technically explained?
        Should this be technically explained, if it isn't already? Because then the asshole companies could just use that as a template.
        How would you judge such a service for non-critical data, data that should still rather not be in multiple giant databases of companies that directly care about selling profiles, destroying society even more through exploiting brain patterns, if you were a complete layman in CS and crypto?

        (Punch me in the face textually if my post is very dumb, because I couldn't resist posting even in a state where I am maximally tired and likely to fail at thorough thinking.)

      • Yes, people have to make tradeoffs. But those tradeoffs must be informed. There are different services with different levels of security. If those services are honest, they will tell you in which cases they protect you, and what their limits are. So what happens when a service comes along and presents itself as safe against the NSA, while experts all around the world warn that their system may not be that safe? If people judge that Telegram is OK by that attitude, are not alerted by the experts' opinions, and do not dig further to make an informed decision, what can I do?

        This is confirmation bias: people see good marketing and form a good opinion of something; then, when someone criticizes that something, they do not agree, even though they are not really equipped to judge. They only seek what will reinforce their beliefs.

  37. I am a developer and I want to use the Telegram source code in my own application. How can I change the verification code message “Telegram code 23342” to use my application's name, e.g. “Live Code 2342”?

  38. Géal's replies are just amazing, very technical and VERY convincing. I already installed Threema. Very easy! Thanks, Géal, for your comments and for saving us! 😀

Comments are closed.