Programming VS Mathematics, and other pointless debates

I do not know who started this argument a few days ago. It feels like something coming from HN. Do you need to know mathematics to be a good programmer?

There are a lot of differing opinions. Maybe programming is a subbranch of mathematics, or programming is using mathematics. Or learning programming is closer to learning a new language. For me, saying that programming is about languages is like saying that literature is about languages. Sure, you need words to indicate concepts, some languages are better suited than others for that, and some concepts are better expressed in other languages. It is more like a hierarchy to me: philosophy formalizes concepts used by authors to write in common languages. Mathematics formalizes concepts used by programmers to create code in common languages.
But this is beside the point.

This debate sparks outrage, since it touches a central point of our education, and one that is often not taught very well. “Look, I do not use geometry while writing a loop, so maths are pointless for me”. A lot of developers will never learn basic algebra or logic and will never need it in their day job. And that’s okay.
Programming is not a single profession anymore. Each and every one of us has a different definition. A mechanical engineer working on bridges, another on metallic parts for cars and another one on plastic toys all have different needs, different techniques for their job, although the fundamental basis (evaluating breaking strength, time of assembly, production costs) is the same. That does not make one of these jobs worth more than the other.

The real problem is that we are still fighting among ourselves to define what our job is. The other pointless debate, about software being engineering, science or craft, is evidence of that. And it will stay hard to define for a long time.
We are in a unique position. Usually, when a new field emerges, either tinkerers launch it and good practices are later studied to turn it into engineering, or scientists create it, then the means of production become cheaper and crafters take over.
Computing was started by scientists, but ease of access gave crafters a good opportunity to take over. That does not mean research stopped when people started coding at home. So now, in a relatively new field (less than a century old), while we are still exploring, we have a very large spectrum of jobs and approaches, from the most scientific to the most artistic. And that is okay. More world views will help us get better at our respective jobs.

So, while you are arguing that the other side is misguided, unrealistic or lacking rigor, take time to consider where they come from. They do not have the same job, and that job can seem pointless to you, but they can be good at it, so there is probably something good you can learn from their approach. The only thing you should not forgive from the other side is a lack of curiosity.

Handling IO failure

Let’s talk a bit about IO programming. Filesystem, network, GUI… You cannot write useful code without doing IO these days. So why is it so damn hard to do in “safe” languages like Haskell?

Well, in Haskell, you isolate the unsafe parts to be able to reason safely about the rest of the code. What does “unsafe” mean in that context? Mostly, unsafe code is unpredictable, non-deterministic, and will fail for a number of reasons independent of your program’s code.

Surely, you might think “come on, it cannot be that hard, I’m doing HTTP requests everywhere in my code, and there is no problem”. Well, let’s see how a simple HTTPS request could fail:

  • you are disconnected from the network (you have no IP)
  • you are connected to the network, but it is not connected to anything
  • your network is connected to Internet, but routers are dropping packets
  • your network is connected to Internet, but very slow
  • your DNS server is unreachable
  • your DNS server drops your packets
  • your DNS server cannot parse your request
  • your DNS server cannot contact other servers to get your answer
  • your DNS server sends back an invalid response
  • your DNS server sends back an outdated response
  • you cannot reach the web server’s IP from your network
  • the web server drops your packets silently before connecting
  • the web server connects, then drops the connection silently
  • the web server rejects your connection
  • the web server cannot parse your packets, and so, rejects them
  • the web server times out
  • the server’s certificate is expired
  • the server’s certificate is not for the right subject name
  • the server’s certification chain has parts missing
  • the server’s certification chain has an unknown root
  • the server’s certificate was revoked
  • the packet’s signatures are invalid
  • your user agent and the server do not support the same versions of TLS
  • your user agent and the server do not have common cipher suites
  • the web server closes the connection without warning
  • the web server times out
  • the web server crashes
  • the web server cannot parse your HTTP request and rejects it
  • your request is too large
  • the web server parses your HTTP request correctly, but your cookie or OAuth token is invalid
  • the data you requested does not exist
  • the data you requested is elsewhere
  • your user agent does not support the MIME type of the data
  • the data requested is too large for a simple response
  • the server only sends a part of the data, then drops the connection
  • your user agent cannot parse the response
  • your user agent can parse the data, but some way or another, it is invalid

If you have worked with networks for some time, all of those have probably happened to you at some point (and the list is nowhere near exhaustive). What did you do in your code? Did you handle all these failures? Did you catch all the exceptions (see what I did there)? Do you check for all the error codes? Do you retry the requests where you need to?

Let’s face it: most of the network handling code out there is made of big chunks of procedural code, without much error handling, running in blocking mode. In most cases, it is OK. But that is sloppy programming.

Safe languages do not allow you to write sloppy code like that. So, we are stuck between correct but overly complex code, and simple but failing code. Choose your weapons.

Personally, I prefer isolating unsafe code in asynchronous systems like futures or actors. I know failure will happen, I know threads will crash, I know I will make errors in my code. That is ok, it happens. So, let’s write robust code to handle failure.

For network errors, I just want to know if the server is unreachable. It is ok, I will try later. If my request’s authentication is rejected, I want to know, and must handle that failure. Some errors should be handled seriously, others must be put in the “ok, it failed, whatever” bin.
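
To make this concrete, here is a minimal sketch of that triage in modern Java, using CompletableFuture as the asynchronous system. The class, helper names and retry policy are mine, not a prescription: transient network failures go to the “try later” bin, everything else surfaces to the caller, which must handle it.

import java.io.IOException;
import java.net.ConnectException;
import java.net.SocketTimeoutException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class FetchWithRetry {

    // Transient failures: the server is unreachable or too slow, retrying may work.
    static boolean isTransient(Throwable t) {
        return t instanceof ConnectException
            || t instanceof SocketTimeoutException;
    }

    static CompletableFuture<String> fetch(String url, int retriesLeft) {
        return CompletableFuture.supplyAsync(() -> doRequest(url))
            .exceptionallyCompose(t -> {
                Throwable cause = (t.getCause() != null) ? t.getCause() : t;
                if (isTransient(cause) && retriesLeft > 0) {
                    return fetch(url, retriesLeft - 1); // "ok, it failed, try later"
                }
                // Everything else (authentication rejected, invalid response...)
                // must be handled seriously by the caller.
                return CompletableFuture.failedFuture(cause);
            });
    }

    // The blocking, unsafe IO is isolated here; failures are wrapped for the future.
    static String doRequest(String url) {
        try {
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());
            return response.body();
        } catch (IOException | InterruptedException e) {
            throw new CompletionException(e);
        }
    }

    public static void main(String[] args) {
        fetch("https://example.com/", 3)
            .thenAccept(body -> System.out.println(body.length() + " bytes received"))
            .join();
    }
}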

Even if languages like Haskell make it harder to perform IO safely, they are still good tools, because they let you isolate unsafe parts, to let you reason on safe, deterministic parts of the program.

P.S.: OK, the network case was maybe a bit much. Surely, filesystem usage will be easier? Just for fun, let’s list some possible failures when you want to open a file for reading and writing:

  • invalid path
  • correct path, but you do not have the permission
  • correct path, you have the permission, but the file does not exist
  • you do not have the permission to create the file
  • you check that the file does not exist, then you try to create it, but someone already created it in the meantime (fun security bug, that one)
  • the file exists, but someone is already writing to it, and concurrent access is not allowed
  • you have the handle you want on the file, but someone just deleted it
  • not enough file descriptors available (oh, please, no)
  • someone is writing to the file at the same time
  • there are so many page faults that your program is slowed down
  • the disk is slow, blocking on a large operation
  • the disk is full
  • you checked that you have enough room, but someone is filling the disk at the same time
  • the file is on a networked file system, and it is slow
  • the file is on a remote disk, and the network just failed
  • hardware failure in the disk
  • hardware failure in the RAID array (and for some reason, redundancy was not enough, you lost the data)
  • the file is on a USB key that someone just unplugged

Basically, IO is a nightmare. Please wake me up now.

My ideal job posting

This post is a translation of something I wrote in French for Human Coders. They asked me what would be the ideal job post from a developer’s standpoint:

How would you write a job posting that attracts good developers? Most recruiters complain that finding the right candidates is an ordeal.

If you ask me, it is due to very old recruitment practices: writing job posts as if they were paper ads (where you pay by the letter), spamming them to as many people as possible, commissioning fishy head hunters… This worked in the past, but things have changed. A lot more companies are competing to recruit developers, and many more developers are available, so separating the wheat from the chaff is harder.

We can do better! Let’s start from scratch. For most candidates, a job posting is the first contact they’ll have with your company. It must be attractive, exciting! When I read “the company X is the leader on the market of Y”, I don’t think that they’re awesome, I just wonder what they really do.

A job posting is a marketing document. Its purpose is not to filter candidates, but to attract them! And how do you write a marketing document?

YOU. TALK. ABOUT. THE. CLIENT. You talk about his problems, his aspirations, and only then do you tell him how you will make things better for him. For a job posting, you must do the same. You talk to THE candidate. Not to multiple candidates, not to the head hunter or the HR department, but to the candidate. Talk about the candidate, and only the candidate. When a developer looks for a job, she doesn’t want to “work on your backend application” or “maintain the web server”. That is what she will do for you. This is what she wants:

  • get paid to write code
  • work on interesting technologies
  • enjoy a nice workplace atmosphere
  • learn
  • etc.

A good job posting should answer the candidate’s aspirations, and talk about the career path. Does this job lead to project management? Do you offer in-house training? Is there a career path for expertise in your company?

Do you share values with your candidate? I do not mean the values written by your sales team and displayed on the “our values” page of your website. I am talking about the values of the team where the candidate will end up. Do they work in pair programming? Do they apply test driven development? There is no best way to work, the job can be done in many ways, so you must make sure that the candidate will fit right in.

What problem are you trying to solve with your company? Do you create websites that can be managed by anyone? Do you provide secure hosting? Whatever your goal is, talk about it instead of talking about the product. I do not want to read “our company develops a mobile server monitoring tool”, because that is uninteresting. If I read “we spent a lot of time on call for various companies, and we understood that mobility is crucial for some system administrators, so we created a tool tailored for sysadmins on the move”, I see a real problem, a motivation to work, a culture that I could identify with.

By talking that way to the candidate, you will filter candidates on motivation and culture instead of filtering on skills. That can be done later, once you meet the candidate. You did not really think that a resume was a good way to select skilled people, did you?

Here is a good example of a fictional job posting, from a company aggregating news for developers, looking for a Rails developer:

“You are a passionate Ruby on Rails developer, you are proud of your unit tests, and you enjoy the availability of Heroku’s services? That’s the same for us!

At Company X, we love developers: all of our services are meant for their fulfillment. We offer a news website about various technologies, highly interesting training courses and a job board for startups.

Our employees benefit fully from these services, and give talks at conferences all around France. By talking directly with other developers, they quickly get an extensive knowledge of current technologies.

Our news website is growing fast, so we need help to scale it. The web app uses Heroku and MongoDB, with a CoffeeScript front end. Are you well versed in Rails optimization? If yes, we would love to talk with you!”

Note that I did not talk about years of experience, or a city. I want to hire a Rails developer, not necessarily a French developer. I want someone with experience in optimization, not someone over 27.

With such a job posting, you will receive a lot more interesting applications. Now, are you afraid that it will mean a lot more work? The solution in a future post: how to target candidates efficiently, and get away from job boards!

Looking for big architectures and adventurous sysadmins

Last week, I wrote a post about SSL optimization that showed how much interest people have in getting the absolute best performance from their web servers.

That post was just a small part of the ebook on SSL tuning I am currently writing. This ebook will cover various subjects:

  • algorithms comparison
  • handshake tuning
  • HSTS
  • session tickets
  • load balancers

I test a lot of different architectures, to provide you with tips directly adaptable to your system (like I did previously with Apache and Nginx). But I don’t have access to every system under the sun…

So, if you feel adventurous enough to try SSL optimization on your servers, please contact me, I would be happy to help you!

I am especially interested in large architectures (servers in multiple datacenters around the world, large load balancers, CDNs) and mobile application backends.

And don’t forget to check out the ebook, to be notified about updates!

5 easy tips to accelerate SSL

Update: following popular demand, the article now includes nginx commands :)

Update 2: thanks to jackalope from Hacker News, I added a missing Apache directive for the cipher suites.

Update 3: recent attacks on RC4 have definitely made it a bad choice, and ECDHE cipher suites got improvements.

SSL is slow. These cryptographic algorithms eat the CPU, there is too much traffic, it is too hard to deploy correctly. SSL is slow. Isn’t it?

HELL NO!

SSL looks slow because you did not even try to optimize it! By that standard, I could say that HTTP is too verbose, XML web services are verbose too, and all this traffic makes the website slow. But SSL can be optimized, like everything else!

Slow cryptographic algorithms

The cryptographic algorithms used in SSL are not all created equal: some provide better security, some are faster. So, you should choose carefully which algorithm suite you will use.

The default one for Apache 2’s SSLCipherSuite directive is: ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP

You can translate that to a readable list of algorithms with this command: openssl ciphers -v 'ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP'

Here is the result:

DHE-RSA-AES256-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA1
DHE-DSS-AES256-SHA      SSLv3 Kx=DH       Au=DSS  Enc=AES(256)  Mac=SHA1
AES256-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA1
DHE-RSA-AES128-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA1
DHE-DSS-AES128-SHA      SSLv3 Kx=DH       Au=DSS  Enc=AES(128)  Mac=SHA1
AES128-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA1
EDH-RSA-DES-CBC3-SHA    SSLv3 Kx=DH       Au=RSA  Enc=3DES(168) Mac=SHA1
EDH-DSS-DES-CBC3-SHA    SSLv3 Kx=DH       Au=DSS  Enc=3DES(168) Mac=SHA1
DES-CBC3-SHA            SSLv3 Kx=RSA      Au=RSA  Enc=3DES(168) Mac=SHA1
DHE-RSA-SEED-SHA        SSLv3 Kx=DH       Au=RSA  Enc=SEED(128) Mac=SHA1
DHE-DSS-SEED-SHA        SSLv3 Kx=DH       Au=DSS  Enc=SEED(128) Mac=SHA1
SEED-SHA                SSLv3 Kx=RSA      Au=RSA  Enc=SEED(128) Mac=SHA1
RC4-SHA                 SSLv3 Kx=RSA      Au=RSA  Enc=RC4(128)  Mac=SHA1
RC4-MD5                 SSLv3 Kx=RSA      Au=RSA  Enc=RC4(128)  Mac=MD5 
EDH-RSA-DES-CBC-SHA     SSLv3 Kx=DH       Au=RSA  Enc=DES(56)   Mac=SHA1
EDH-DSS-DES-CBC-SHA     SSLv3 Kx=DH       Au=DSS  Enc=DES(56)   Mac=SHA1
DES-CBC-SHA             SSLv3 Kx=RSA      Au=RSA  Enc=DES(56)   Mac=SHA1
DES-CBC3-MD5            SSLv2 Kx=RSA      Au=RSA  Enc=3DES(168) Mac=MD5 
RC2-CBC-MD5             SSLv2 Kx=RSA      Au=RSA  Enc=RC2(128)  Mac=MD5 
RC4-MD5                 SSLv2 Kx=RSA      Au=RSA  Enc=RC4(128)  Mac=MD5 
DES-CBC-MD5             SSLv2 Kx=RSA      Au=RSA  Enc=DES(56)   Mac=MD5 
EXP-EDH-RSA-DES-CBC-SHA SSLv3 Kx=DH(512)  Au=RSA  Enc=DES(40)   Mac=SHA1 export
EXP-EDH-DSS-DES-CBC-SHA SSLv3 Kx=DH(512)  Au=DSS  Enc=DES(40)   Mac=SHA1 export
EXP-DES-CBC-SHA         SSLv3 Kx=RSA(512) Au=RSA  Enc=DES(40)   Mac=SHA1 export
EXP-RC2-CBC-MD5         SSLv3 Kx=RSA(512) Au=RSA  Enc=RC2(40)   Mac=MD5  export
EXP-RC4-MD5             SSLv3 Kx=RSA(512) Au=RSA  Enc=RC4(40)   Mac=MD5  export
EXP-RC2-CBC-MD5         SSLv2 Kx=RSA(512) Au=RSA  Enc=RC2(40)   Mac=MD5  export
EXP-RC4-MD5             SSLv2 Kx=RSA(512) Au=RSA  Enc=RC4(40)   Mac=MD5  export

28 cipher suites, that’s a lot! Let’s see if we can remove the unsafe ones first. At the end of the list, you can see seven suites marked “export”: they comply with the old US policy on cryptographic algorithm exports. Those algorithms are utterly unsafe, and the US abandoned this restriction years ago, so let’s remove them:
'ALL:!ADH:!EXP:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2'.

Now, let’s remove the algorithms using plain DES (not 3DES) and RC2: 'ALL:!ADH:!EXP:!LOW:!RC2:RC4+RSA:+HIGH:+MEDIUM'. That leaves us with 16 algorithms.

It is time to remove the slow algorithms! To decide, let’s use the openssl speed command. Run it on your server, because depending on your hardware, you might get different results. Here is the benchmark on my computer:

OpenSSL 0.9.8r 8 Feb 2011
built on: Jun 22 2012
options:bn(64,64) md2(int) rc4(ptr,char) des(idx,cisc,16,int) aes(partial) blowfish(ptr2) 
compiler: -arch x86_64 -fmessage-length=0 -pipe -Wno-trigraphs -fpascal-strings -fasm-blocks
  -O3 -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -DL_ENDIAN -DMD32_REG_T=int -DOPENSSL_NO_IDEA
  -DOPENSSL_PIC -DOPENSSL_THREADS -DZLIB -mmacosx-version-min=10.6
available timing options: TIMEB USE_TOD HZ=100 [sysconf value]
timing function used: getrusage
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
md2               2385.73k     4960.60k     6784.54k     7479.39k     7709.04k
mdc2              8978.56k    10020.07k    10327.11k    10363.30k    10382.92k
md4              32786.07k   106466.60k   284815.49k   485957.41k   614100.76k
md5              26936.00k    84091.54k   210543.56k   337615.92k   411102.49k
hmac(md5)        30481.77k    90920.53k   220409.04k   343875.41k   412797.88k
sha1             26321.00k    78241.24k   183521.48k   274885.43k   322359.86k
rmd160           23556.35k    66067.36k   143513.89k   203517.79k   231921.09k
rc4             253076.74k   278841.16k   286491.29k   287414.31k   288675.67k
des cbc          48198.17k    49862.61k    50248.52k    50521.69k    50241.28k
des ede3         18895.61k    19383.95k    19472.94k    19470.03k    19414.27k
idea cbc             0.00         0.00         0.00         0.00         0.00 
seed cbc         45698.00k    46178.57k    46041.10k    47332.45k    50548.99k
rc2 cbc          22812.67k    24010.85k    24559.82k    21768.43k    23347.22k
rc5-32/12 cbc   116089.40k   138989.89k   134793.49k   136996.33k   133077.51k
blowfish cbc     65057.64k    68305.24k    72978.75k    70045.37k    71121.64k
cast cbc         48152.49k    51153.19k    51271.61k    51292.70k    47460.88k
aes-128 cbc      99379.58k   103025.53k   103889.18k   104316.39k    97687.94k
aes-192 cbc      82578.60k    85445.04k    85346.23k    84017.31k    87399.06k
aes-256 cbc      70284.17k    72738.06k    73792.20k    74727.31k    75279.22k
camellia-128 cbc        0.00         0.00         0.00         0.00         0.00 
camellia-192 cbc        0.00         0.00         0.00         0.00         0.00 
camellia-256 cbc        0.00         0.00         0.00         0.00         0.00 
sha256           17666.16k    42231.88k    76349.86k    96032.53k   103676.18k
sha512           13047.28k    51985.74k    91311.50k   135024.42k   158613.53k
aes-128 ige      93058.08k    98123.91k    96833.55k    99210.74k   100863.22k
aes-192 ige      76895.61k    84041.67k    78274.36k    79460.06k    77789.76k
aes-256 ige      68410.22k    71244.81k    69274.51k    67296.59k    68206.06k
                  sign    verify    sign/s verify/s
rsa  512 bits 0.000480s 0.000040s   2081.2  24877.7
rsa 1024 bits 0.002322s 0.000111s    430.6   9013.4
rsa 2048 bits 0.014092s 0.000372s     71.0   2686.6
rsa 4096 bits 0.089189s 0.001297s     11.2    771.2
                  sign    verify    sign/s verify/s
dsa  512 bits 0.000432s 0.000458s   2314.5   2181.2
dsa 1024 bits 0.001153s 0.001390s    867.6    719.4
dsa 2048 bits 0.003700s 0.004568s    270.3    218.9
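
By the way, if you only want to measure the algorithms we are considering, openssl speed accepts them as arguments (the exact names may vary with your OpenSSL version):

openssl speed rc4 des aes rsa2048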

We can remove the SEED and 3DES suites because they are slower than the others. DES was meant to be fast in hardware implementations but slow in software, so 3DES (which runs DES three times) is even slower. In contrast, AES can be very fast in software implementations, and even more so if your CPU provides specific instructions for AES. You can see that with a bigger key (and thus better theoretical security), AES gets slower. Depending on the level of security you need, you may choose different key sizes. According to the key length comparison, 128 bits might be enough for now.

RC4 is a lot faster than the other algorithms, and for a while it looked like the fast, safe choice, since SSL implementations took the known attacks on RC4 into account. But following recent research, RC4 is not safe enough anymore, and ECDHE cipher suites got a performance boost with recent versions of OpenSSL. So, let’s forbid RC4 right now!

So, here is the new cipher suite: 'ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM'

And the list of ciphers we will use:

DHE-RSA-AES256-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA1
DHE-DSS-AES256-SHA      SSLv3 Kx=DH       Au=DSS  Enc=AES(256)  Mac=SHA1
AES256-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA1
DHE-RSA-AES128-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA1
DHE-DSS-AES128-SHA      SSLv3 Kx=DH       Au=DSS  Enc=AES(128)  Mac=SHA1
AES128-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA1

6 cipher suites, that’s much more manageable. We could reduce the list further, but it is already in good shape for security and speed. Configure it in Apache with these directives:

SSLHonorCipherOrder On
SSLCipherSuite ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM

Configure it in Nginx with this directive:

ssl_ciphers ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM;

You can also see that the performance of RSA gets worse with key size. With the current security requirements (as of now, January 2013, if you are reading this from the future), you should choose an RSA key of 2048 bits for your certificate: 1024 bits is not enough anymore, and 4096 is a bit overkill.

Remember, the benchmark depends on the version of OpenSSL, the compilation options and your CPU, so don’t forget to test on your server before implementing my recommendations.

Take care of the handshake

The SSL protocol is in fact two protocols (well, three, but the first is not interesting for us): the handshake protocol, where the client and the server will verify each other’s identity, and the record protocol where data is exchanged.

Here is a representation of the handshake protocol, taken from the TLS 1.0 RFC:

      Client                                               Server

      ClientHello                  -------->
                                                      ServerHello
                                                     Certificate*
                                               ServerKeyExchange*
                                              CertificateRequest*
                                   <--------      ServerHelloDone
      Certificate*
      ClientKeyExchange
      CertificateVerify*
      [ChangeCipherSpec]
      Finished                     -------->
                                               [ChangeCipherSpec]
                                   <--------             Finished
      Application Data             <------->     Application Data

You can see that there are 4 messages exchanged before any real data is sent. If a TCP packet takes 100ms to travel between the browser and your server, the handshake is eating 400ms before the server has sent any data!

And what happens if you make multiple connections to the same server? You do the handshake every time. So, you should activate Keep-Alive. The benefits are even bigger than for plain unencrypted HTTP.

Use this Apache directive to activate Keep-Alive:

KeepAlive On

Use this nginx directive to activate keep-alive:

keepalive_timeout 100;

Present all the intermediate certification authorities in the handshake

During the handshake, the client will verify that the web server’s certificate is signed by a trusted certification authority. Most of the time, there are one or more intermediate certification authorities between the web server and the trusted CA. If the browser doesn’t know an intermediate CA, it must look for it and download it. The download URL for the intermediate CA is usually stored in the “Authority Information Access” extension of the certificate, so the browser will find it even if the web server doesn’t present the intermediate CA.

This means that if the server doesn’t present the intermediate CA certificates, the browser will block the handshake until it has downloaded them and verified that they are valid.

So, if you have intermediate CAs for your server’s certificate, configure your web server to present the full certification chain. With Apache, you just need to concatenate the CA certificates, and indicate them in the configuration with this directive:

SSLCertificateChainFile /path/to/certification/chain.pem

For nginx, concatenate the CA certificate to the web server certificate and use this directive:

ssl_certificate /path/to/certification/chain.pem;
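
For example, assuming your files are named server.crt and intermediate.crt (placeholder names, yours will differ), the chain files can be built with a simple concatenation:

# Apache: the chain file contains the intermediate CA certificates
cat intermediate.crt > /path/to/certification/chain.pem

# nginx: the server certificate comes first, then the intermediates
cat server.crt intermediate.crt > /path/to/certification/chain.pem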

Activate caching for static assets

By default, browsers will not cache content served over SSL, for security. That means that your static assets (JavaScript, CSS, pictures) will be reloaded on every call. That is a big performance hit!

The fix for that: set the HTTP header “Cache-Control: public” for the static assets. That way, the browser will cache them. But don’t activate it for sensitive content, because it should not be cached on disk by the browser.

In Apache, you can use this directive to enable Cache-Control:

<FilesMatch "\.(js|css|png|jpeg|jpg|gif|ico|swf|flv|pdf|zip)$">
    Header set Cache-Control "max-age=31536000, public"
</FilesMatch>

The files will be cached for a year with the max-age option.

For nginx, use this:

location ~ \.(js|css|png|jpeg|jpg|gif|ico|swf|flv|pdf|zip)$ {
    expires 24h;
    add_header Cache-Control public;
}

Update: it looks like Firefox ignores the Cache-Control and caches everything from SSL connections, unless you use the “no-store” option.

Beware of CDN with multiple domains

If you follow the usual performance tips, you have already offloaded your static assets (JavaScript, CSS, pictures) to a content delivery network. That is a good idea for an SSL deployment too, BUT there are caveats:

  • your CDN must have servers accessible over SSL, otherwise you will see the “mixed content” warning
  • it must have “Keep-Alive” and “Cache-Control: public” activated
  • it should serve all your assets from only one domain!

Why the last one? Well, even if multiple domains point to the same IP, the browser will do a new handshake for every domain. So, here, we must go against the common wisdom of spreading your assets over multiple domains to benefit from parallelized requests in the browser. If all the assets are served from the same domain, there will only be one handshake. This could be fixed to allow multiple domains, but that is beyond the scope of this article.

More?

I could talk for hours about how you can tweak your web server’s SSL performance. There is a lot more to it than these easy tips, but I hope they will be useful for you!

If you want to know more, I am currently writing an ebook about SSL tuning, and I would love to hear your comments about it!

If you need help with your SSL configuration, I am available for consulting, and always happy to work on interesting architectures.

By the way, if you want to have a good laugh with SSL, read “How to get a certificate signed by multiple certification authorities” :)

Testing Android push without a server

Adding push support to an Android app is quite easy, but it can be cumbersome to test it if the server part is not ready yet.

For this, you only need your API key, and the registration ID for your device (you can get it from a call to GCMRegistrar.getRegistrationId). Also, you should have already called GCMRegistrar.register from your app, with your sender id.
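
For reference, here is roughly what the registration looks like on the app side with the gcm.jar client library (SENDER_ID is a placeholder for your project number):

// inside an Activity, with gcm.jar on the classpath
GCMRegistrar.checkDevice(this);    // fails if the device does not support GCM
GCMRegistrar.checkManifest(this);  // optional: verifies the manifest setup
final String regId = GCMRegistrar.getRegistrationId(this);
if (regId.equals("")) {
    GCMRegistrar.register(this, SENDER_ID); // asynchronous, the ID arrives via a broadcast
}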

Then, to send a push message to your application, use this code:

import java.io.IOException;

import com.google.android.gcm.server.*;

public class Main {

    public static void main(String[] args) {
        // Server API key from the Google APIs console
        String apiKey = "…";
        // Registration ID of the target device
        String deviceId = "…";

        Sender sender = new Sender(apiKey);
        // Build a message with one key/value pair of payload data
        Message message = new Message.Builder().addData("data1", "hello").build();

        try {
            // Send the message, retrying up to 2 times
            Result result = sender.send(message, deviceId, 2);
            System.out.println("got result: " + result.toString());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Warning your users about a vulnerability

Somebody just told you about a vulnerability in your code. Moreover, they published a paper about it. Even worse, people have been very vocal about it.
What can you do now? The usual (and natural) reaction is to downplay the vulnerability, and try to keep it confidential. After all, publishing a vuln will make your users unsafe, and it is bad publicity for your project, right?

WRONG!

The best course of action is to communicate a lot. Here’s why:

Communicate with the security researcher

Yes, I know, we security people tend to be harsh, quickly flagging a vulnerable project as “broken”, “dangerous for your customers” or even “EPIC FAIL”. That is, unfortunately, part of the folklore. But behind that, security researchers are often happy to help you fix the problem, so take advantage of that!

If you try to silence the researcher, you will not get any more bug reports, but your software will still be vulnerable. Worse, someone else will undoubtedly find the vulnerability, and may sell it to governments and/or criminals.

Vulnerabilities are inevitable, like bugs. Even if you’re very careful, people will still find ways to break your software. Be humble and accept them, they’re one of the costs of writing software. The only thing that should be on your mind when you receive a security report is protecting your users. The rest is just ego and fear of failure.

So, be open, talk to security researchers, and make it easy for them to contact you (tips: create a page dedicated to security on your website, and provide a specific email for that).

Damage control of a public vulnerability

What do you do when you discover the weakness in the news, on Twitter, or anywhere else?

Communicate. Now.

Contact the researchers, contact the journalists writing about it and the users whining about the issue, write on your blog, on Twitter, on Facebook, on IRC, and tell them this: “we know about the issue, we’re looking into it, we’re doing everything we can to fix it, and we’ll update you as soon as it’s fixed”.

This will buy you a few hours or days to fix the issue. You can’t say anything else, because you probably don’t know enough about the problem to formulate the right opinion. What you should not say:

  • “we’ve seen no report of this exploited in the wild”, yet
  • “as far as we know, the bug is not exploitable”, but soon it will be
  • “the issue happens on a test server, not in production” = “we assume that the researcher didn’t also test on prod servers”
  • “no worries, there’s a warning on our website telling that it’s beta software”. Beta means people are using it.

Anything else you say in those few days might harm your project. Take it as an opportunity to cool your head, work on the vulnerability, test for regressions, find other mitigations, and plan the new release.

Once it is done, publish the security report:

Warning your users

So, now, you fixed the vulnerability. You have to tell your users about it. As time passes, more and more people will learn about the issue, so they might as well get it from you.

You should have a specific webpage for security issues. You may fear that criminals might use it to attack your software or website. Don’t worry, they don’t need it, they have other ways to know about software weaknesses. This webpage is for your users. It has multiple purposes:

  • showing that you handle security issues
  • telling which versions someone should not use
  • explaining how to fix some vulnerabilities if people cannot update

For each vulnerability, here is what you should display:

  • the title (probably chosen by the researcher)
  • the security researcher’s name
  • the CVE identifier
  • the chain of events:
    • day of report
    • dates of back and forth emails with the researcher
    • day of fix
    • day of release (for software)
    • day of deploy (for web apps)
  • the affected versions (for software, not web apps)
  • the affected components (maybe the issue only concerns specific parts of a library)
  • how it can be exploited. You don’t need to get into the details of the exploit.
  • The target of the exploit. Be very exhaustive about the consequences. Does the attacker get access to my contacts list? Can he run code on my computer? etc.
  • The mitigations. Telling people to update to the latest version is not enough. Some users will not update right away, because they have their own constraints. Maybe they have a huge user base and can’t update quickly. Maybe they rely on an old feature removed in the latest version. Maybe their users refuse to update because it could introduce regressions. What you should tell:
    • which version is safe (new versions with the fix, but also old versions if the issue was introduced recently)
    • if it’s open source, which commit introduced the fix. That way, people managing their own internal versions of software can fix it quickly
    • how to fix the issue without updating (example for a recent Skype bug). You can provide a quick fix, that will buy time for sysadmins and downstream developers to test, fix and deploy the new version.
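
As an illustration, a minimal entry on that security page could look like this (every detail below is invented):

Title: Stored XSS in the comment form
Reported by: J. Doe
CVE: CVE-2013-XXXX
Timeline: reported January 2, fixed January 5, version 1.4.2 released January 8
Affected: versions 1.2.0 to 1.4.1, comment rendering component only
Impact: an attacker can run JavaScript in the browser of anyone reading a comment; no server-side code execution
Mitigations: update to 1.4.2; the fix is in commit abc123; if you cannot update yet, disable comments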

Now that you have a good security report, contact the security researchers, journalists and users again, provide them with the report, and emphasize what users risked and how they can fix it. You can now comment publicly about the issue, write a blog post, send emails to your users to tell them how much you worked to protect them. And don’t forget the consecrated formula: “we take security very seriously and have taken measures to prevent this issue from happening ever again”.

Do you need help fixing your vulnerabilities? Feel free to contact me!

Worst Case Messaging Protocol

We already have lots of tools to communicate safely, and in some cases, anonymously. These tools work well for their purpose, but spying and law enforcement tools have also improved.

People are forced to decrypt their hard drives in airports. Rootkits are developed by police forces. DPI systems are installed on a country level. Using the graph of who communicates with whom, it is possible to map the whole hierarchy of an organisation, even if all communications are encrypted. How can we defend against that?

I believe that the analysis of organisations is one of the biggest threats right now. The ability to recognize an organization is the ability to destroy it by seizing a few members. That may be useful against terrorists, but these tools are too powerful, and they may fall into the wrong hands. Terrorists already know how to defend against such systems, but what about common people? Any company, association or union could be broken anytime.

I think we need tools to build resilient organizations. A compromised node shouldn’t bring down the whole system. Intercepted messages shouldn’t give too much information about the different parties. People can still find ways to send their messages safely (PGP, dead drops, handing USB keys from person to person), but we need a framework to build decentralized human systems.

Here is the solution I propose: a messaging protocol designed to protect identities and route around interception. This protocol is designed to work in the worst case (hence its name), when any communication could be intercepted, and even when we can only use offline communication.

Every message is encrypted, signed and nearly anonymized (if you already know the emitter or receiver, you can recognize their presence, but this can be mitigated). To introduce redundant routes in the network, a simple broadcast is used (the protocol is not designed for IP networks, but for human networks, with limited P2P communication). Any node can be used to transmit all the messages. The fundamental idea is that if you’re compromised, the whole organization can still work safely without you.

Prerequisites

Assumptions

  • Every communication can be intercepted
  • If a node is compromised, every message it can read or write is compromised
  • People can still create a P2P (IRL) network to communicate (e.g. giving a USB key to someone else)

Goals

  • Confidentiality
  • Anonymity
  • Authentication
  • Deniability? (I am just a messenger)
  • Untraceability
  • Can work offline (worst case)

For this protocol, we need an RNG, an HMAC function, a symmetric cipher and an asymmetric cipher. I have not yet decided which algorithms must be used. They could be specified in the message format, to permit future changes of algorithms.

Message format

  • nonce1
  • HMAC( sender’s public key, nonce1 )
  • expiration date
  • symcrypt( symmetric key, message )
  • nonce2
  • For each receiver:
    • asymcrypt( receiver1’s public key, symmetric key )
    • HMAC( receiver1’s public key, nonce2 )
    • asymcrypt( receiver2’s public key, symmetric key )
    • HMAC( receiver2’s public key, nonce2 )
  • asymsign( sender’s private key, whole message )

Here is how it works:
An identity is represented by one or more public keys (you can have a different public key for each of your contacts, it makes messages difficult to link to one person). They are not publicly linked to a name or email address, but the software layer could handle the complexity with a good enough UI. I do not cover key exchange: users must design their own way to meet or establish communications.

Every user is a hub: all messages are broadcast by all users. To prevent data explosion, expiration dates are used. Users recognize the messages addressed to them and the identity of the emitter using the HMAC of the public keys. All communications are encrypted, but I borrow ideas from convergent encryption to allow multi-recipient messages.

Handling the messages

Parsing

  • verify if one of my public keys is in the receivers by calculating the HMAC with nonce2
  • validate the signature (if I already know the emitter’s public key)
  • decrypt the symmetric key that was encrypted with my public key
  • decrypt the message

Creating messages

  • write the message
  • choose the emitter’s identity and public key
  • choose the receivers and the public keys used
  • choose an expiration date
  • generate two random nonces
  • generate a symmetric key
  • insert nonce 1 and the HMAC of nonce 1 and my key
  • insert the expiration date
  • encrypt the message with the symmetric key and insert it
  • insert nonce 2
  • for each receiver:
    • encrypt the symmetric key with the receiver’s public key and insert the result
    • insert the HMAC of nonce 2 and the receiver’s public key
  • sign the whole message with my private key and append the signature
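
To make the format concrete, here is a minimal Java sketch of message creation. The protocol does not fix the algorithms, so this sketch assumes AES-GCM for the symmetric cipher, RSA-OAEP for the asymmetric cipher, HMAC-SHA256 for the HMAC and SHA256withRSA for signatures (with RSA key pairs); it also omits the length framing a real implementation would need to parse messages back.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.security.*;
import java.util.List;
import javax.crypto.*;
import javax.crypto.spec.SecretKeySpec;

public class WcmpMessage {
    static final SecureRandom RNG = new SecureRandom();

    // HMAC keyed with a public key's encoded bytes, computed over a nonce.
    static byte[] keyTag(PublicKey key, byte[] nonce) throws GeneralSecurityException {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key.getEncoded(), "HmacSHA256"));
        return mac.doFinal(nonce);
    }

    // Builds one message; assumes RSA key pairs for the sender and all receivers.
    static byte[] build(KeyPair sender, List<PublicKey> receivers, byte[] payload,
                        long expiration) throws GeneralSecurityException, IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] nonce1 = new byte[16], nonce2 = new byte[16];
        RNG.nextBytes(nonce1);
        RNG.nextBytes(nonce2);

        KeyGenerator kg = KeyGenerator.getInstance("AES"); // fresh symmetric key per message
        kg.init(128);
        SecretKey symKey = kg.generateKey();

        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, symKey);

        out.write(nonce1);
        out.write(keyTag(sender.getPublic(), nonce1));                 // HMAC(sender's public key, nonce1)
        out.write(ByteBuffer.allocate(8).putLong(expiration).array()); // expiration date
        out.write(aes.getIV());
        out.write(aes.doFinal(payload));                               // symcrypt(symmetric key, message)
        out.write(nonce2);

        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        for (PublicKey receiver : receivers) {
            rsa.init(Cipher.ENCRYPT_MODE, receiver);
            out.write(rsa.doFinal(symKey.getEncoded())); // asymcrypt(receiver's public key, symmetric key)
            out.write(keyTag(receiver, nonce2));         // HMAC(receiver's public key, nonce2)
        }

        Signature sig = Signature.getInstance("SHA256withRSA"); // asymsign(sender's private key, whole message)
        sig.initSign(sender.getPrivate());
        sig.update(out.toByteArray());
        out.write(sig.sign());
        return out.toByteArray();
    }
}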

Sending messages

  • Take all the messages you received
  • filter them according to some rules:
    • remove expired messages
    • remove messages too large?
    • etc
  • add your messages to the list
  • give the message list to the next node in the network

This protocol is voluntarily simple and extensible. New techniques can be devised on top of it, like dummy receivers in a message, or dummy messages to complicate analysis. It is independent of the type of data transmitted, so any type of client could use this protocol (it sits at the same level as IP). Most of the logic will be in the client and the contact manager, to present an easy-to-use interface.

I truly hope we will never need this to communicate, but recent events have convinced me that I need to release it, just in case.

I am open to any critique or suggestion about this protocol. I will soon open a repository to host some code implementing it, and it will of course be open to contributions :)

Manage your libraries with symlinks on Windows

My Windows development environment is a bit complex. I work on multiple projects, at multiple versions, with different compiler environments, and with dependencies on different libraries and versions of these libraries.

Now, how do I specify where the compiler must search for the libraries, and which version to use? Most of the time, I will add it directly in the project configuration. Easy at first, but quickly, I find myself writing by hand the path to each library (and header) in each project and each configuration of this project. And then, writing by hand the different names of the libraries (mylibrary.lib, mylibraryd.lib, mylibrary-static.lib, mylibrary-MTD.lib, etc.).

And when I want to update to a new version of the library? If I’m lucky, I just have to change the library and header paths (in every project using the library). If not, I also have to change the names, because of the library developers’ naming conventions.

The first solution to these problems was to use a batch file to launch Visual Studio and MSYS, and set some environment variables in this file. I quickly ended up with one big file containing two environment variables (include path and lib path) per library, possibly more if there were some big changes in the library names. My Visual Studio configuration was cluttered with $(MYLIBRARY_LIBPATH), $(MYLIBRARY_INCLUDEPATH), $(MYLIBRARY_NAME). It is unreadable, and again, impossible to maintain.

My solution comes from the Unix world, where you have a correct organization for your development files:

  • one folder containing the subfolders include, bin and lib
  • library names including version, and a symlink (without the version number) to the latest version of the lib

Can I do that on Windows? YES \o/

Here is the trick: normal Windows shortcuts won’t work, but the mklink tool can create real symlinks. And Visual Studio will recognize those as files and folders while looking for libraries.

Now, how would I organize my development environment? I chose to use (and abuse) symlinks, to create include, lib and bin folders for each project and configuration, and use generic names for the libraries.

  • I create a folder containing include, lib and bin
  • in the include/ folder, I put symlinks to the header file or the subfolder of each library I will use in that project
  • in the lib directory, I create symlinks to the library versions I want, one symlink per static/dynamic, MT/MD, Debug/Release variant. I could also create one lib folder per static/dynamic, etc. A bit complex, but feasible (most of the time, I use only debug and release versions, so it’s still manageable).

With this setup, I only set the INCLUDE and LIB environment variables, and I use directly the library names I need.

Here is an example script I use to create different library folders for x86 and x64 libs:

echo "Building include and library directories for Windows %PLATFORM%"

@mkdir %PLATFORM%
@mkdir %PLATFORM%\include
@mkdir %PLATFORM%\lib

@mklink /D %PLATFORM%\include\boost %BOOST%\boost
@for %%i in (%BOOST%\lib\*.lib) do (mklink %PLATFORM%\lib\%%~ni.lib %%~fi)

@mklink /D %PLATFORM%\include\cpptest %CPPTEST%\include\cpptest
@for %%i in (%CPPTEST%\lib\*.lib) do (mklink %PLATFORM%\lib\%%~ni.lib %%~fi)

I set up the BOOST and CPPTEST environment variables in another file. Then, I launch Visual Studio from another script which includes it.
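
That other file just sets the library roots and the search paths, something like this (the paths are placeholders, adapt them to your disk layout):

set BOOST=C:\libraries\boost_1_52_0
set CPPTEST=C:\libraries\cpptest-1.1.2
set PLATFORM=x64

set INCLUDE=C:\dev\%PLATFORM%\include;%INCLUDE%
set LIB=C:\dev\%PLATFORM%\lib;%LIB%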

There may be better ways, and that system will evolve in the future, but I’m pretty comfortable with it right now :)

Depending on my needs, I may grab from the bottom of my disk the package manager I wrote back in school, and make a big solution to download, build and link libs and personal projects. But later, I have some procrastination planned right now.