A world without certificate authorities

When networks began to expand and people saw the need for secure communication, they designed complex systems based on public key cryptography, which worked more or less. The problem: how do you trust that the key a server sent you is the right one? How can you make sure that it is not somebody else trying to impersonate that website?

Multiple solutions were proposed, and the most promising was a public directory of domain names and associated public keys, maintained by a peer-to-peer network named KeyCoin. It looked better than so-called Web of Trust solutions, because everybody could agree on the correct key for a given domain. As long as nobody held 51% of the network, no change could happen without being validated by a lot of different peers. The network was maintained by 10,000 enthusiast system administrators who took their task very seriously (after all, the security of the whole system depended on their honesty), and nobody had enough computing power to take over the network.

After a while, people began using the system, since it was directly integrated in their browsers, but they did not want to run a node on the network themselves. It was too bothersome, and they could trust the administrators. Besides, they had to ask one of them to make any change. The whole process was a bit artisanal.

In the meantime, some people demonstrated the 51% attack on networks of reduced size, and that worried users. They wanted a safe system, not one relying only on those sysadmins, who could do anything. Who were they, anyway? Still, running the system remained too complex for non-technical people to do it themselves, so they did not worry enough. But some governments found that rewriting the truth of name/key matching was interesting. Maybe to catch pedophiles, terrorists, criminals. Or maybe to censor websites, I do not know, they told me it was for my own good.

Some smart person found a good solution: if controlling the whole system required owning 51% of it, the easiest way was to have a lot of machines, enough to counteract the sysadmins. That did not seem risky when people designed the system. Nobody could have enough computing power to take over the whole network, and there would be even more nodes every day.

Yet, that person got enough funding to install tens of thousands of machines and make them join the network. They even provided a nice enough interface for people and businesses to input their domain name and public key, as long as they paid some fees. The sysadmins welcomed the newcomer at first, since money coming into the system validated their ideas. After a while, they started worrying, since none of them could keep up with the computing power, but that company assured them it would never attain 51% of the network.

Other companies jumped on the bandwagon and started to profit from that new business opportunity. Governments started their own server farms to participate too. Problem: now that everybody (except the sysadmins) had a lot of computing power, nobody had enough to control the network entirely.

So they started making alliances. If a few major players work as a team, they can do whatever they want on the network. If one of them decided to try and replace a key on the ledger, the others could help it. Of course, once they began doing that, others wanted to participate. So they created a few rules for joining their club. First, you needed to have enough machines. That was a good rule, because it made for a big barrier to entry. You could not start as a small player. The other rules? You had to submit to an audit, performed by the other players. Yet another barrier to entry. And once they deemed you acceptable, you had to follow the requests of governments, which arbitrarily refused candidates.

Even with the big barriers to entry, a few hundred players came up, often backed by governments. Of course, they all ended up on the same team, doing whatever they wanted, as long as nobody complained, because any time one of them had something shady to do, all the others followed automatically.

Since building those big companies required money, they made their clients pay more and more, and to make that easier to accept, they provided “premium” options where they showed they trusted you more, since they took the time to phone your company and ask a few questions.

Some found that big system too centralized, too obedient to states, and decided to fork it. There are now separate public ledgers, but they do not come directly embedded in browsers; you need to integrate them yourself, and that’s bothersome. Also, most of those networks have a few hundred nodes at best.

From a nice, decentralized, home made system, we ended up with a centralized system controlled by corporations and governments.

Now let me tell you about the system I designed. It is based on a concept named the certificate, a cryptographically signed file that links a public key to a domain name. Now here’s the catch: a certificate represents a key, and is signed by another key, which is represented by another certificate, and so on and so forth, until a certificate that signs itself. That system is good, because you just have to embed the root certificate that your friend gives you, and you’ll be able to verify the keys of his websites, even if those keys change. And all this without even asking the public ledger, so it is a truly decentralized and more anonymous system! Nothing could go wrong with that, right?
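The chain walk described above can be sketched in a few lines. This is a toy model, not real X.509: the “signatures” here are HMACs keyed by the signer’s key, and every name (`Cert`, `make_cert`, the domains) is invented for illustration.

```python
# Toy model of certificate-chain validation. "Signing" is an HMAC keyed
# by the signer's key, standing in for a real public-key signature.
import hashlib
import hmac
from typing import NamedTuple, Optional

class Cert(NamedTuple):
    subject: str               # e.g. a domain name
    key: bytes                 # the key this certificate binds to the subject
    issuer: Optional["Cert"]   # None means self-signed (a root)
    sig: bytes

def sign(signer_key: bytes, subject: str, key: bytes) -> bytes:
    return hmac.new(signer_key, subject.encode() + key, hashlib.sha256).digest()

def make_cert(subject: str, key: bytes, issuer: Optional[Cert] = None) -> Cert:
    signer_key = issuer.key if issuer else key   # a root signs itself
    return Cert(subject, key, issuer, sign(signer_key, subject, key))

def validate(cert: Cert, trusted_root: Cert) -> bool:
    # Walk up the chain, checking each signature, until we reach the root.
    while cert.issuer is not None:
        if cert.sig != sign(cert.issuer.key, cert.subject, cert.key):
            return False
        cert = cert.issuer
    return cert == trusted_root

root = make_cert("friend-root", b"root-key")
site = make_cert("example.com", b"site-key", issuer=root)
assert validate(site, root)
```

Note that validation only needs the embedded root: no ledger lookup, exactly the property (and the trust assumption) described above.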


Programming VS Mathematics, and other pointless debates

I do not know who started this argument a few days ago. It feels like something coming from HN. Do you need to know mathematics to be a good programmer?

There is a lot of differing opinions. Maybe programming is a subbranch of mathematics, or programming is using mathematics. Or learning programming is closer to learning a new language. For me, saying that programming is about languages is like saying that literature is about languages. Sure, you need words to indicate concepts, some languages are better suited than others for that, and some concepts are better expressed in other languages. It is more like a hierarchy to me: philosophy formalizes concepts used by authors to write in common languages. Mathematics formalize concepts used by programmers to create code in common languages.
But this is besides the point.

This debate sparks outrage, since it touches a central point of our education, and one that is often not taught very well. “Look, I do not use geometry while writing a loop, so maths are pointless for me”. A lot of developers will never learn basic algebra or logic and will never need it in their day job. And that’s okay.
Programming is not a single profession anymore. Each and every one of us has a different definition. A mechanical engineer working on bridges, another on metallic parts for cars and another one on plastic toys all have different needs, different techniques for their job, although the fundamental basis (evaluating breaking strength, time of assembly, production costs) is the same. That does not make one of these jobs worth more than the other.

The real problem is that we are still fighting among ourselves to define what our job is. The other pointless debate, about software being engineering, science or craft, is evidence of that. And it will stay hard to define for a long time.
We are in a unique position. Usually, when a new field emerges, either tinkerers launch it and, later, good practices are studied to turn it into engineering, or scientists create it, then means of production become cheaper and crafters take over.
Computers were started by scientists, but the ease of access gave crafters a good opportunity to take over. But that does not mean research stopped when people started coding at home. So now, in a relatively new field (less than a century), while we are still exploring, we have a very large spectrum of jobs and approaches, from the most scientific to the most artistic kind. And that is okay. More world views will help us get better at our respective jobs.

So, while you are arguing that the other side is misguided, unrealistic or unrigorous, take time to consider where they come from. They do not have the same job, and that job can seem pointless to you, but they can be good at it, so there is probably something good you can learn from their approach. The only thing you should not forgive from the other side is a lack of curiosity.

How to choose your secure messaging app

Since WhatsApp announced its acquisition by Facebook, a lot of people have started switching to alternatives, trying to escape from Facebook. Some of them then discovered my article about Telegram, and a common answer was “hey, at least, it is better than WhatsApp, because it is open source, faster and it has encryption”.

This is a very bad way to decide what application you should use. If you choose a secure messaging app, it must be because you need it, not just because you want to avoid Facebook.

Those are not good enough requirements:

  • independent from Facebook
  • fast
  • multi-platform
  • open source

Yes, even open source, because it does not magically make software safe.

So, what are good requirements? Well, I already have a list of requirements a secure messaging app should meet to be considered. If an app does not follow those requirements, it may not be a good idea to use it.

But it still does not mean the app will fit your use case. So you must define your use case:

  • Why do you need it?
  • With whom will you communicate?
  • Who is the adversary?
  • What will happen if some of your information is revealed to the adversary?
  • Does it need to be always available?
  • For how long will it be used?

This is part of what I mean when I insist on having a threat model: you cannot choose correctly if you do not know the risks.

Here are a few examples that you could consider.

The activist in a protest

The activist must be able to communicate quickly in the crowd. Identifying info might not be the most important part, because she can use burner phones (phones that will be abandoned after the protest). The most important feature is that it should always be available. Phone networks have often been used to disrupt activist communication, so a way to send messages over WiFi or Bluetooth might be useful. The messages can be sent to a lot of different people, so being able to identify them might be important. But if the group is large enough to be infiltrated easily, then having no way to identify people is crucial.

Being able to send photos is important, because they might be the only proof of what happened in the protest. Here, I have in mind the excellent ObscuraCam app, which is able to quickly hide the faces of people in photos before sending them.

The application should not keep logs, or provide a way to quickly delete them, or encrypt them by default, because once someone is caught, the police will look through the phone.

The crypto algorithms and protocols should be safe and proven for that use case, because the adversaries will have the resources to exploit any flaw.

No need for a good update system if the devices will be destroyed after use.

The employee of a company with confidential projects

The adversaries here are other companies, or even other countries. The most important practice here is “need to know”: reduce the number of people knowing the confidential information. That means the set of people communicating with each other is small, and you can expect that they have a means of exchanging information securely (for example, to verify a public key).

Identifying who talks with whom is not really dangerous, because it is easy to track the different groups in a company. You may be confident enough that the reduced group will not be infiltrated by the adversary. The messages should be stored, and ideally be searchable. File exchange should be present.

There could be some kind of escrow system, to reveal information if you have a certain access level. Authentication is a crucial point.

The crypto may be more fun in this case, because the flexibility needed can be provided by some systems, like identity-based encryption. Enterprise policies might be able to force regular updates of the system, so that everybody has the same protocol version at the same time, and any eventual flaw will be patched quickly.

The common user

It is you, me, anyone wanting to exchange private messages with friends or family. Here, trying to protect against the NSA is futile, because most of the contacts will not have the necessary training. Trying to hide the contact list from Facebook is futile too: even if someone protects the information, one of the contacts may not. The adversaries you should consider here: crooks, pirates, anyone who could exploit the private messages for criminal ends (stealing bank info, blackmailing, sending malware, etc.).

An application fitting this use case should encrypt messages, preferably end to end, to limit problems when the exchange server is compromised. The service might not provide any expectation of anonymity. Messages should be stored, but encrypting them is a good option, in case the device is lost or stolen.

The crypto does not need to be very advanced, but it should use common, well known designs.

There should be a good update system, a way to negotiate protocol versions (and forbid some unsafe versions), because you will never be sure that everybody has performed all the needed updates.

Your use case here

Those were some common situations, for which some solutions exist, but there are a lot more possible use cases. If you are not sure about yours and need help defining your threat model, do not hesitate to ask for help, and do not jump on a solution because the marketing material says it is safe.

A good security solution will not only tell you what is protected, and how, but also what is not protected, and the security margins you have. It will also teach you the discipline you need to apply to get the most out of it.

The problem with meritocracy

Every time people discuss the hacker community and its diversity, I see someone waving the “meritocracy” argument. “It is not our fault those minorities are not well represented; if they knew more stuff or did more stuff, they would have a better status”.

It is easy to see how that argument would be flawed, as meritocracy is a power structure, and whenever a power structure is created, after some time it tends to reinforce its own community. But that is not my point right now.

I realized that the idea of meritocracy is so deeply ingrained in the hacker mindset that we lost sight of what was important. I can see how that idea is appealing. Once you prove you know stuff, people will recognize you, and that will be enough to motivate you to learn. Except it is not. The meritocracy is just another way to exclude people. Once you consider someone’s status by how much you perceive they know, things go downhill.

Some are good at faking knowledge. Some know their craft, but do not talk that well. Some are not experts, but have good ideas. Some would like to learn without being judged. Every time you dismiss someone’s opinion because of their apparent (lack of) knowledge, every time you favor someone’s opinion because of their apparent knowledge, you are being unscientific and unwelcoming. You are not a hacker, you are just a jerk.

Somewhere along the way, people got too hung up on meritocracy, and forgot that you hack for knowledge and for fun, not for status. It is all about testing stuff, learning, sharing what you learned, discussing ideas and helping others do the same, whatever their skills or their experience. Status and power structures should have nothing to do with that.

Guess what? I pointed out that bad behaviour, but I am guilty of it too. I have to constantly keep myself in check, to avoid judging people instead of judging ideas. That’s alright. Doing the right thing always requires some effort.

Telegram, AKA “Stand back, we have Math PhDs!”

Here is the second entry in our series about weird encryption apps, this time about Telegram, which got some press recently.

According to their website, Telegram is “cloud based and heavily encrypted”. How secure is it?

Very secure. We are based on a new protocol, MTProto, built by our own specialists, employing time-tested security algorithms. At this moment, the biggest security threat to your Telegram messages is your mother reading over your shoulder. We took care of the rest.

(from their FAQ)

Yup. Very secure, they said it.

So, let’s take a look around.

Available technical information

Their website details the protocol. They could have added some diagrams, instead of text-only, but that’s still readable. There is also an open source Java implementation of their protocol. That’s a good point.

About the team (yes, I know, I said I would not do ad hominem attacks, but they insist on that point):

The team behind Telegram, led by Nikolai Durov, consists of six ACM champions, half of them Ph.Ds in math. It took them about two years to roll out the current version of MTProto. Names and degrees may indeed not mean as much in some fields as they do in others, but this protocol is the result of thoughtful and prolonged work of professionals.

(Seen on Hacker News)

They are not cryptographers, but they have some background in maths. Great!

So, what is the system’s architecture? Basically, a few servers everywhere in the world, routing messages between clients. Authentication is only done between the client and the server, not between clients communicating with each other. Encryption happens between the client and the server, but not using TLS (some home made protocol instead). Encryption can happen end to end between clients, but there is no authentication, so the server can perform a MITM attack.

Basically, their threat model is a simple “trust the server”. What goes around the network may be safely encrypted, although we don’t know anything about their server to server communication, nor about their data storage system. But whatever goes through the server is available in clear. By today’s standards, that’s boring, unsafe and careless. For equivalent systems, see Lavabit or iMessage. They will not protect your messages against law enforcement eavesdropping or server compromise. Worse: you cannot detect MITM between you and your peers.

I could stop there, but that would not be fun. The juicy bits are in the crypto design. The ideas are not wrong per se, but the algorithm choices are weird and unsafe, and they take the most complicated route for everything.

Network protocol

The protocol has two phases: the key exchange and the communication.

The key exchange registers a device to the server. They wrote a custom protocol for that, because TLS was too slow and complicated. That’s true, TLS needs two roundtrips between the client and the server to exchange a key. It also needs x509 certificates, and a combination of a public key algorithm like RSA or DSA, and possibly a key exchange algorithm like Diffie-Hellman.

Telegram greatly simplified the exchange by requiring three roundtrips, using RSA, AES-IGE (some weird mode that nobody uses), and Diffie-Hellman, along with a proof of work (the client has to factor a number, probably a DoS protection). Also, they employ some home made function to generate the AES key and IV from nonces generated by the server and the client (server_nonce appears in plaintext during the communication):

  • key = SHA1(new_nonce + server_nonce) + substr (SHA1(server_nonce + new_nonce), 0, 12);
  • IV = substr (SHA1(server_nonce + new_nonce), 12, 8) + SHA1(new_nonce + new_nonce) + substr (new_nonce, 0, 4);
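Transcribing those two formulas into Python makes the structure easier to check. This is a stdlib-only sketch of the derivation as quoted above; the 32-byte `new_nonce` and 16-byte `server_nonce` sizes used in the example are assumptions taken from the protocol description.

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def derive_key_iv(new_nonce: bytes, server_nonce: bytes):
    # key = SHA1(new_nonce + server_nonce) + substr(SHA1(server_nonce + new_nonce), 0, 12)
    key = sha1(new_nonce + server_nonce) + sha1(server_nonce + new_nonce)[:12]
    # IV = substr(SHA1(server_nonce + new_nonce), 12, 8) + SHA1(new_nonce + new_nonce)
    #      + substr(new_nonce, 0, 4)
    iv = sha1(server_nonce + new_nonce)[12:20] + sha1(new_nonce + new_nonce) + new_nonce[:4]
    return key, iv

key, iv = derive_key_iv(b"\x01" * 32, b"\x02" * 16)
assert len(key) == 32 and len(iv) == 32   # AES-256 key and a two-block IGE IV
```

Everything is derived from the two nonces with plain SHA1, and `server_nonce` travels in plaintext, so the secrecy of the result rests entirely on `new_nonce`.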

Note that AES-IGE is not an authenticated encryption mode. So they verify the integrity. By using plain SHA1 (nope, not a real MAC) on the plaintext. And encrypting the hash along with the plaintext (yup, pseudoMAC-Then-Encrypt).

The final DH exchange creates the authorization key that will be stored (probably in plaintext) on the client and the server.

I really don’t understand why they needed such a complicated protocol. They could have made something like: the client generates a key pair, encrypts the public key with the server’s public key, sends it to the server with a nonce, and the server sends back the nonce encrypted with the client’s public key. Simple and easy. And this would have provided public keys for the clients, for end-to-end authentication.
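A minimal sketch of that simpler flow follows. The “encryption” here is a placeholder XOR stream, not real public-key crypto, and in this toy a key pair collapses to a single secret; it only illustrates the two messages exchanged.

```python
# Toy sketch of the simpler exchange proposed above. toy_encrypt is a
# placeholder (XOR with a hash-derived stream), NOT real RSA: in this
# model a "key pair" collapses to one shared secret per party.
import hashlib
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR with the same stream is its own inverse

server_key = os.urandom(32)   # stand-in for the server's key pair
client_key = os.urandom(32)   # stand-in for the client's fresh key pair
nonce = os.urandom(16)

# Client -> server: the client's public key encrypted for the server, plus a nonce.
msg1 = toy_encrypt(server_key, client_key) + nonce

# Server -> client: the nonce encrypted for the client, proving the server
# decrypted msg1 and now knows the client's key.
received_key = toy_decrypt(server_key, msg1[:32])
msg2 = toy_encrypt(received_key, nonce)

assert toy_decrypt(client_key, msg2) == nonce  # client key registered in one round trip
```

One round trip, and the server ends up holding a public key per client that could later serve for end-to-end authentication.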

About the communication phase: they use some combination of server salt, message id and message sequence number to prevent replay attacks. Interestingly, they have a message key, made of the 128 lower order bits of the SHA1 of the message. That message key transits in plaintext, so if you know the message headers, there is probably some nice info leak there.

The AES key (still in IGE mode) used for message encryption is generated like this:

The algorithm for computing aes_key and aes_iv from auth_key and msg_key is as follows:

  • sha1_a = SHA1 (msg_key + substr (auth_key, x, 32));
  • sha1_b = SHA1 (substr (auth_key, 32+x, 16) + msg_key + substr (auth_key, 48+x, 16));
  • sha1_c = SHA1 (substr (auth_key, 64+x, 32) + msg_key);
  • sha1_d = SHA1 (msg_key + substr (auth_key, 96+x, 32));
  • aes_key = substr (sha1_a, 0, 8) + substr (sha1_b, 8, 12) + substr (sha1_c, 4, 12);
  • aes_iv = substr (sha1_a, 8, 12) + substr (sha1_b, 0, 8) + substr (sha1_c, 16, 4) + substr (sha1_d, 0, 8);

where x = 0 for messages from client to server and x = 8 for those from server to client.
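The derivation above translates directly to Python (stdlib only; `substr` mirrors the notation of the spec excerpt). Note that it is fully deterministic: the same `auth_key` and `msg_key` always yield the same key and IV.

```python
# Direct transcription of the aes_key / aes_iv derivation quoted above.
import hashlib

def substr(b: bytes, start: int, length: int) -> bytes:
    return b[start:start + length]

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def derive_aes_key_iv(auth_key: bytes, msg_key: bytes, x: int):
    sha1_a = sha1(msg_key + substr(auth_key, x, 32))
    sha1_b = sha1(substr(auth_key, 32 + x, 16) + msg_key + substr(auth_key, 48 + x, 16))
    sha1_c = sha1(substr(auth_key, 64 + x, 32) + msg_key)
    sha1_d = sha1(msg_key + substr(auth_key, 96 + x, 32))
    aes_key = substr(sha1_a, 0, 8) + substr(sha1_b, 8, 12) + substr(sha1_c, 4, 12)
    aes_iv = (substr(sha1_a, 8, 12) + substr(sha1_b, 0, 8)
              + substr(sha1_c, 16, 4) + substr(sha1_d, 0, 8))
    return aes_key, aes_iv

auth_key = bytes(range(256))                         # illustrative permanent key
msg_key = hashlib.sha1(b"some message").digest()[:16]  # 128 low-order bits of SHA1
key, iv = derive_aes_key_iv(auth_key, msg_key, 0)
assert len(key) == 32 and len(iv) == 32
```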

Since the auth_key is permanent, and the message key only depends on the server salt (living 24h), the session (probably permanent, can be forgotten by the server) and the beginning of the message, the message key may be the same for a potentially large number of messages. Yes, a lot of messages will probably share the same AES key and IV.

Edit: Following Telegram’s comment, the AES key and IV will be different for every message. Still, they depend on the content of the message, and that is a very bad design. Keys and initialization vectors should always be generated from a CSPRNG, independent from the encrypted content.

Edit 2: the new protocol diagram makes it clear that the key is generated by a weak KDF from the auth key and some data transmitted as plaintext. There should be some nice statistical analysis to do there.

Edit 3: Well, if you send the same message twice (in a day, since the server salt lives 24h), the key and IV will be the same, and the ciphertext will be the same too. This is a real flaw, that is usually fixed by changing IVs regularly (even broken protocols like WEP do it) and changing keys regularly (cf Forward Secrecy in TLS or OTR). The unencrypted message contains a (time-dependent) message ID and sequence number that are incremented, and the client won’t accept replayed messages, or too old message IDs.

Edit 4: Someone found a flaw in the end to end secret chat. The key generated from the Diffie-Hellman exchange was combined with a server-provided nonce: key = (pow(g_a, b) mod dh_prime) xor nonce. With that, the server can perform a MITM on the connection and generate the same key for both peers by manipulating the nonce, thus defeating the key verification. Telegram has updated their protocol description and will fix the flaw. (That nonce was introduced to fix RNG issues on mobile devices).
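Why the nonce xor defeats fingerprint comparison can be shown in a few lines. All values below are random stand-ins; a real attack would sit inside two separate Diffie-Hellman exchanges, which is exactly what an unauthenticated server position allows.

```python
# Sketch of the nonce-xor MITM described in Edit 4.
import os

def final_key(dh_shared: bytes, nonce: bytes) -> bytes:
    # key = (DH shared secret) xor (server-provided nonce)
    return bytes(a ^ b for a, b in zip(dh_shared, nonce))

k_alice = os.urandom(32)   # DH secret the MITM server shares with Alice
k_bob = os.urandom(32)     # DH secret the MITM server shares with Bob
target = os.urandom(32)    # key the server wants both peers to end up with

# The server knows both DH secrets, so it can pick each peer's nonce
# to steer both final keys to the same value:
nonce_alice = bytes(a ^ b for a, b in zip(k_alice, target))
nonce_bob = bytes(a ^ b for a, b in zip(k_bob, target))

assert final_key(k_alice, nonce_alice) == final_key(k_bob, nonce_bob) == target
```

Both peers then display the same key fingerprint, so comparing fingerprints no longer detects the man in the middle.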

Seriously, I have never seen anyone use the MAC to generate the encryption key. Even if I wanted to put a backdoor in a protocol, I would not make it so evident…

To sum it up: avoid at all costs. There are no new ideas, and they add their flawed homegrown mix of RSA, AES-IGE, plain SHA1 integrity verification, MAC-Then-Encrypt, and a custom KDF. Instead of Telegram, you should use well known and audited protocols, like OTR (usable in IRC, Jabber) or the Axolotl key ratcheting of TextSecure.

SafeChat, P2P encrypted messages?

For the first article in the new post series about “let’s pick apart the new kickstarted secure decentralized software of the week”, I chose SafeChat, whose campaign started just two days ago. Yes, I like to hunt young prey :p

A note, before we begin: this analysis is based on publicly available information at the time of writing. If the authors of the project give more information, I can update the article to match it. The goal is to assess, with what little we know about the project, whether it is a good idea to give money to this project. I will only concentrate on the technical parts, not on the team itself (even if, for some of those projects, I think they’re idiots running with scissors in hand).

What is SafeChat?

Open source encryption based instant messaging software

SafeChat is a brilliantly simple deeply secure instant messaging system for mobile phones and computers

SafeChat is an instant messaging software designed by Commercial Free. There is no real indication about who really works there, and where the company is based, except for David Crawford, who created the Kickstarter project and is based in Montreal, Canada.

Note that SafeChat is only a small part of the services they want to provide. Commercial Free will also have plans including an email encryption service (no info about that one) and cloud storage.

Available technical information

There is not much to see. They say they are almost done with the core code, but the only thing they present is some videos of what the interaction with the app could be.

Apparently, it is an instant messaging system with Android and iOS applications and some server components. Session keys are generated for the communication between users. They will manage the server component, and the service will be available with a yearly subscription.

It seems they don’t want to release much information about the cryptographic components they use. They talk about “peer to peer encryption” (lol) which is open source and standard. If anyone understands what algorithm or protocol they refer to, please enlighten me. They also say they will mix in some proprietary code (so much for open source).

I especially like the part about NIST. They mock NIST, telling that they have thrown “all standard encryption commonly used today out the window”. I am still wondering what “open source and standard peer to peer encryption” means.

Network protocol

The iOS and Android applications will apparently provide direct communication between users. I guess that from their emphasis on P2P, but also from the price they claim: $10 per user per year would be a bit small to pay for server costs if they had to route all the messages.

P2P communication between phones is technically feasible. They would probably need to implement some TCP hole punching in their solution, but it is doable.

Looking at the video, it seems there is a key agreement phase before communication. I do not really like the interaction they chose to represent key agreement (with the colors and the smileys). There are too many different states, while people only need to know “are we safe now?”

I am not sure if there is a presence protocol. The video does not really show it. If there is no presence system, are messages stored until the person is online? Stored on the server or on the client? Does the server notify the client when the person becomes available?

Cryptography

By bringing together existing theories of cryptography and some proprietary code to bind them together, we are making a deeply encrypted private chatting system that continues to evolve as the field of cryptography does.

Yup, I really feel safe now.

Joke aside, here is what we can guess:

  • session keys for the communication between users. I don’t know if it is a Diffie-Hellman based protocol
  • no rekeying, i.e. no perfect forward secrecy
  • no info on message authentication or integrity verification
  • I am not sure if the app generates some asymmetric keys for authentication, if there is trust on first use, or whatever else
  • the server might not be very safe, because they really, really want to rely on German laws to protect it. If the crypto was fully managed client side, they would not care about servers being taken down; they could just pop up another one somewhere.

There could be a PKI managed by Commercial Free. That would be consistent with the subscription model (short-lived certificates are an easy way of limiting the usage of a service).

Threat model

Now, we can draw the rough threat model they are using:

What we want to do is make it impractical for an organization to snoop your communications as it would become very hard to find them and then harder still to decrypt them.

Pro tip: a system with a central server does not make it hard to find communications.

Attacker types:

  • phone thief: I don’t think they use client side encryption for credentials and logs. Phone thieves and forensics engineers won’t have a real problem there
  • network operator: they can disrupt the communication, but will probably not be able to decrypt or do MITM (I really think the server is managing the authentication part, along with setting up the communication)
  • law enforcement: they want to rely on German laws to protect their system. At the same time, they do not say they will move out to Germany to operate the system. If they stay in Canada, that changes the legal part. If they use a certificate authority, protecting the server will be useless, because they can just ask the company for the key.
  • server attacker: the server will probably be Windows based (see the core developer’s skills). Since that design is really server centric, taking down the server might take down the whole service. And attacking it will reveal lots of interesting metadata, and probably offer MITM capabilities
  • nation state: please, stop joking…

So…

Really, nothing interesting here. I do not see any reason to give money to this project: there is nothing new, it does not solve big problems like anonymous messaging, or staying reliable if one server is down. Worse, it is probably possible to perform a MITM attack if you manage the server. Nowadays, if you create a cryptographic protocol with client side encryption, you must make sure that your security is based on the client, not the server.

Alternatives to this service:

  • Apple iMessage: closed source, only for iOS, encrypted message, MITM is permitted for Apple by the protocol, but “we have not architected the server for this”. Already available.
  • Text Secure by OpenWhisperSystems: open source, available for iOS and Android, uses SMS as a transport protocol, uses OTR (Off the Record protocol) to protect the communication, no server component. Choose Text Secure! It is really easy to use, and OTR is well integrated in the interface.

My ideal job posting

This post is a translation of something I wrote in French for Human Coders. They asked me what would be the ideal job post from a developer’s standpoint:

How would you write a job announcement that attracts good developers? Most recruiters complain that finding the right candidates is an ordeal.

If you ask me, it is due to very old recruitment practices: writing job posts for paper ads (where you pay by the letter), spamming them to as many people as possible, mandating fishy head hunters… This has worked in the past, but things changed. A lot more companies are competing to recruit developers, and many more developers are available, so separating the wheat from the chaff is harder.

We can do better! Let’s start from scratch. For most candidates, a job posting is the first contact they’ll have with your company. It must be attractive, exciting! When I read “the company X is the leader on the market of Y”, I don’t think that they’re awesome, I just wonder what they really do.

A job posting is a marketing document. Its purpose is not to filter candidates, but to attract them! And how do you write a marketing document?

YOU. TALK. ABOUT. THE. CLIENT. You talk about his problems, his aspirations, and only then do you tell him how you will make things better for him. For a job posting, you must do the same. You talk to THE candidate. Not to multiple candidates, not to the head hunter or the HR department, but to the candidate. Talk about the candidate, and only the candidate. When a developer looks for a job, she doesn’t want to “work on your backend application” or “maintain the web server”. That is what she will do for you. This is what she wants:

  • getting paid to write code
  • work on interesting technologies
  • a nice workplace atmosphere
  • learn
  • etc.

A good job posting should answer the candidate’s aspirations and talk about the career path. Does this job lead to project management? Do you propose in-house training? Is there a career path for expertise in your company?

Do you share values with your candidate? I do not mean the values written by your sales team and displayed on the “our values” page of your website. I am talking about the values of the team where the candidate will end up. Do they work in pair programming? Do they apply test driven development? There is no best way to work, the job can be done in many ways, so you must make sure that the candidate will fit right in.

What problem are you trying to solve with your company? Do you create websites that can be managed by anyone? Do you provide secure hosting? Whatever your goal is, talk about it instead of talking about the product. I do not want to read, “our company develops a mobile server monitoring tool”, because that is uninteresting. If I read “we spent a lot of time on call for diverse companies, so we understood that mobility is crucial for some system administrators, so we created a tool tailored for moving system administrators”, I see a real problem, a motivation to work, a culture that I could identify with.

By talking that way to the candidate, you will filter candidates on motivation and culture instead of filtering on skills. That can be done later, once you see the candidate. You did not really think that a resume was a good way to select skilled people, did you?

Here is a good example of a fictional job posting, from a company aggregating news for developers, looking for a Rails developer:

“You are a passionate Ruby on Rails developer, you are proud of your unit tests, and you enjoy the availability of Heroku’s services? That’s the same for us!

At Company X, we love developers: all of our services are meant for their fulfillment. We propose a news website about various technologies, highly interesting training courses and a job board for startups.

Our employees benefit fully from these services, and give talks at conferences all around France. By directly talking with other developers, they quickly get an extensive knowledge of current technologies.

Our news website is growing fast, so we need help to scale it. The web app uses Heroku and MongoDB, with a CoffeeScript front end. Are you well versed in Rails optimization? If yes, we would love to talk with you!”

Note that I did not talk about years of experience, or a city. I want to hire a Rails developer, not necessarily a French developer. I want someone with experience in optimization, not someone over 27.

With such a job posting, you will receive a lot more interesting employment applications. Now, are you afraid that it will mean a lot more work? The solution will come in a future post: how to target candidates efficiently, and get away from job boards!
