Warning your users about a vulnerability

Somebody just told you about a vulnerability in your code. Worse, they published a paper about it. Even worse, people have been very vocal about it.
What can you do now? The usual (and natural) reaction is to downplay the vulnerability and try to keep it confidential. After all, publicizing the vulnerability will put your users at risk, and it is bad publicity for your project, right?

WRONG!

The best course of action is to communicate a lot. Here's why:

Communicate with the security researcher

Yes, I know, we security people tend to be harsh, quickly flagging a vulnerable project as "broken", "dangerous for your customers" or even "EPIC FAIL". That is, unfortunately, part of the folklore. But behind that, security researchers are often happy to help you fix the problem, so take advantage of that!

If you try to silence the researcher, you will not get any more bug reports, but your software will still be vulnerable. Worse, someone else will undoubtedly find the vulnerability, and may sell it to governments and/or criminals.

Vulnerabilities are inevitable, like bugs. Even if you're very careful, people will still find ways to break your software. Be humble and accept them; they are part of the cost of writing software. The only thing that should be on your mind when you receive a security report is protecting your users. The rest is just ego and fear of failure.

So, be open, talk to security researchers, and make it easy for them to contact you (tips: create a dedicated security page on your website, and provide a specific email address for security reports).
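One lightweight way to implement both tips, sketched below, is the security.txt convention (RFC 9116): a small plain-text file served at /.well-known/security.txt on your domain, which researchers and their tools check automatically. Every address and URL in this sketch is a placeholder, not a real contact:

  # Served at https://example.com/.well-known/security.txt
  # All addresses and URLs below are placeholders.
  Contact: mailto:security@example.com
  Expires: 2026-12-31T23:59:59Z
  Encryption: https://example.com/pgp-key.txt
  Policy: https://example.com/security
  Preferred-Languages: en

The Encryption field points to your PGP key, so researchers can send you vulnerability details that only you can read.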

Damage control for a public vulnerability

What do you do when you first learn about the weakness from the news, Twitter, or anywhere else?

Communicate. Now.

Contact the researchers, contact the journalists writing about it and the users complaining about the issue, write on your blog, on Twitter, on Facebook, on IRC, and tell them this: "we know about the issue, we're looking into it, we're doing everything we can to fix it, and we'll update you as soon as it's fixed".

This will buy you a few hours or days to fix the issue. Don't say anything more than that: you probably don't know enough about the problem yet to comment accurately. Here is what you should not say:

  • "we've seen no report of this exploited in the wild", yet
  • "as far as we know, the bug is not exploitable", but soon it will be
  • "the issue happens on a test server, not in production" = "we assume that the researcher didn't also test on prod servers"
  • "no worries, there's a warning on our website telling that it's beta software". Beta means people are using it.

Anything more you say in those few days could harm your project. Take it as an opportunity to cool your head: work on the vulnerability, test for regressions, find other mitigations, and plan the new release.

Once the fix is ready, publish the security report:

Warning your users

So, you have now fixed the vulnerability, and you have to tell your users about it. As time passes, more and more people will learn about the issue, so they might as well hear it from you.

You should have a specific webpage for security issues. You may fear that criminals will use it to attack your software or website. Don't worry: they don't need it; they have other ways of learning about software weaknesses. This webpage is for your users, and it has multiple purposes:

  • showing that you handle security issues seriously
  • telling users which versions they should not use
  • explaining how to mitigate some vulnerabilities if they cannot update

For each vulnerability, here is what you should display (a sample advisory follows this list):

  • the title (probably chosen by the researcher)
  • the security researcher's name
  • the CVE identifier
  • the chain of events:
    • day of report
    • dates of back-and-forth emails with the researcher
    • day of fix
    • day of release (for software)
    • day of deployment (for web apps)
  • the affected versions (for software, not web apps)
  • the affected components (maybe the issue only concerns specific parts of a library)
  • how it can be exploited. You don't need to get into the details of the exploit.
  • the target of the exploit. Be exhaustive about the consequences: does the attacker get access to the user's contact list? Can they run code on the user's computer? etc.
  • the mitigations. Telling users to update to the latest version is not enough. Some will not update right away, because they have their own constraints. Maybe they have a huge user base and can't update quickly. Maybe they rely on an old feature removed in the latest version. Maybe their users refuse to update because it could introduce regressions. Here is what you should tell them:
    • which version is safe (new versions with the fix, but also old versions if the issue was introduced recently)
    • if it's open source, which commit introduced the fix, so that people managing their own internal versions of the software can backport it quickly (see the git sketch after this list)
    • how to fix the issue without updating (example for a recent Skype bug). You can provide a quick fix that will buy time for sysadmins and downstream developers to test, fix, and deploy the new version.
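To illustrate the point about fix commits: someone maintaining an internal fork can pull in just the security fix with a standard git cherry-pick. The remote name "upstream" and the commit hash below are hypothetical placeholders:

  # "upstream" and abc1234 are placeholders; use the remote
  # and the commit hash published in your advisory.
  git fetch upstream
  git cherry-pick abc1234
  git tag v1.2.3-internal-hotfix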

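Put together, an entry on your security page might look like the sketch below. Every name, date, version, and identifier in it is invented for illustration:

  Title: Remote code execution via malformed session token
  Reported by: Jane Doe (Example Security Labs)
  CVE: CVE-2014-XXXX
  Timeline:
    2014-03-02  initial report received
    2014-03-04  issue confirmed, fix in development
    2014-03-10  fix merged (commit abc1234)
    2014-03-12  version 1.2.3 released
  Affected versions: 1.0.0 to 1.2.2 (0.9.x is not affected)
  Affected component: session handling in libexample-session
  Impact: a remote attacker can run arbitrary code with the privileges of the server process
  Safe versions: 1.2.3 and later, or any 0.9.x release
  Workaround: if you cannot update, disable token-based sessions in the configuration until you can deploy the fix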
Now that you have a good security report, contact the security researchers, journalists, and users again, provide them with the report, and emphasize what users risked and how they can fix it. You can now comment publicly about the issue, write a blog post, and email your users to tell them how hard you worked to protect them. And don't forget the time-honored formula: "we take security very seriously and have taken measures to prevent this issue from ever happening again".

Do you need help fixing your vulnerabilities? Feel free to contact me!