Update 24/08/2010: Microsoft published an advisory, there's an article from the MSRC about the DLL preloading vulnerability, and a tool that fixes the problem. And if you want to know more, there's an MSDN article.
Update 25/08/2010: if you came here from golem.de, heise.de or h-online.com, good for you! The articles on these websites are blatantly wrong, and their proposed solution doesn't work. You will find here the real solutions, and links to the relevant blog posts and advisories.
Look! A new shiny vulnerability, affecting a lot of Windows applications! OMG OMG OMG we're DOOMED!
</crazy mode off>
OK, let's get serious. This vulnerability is actually a very old one. It exists because the DLL search order includes the current directory (if someone can tell me why, I would be delighted to hear it). Here is the search order (source: MSDN):
If SafeDllSearchMode is enabled, the search order is as follows:
- The directory from which the application loaded.
- The system directory. Use the GetSystemDirectory function to get the path of this directory.
- The 16-bit system directory. There is no function that obtains the path of this directory, but it is searched.
- The Windows directory. Use the GetWindowsDirectory function to get the path of this directory.
- The current directory.
- The directories that are listed in the PATH environment variable. Note that this does not include the per-application path specified by the App Paths registry key. The App Paths key is not used when computing the DLL search path.
If SafeDllSearchMode is disabled, the search order is as follows:
- The directory from which the application loaded.
- The current directory.
- The system directory. Use the GetSystemDirectory function to get the path of this directory.
- The 16-bit system directory. There is no function that obtains the path of this directory, but it is searched.
- The Windows directory. Use the GetWindowsDirectory function to get the path of this directory.
For the record: disabling SafeDllSearchMode means that you're stupid, or that you're running a version of Windows older than XP SP1 (note that these are not mutually exclusive).
So, let's assume that your application tries to load a DLL that isn't present in your application directory or in the system directories. The application will then try to load the DLL from the current directory. And that's it. If there's a DLL with the same name in that folder, it will be loaded into the application. That's why you can get owned just by opening a file: opening it sets the application's current directory to the file's folder.
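To make that concrete, here is a minimal sketch of the vulnerable pattern (the DLL name is purely hypothetical): the dependency exists neither next to the application nor in the system directories, so the loader falls back to the current directory.

```c
/* Minimal sketch of the vulnerable pattern. "missing.dll" is a
 * hypothetical dependency, present neither in the application
 * directory nor in the system directories. */
#include <windows.h>

int main(void)
{
    /* The loader walks the search order listed above. Since the DLL is
     * nowhere else, it gets loaded from the current directory, which can
     * be the remote share the opened document came from. */
    HMODULE lib = LoadLibrary(TEXT("missing.dll"));
    if (lib != NULL)
    {
        /* DllMain of whatever DLL was found has already run at this point. */
        FreeLibrary(lib);
    }
    return 0;
}
```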
OMG OMG OMG we're DOOMED, and it's incredibly easy to exploit me!
So, now, let's take a look at the fix. This is an OS flaw. You can't fix all your applications yourself. H.D. Moore has some sysadmin fixes that you can apply to keep the exploit away from your computer. But your applications will still be exploitable.
If you're a developer, though, you can fix your application. There's a function that can remove the current directory from the DLL search path: SetDllDirectory. From MSDN:
After calling SetDllDirectory, the DLL search path is:
- The directory from which the application loaded.
- The directory specified by the lpPathName parameter.
- The system directory. Use the GetSystemDirectory function to get the path of this directory. The name of this directory is System32.
- The 16-bit system directory. There is no function that obtains the path of this directory, but it is searched. The name of this directory is System.
- The Windows directory. Use the GetWindowsDirectory function to get the path of this directory.
- The directories that are listed in the PATH environment variable.
So, if you pass a safe directory (let's say, C:\Windows\System32, or your application directory) as an argument to SetDllDirectory, you effectively remove the current directory from the search path! It works, I tested it for you :)
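Here is a minimal sketch of that fix; the hardcoded path is just the example from above, use whatever safe directory fits your application:

```c
/* Minimal sketch: remove the current directory from the DLL search path.
 * Call this as early as possible, before any LoadLibrary call. */
#include <windows.h>

int main(void)
{
    /* Passing a safe directory (here the system directory, as in the
     * example above) replaces the current directory in the search order.
     * Passing an empty string ("") simply removes the current directory.
     * SetDllDirectory needs XP SP1 or later. */
    if (!SetDllDirectory(TEXT("C:\\Windows\\System32")))
    {
        /* Handle the failure as you see fit; here we just bail out. */
        return 1;
    }

    /* ... the rest of the application: DLL loads no longer hit the CWD ... */
    return 0;
}
```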
But that's not the end of it: you also have to wipe out the PATH to be safe. From the metasploit blog:
If the application is trying to load a DLL that is normally found within the PATH, but not the Windows system directories, and the PATH contains environment variables that have not been set, then the literal value of the environment variable will be treated as sub-directory of the working directory (the share). For example, if %unknownvariable%\bin is in the system PATH, the share will be searched for a directory called “%unknownvariable%\bin” and the target DLL will be loaded from within this sub-directory.
And if you test your application with ProcMon, you will surely see that a lot of (potentially unsafe) directories appear in the PATH and are used to look for the DLL. So, remove all the useless directories from the PATH if you can!
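If you control your process's environment, a complementary step is to restrict the PATH seen by your own process. A minimal sketch, with a purely illustrative whitelist:

```c
/* Hedged sketch: restrict the PATH of the current process to known-safe
 * directories, so a DLL lookup that falls through to PATH cannot wander
 * into an attacker-controlled folder. The whitelist below is only an
 * example; adjust it to what your application actually needs. */
#include <windows.h>

static void restrict_path(void)
{
    /* This only affects the current process, not the system-wide PATH. */
    SetEnvironmentVariable(TEXT("PATH"),
                           TEXT("C:\\Windows\\System32;C:\\Windows"));
}
```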
(And now, for the disclaimer: this blog post is not endorsed by Microsoft, and if you want to be really safe and know the best solution to employ, wait for Microsoft's patches and workarounds. But if you trust me enough (you fool... *evil laugh*), you can try that fix on your application, and maybe protect your users. )
Actually, that fix is encouraged by Microsoft, and David Leblanc wrote about it in February 2008.
VLC 1.1.1 was just released! A lot of bugs were fixed, and now, GPU decoding works on ATI cards! You need Catalyst 10.7 to use DxVA on your ATI GPU.
Other important news: libVLC has a lot of useful new functions, like libvlc_set_user_agent(), or libvlc_video_set_callbacks() and libvlc_video_set_format() to replace the --vmem-* hack.
Enjoy this new release!
I first experienced the Internet in the glorious days of 56k. It was slow, hard to browse, and full of badly designed websites. But it was fun to discover. At that time, people were still trying to figure out the answer to "what the hell can I do here?", experimenting a lot, and sharing their results.
Then the internet bubble grew and... I won't waste time telling that story, that's not my goal here. Let's fast-forward a little, to that new trend, Web 2.0.
[youtube=http://www.youtube.com/watch?v=I6IQ_FOCE6I]
It was the beginning of social media: users producing content, companies investing a lot to reach those users. And somewhere, something went wrong. It was quiet at first. People were wondering how to get a better rank in search engines, how to blog, how to create a buzz. Some of them were really trying to create value and share it with the right users. But others didn't think like that. They found a way to make money out of thin air.
How to be successful on the Interwebz
Take a lot of incompetent people, more or less linked by a common interest (let's say "being famous", as an example) and able to communicate with each other thanks to social networks. One of them will see a post somewhere describing someone famous, and will share it with the rest. He will share other articles for a few months, and all the incompetent fools will be pleased to learn how famous people became famous.
Then, he will begin to write articles that rephrase the ones he sent a few months ago. All the incompetent fools will thank him for sharing his insight in being famous. And if he doesn't find anything to write, he will rephrase one of his own articles, with a catchy title like "top ten ways to create a viral video" and a lot of bullet points. And some of the incompetent fools will share these articles (mostly because they're not smart enough to go and find the original content by themselves).
Little by little, he will be recognized as an expert in being famous, will start a consulting job, and will be overpaid to teach the incompetent fools how to be famous. And then, those incompetent fools will start to share things themselves, and blog, and become consultants and make money. Doesn't that remind you of something?
The Ponzi scheme web 2.0 expert
That's the Internet we see now, and I find it disgusting. No real content, people quoting each other, and experts telling us "Get rich following my method, it worked for me, why wouldn't it work for you?". Social networks gave the crooks an easy path to legitimacy. Why would you bother working hard, when you can quickly get exposure by preaching to the incompetent crowd?
If you follow most of the news, you will think that those people talking about SEO, copywriting, web marketing or community management are the ones building the web. They're not. The Internet is there thanks to a lot of quiet system administrators, developers and electronics engineers. They're not necessarily following the new trends: they're at their origin.
If you want real content and real value, look for these people. If you want to build something useful, learn from them but follow your own path. Inventing is not about repeating what smart people say, but about contradicting them.
A recent report from Secunia states that popular Windows applications don't use the DEP and ASLR protections. That is true for VLC up to 1.0; the latest version at the moment, 1.1, supports permanent DEP mode and ASLR on all of its DLLs.
One thing the report could have shown is the difference between applications built with MSVC and GCC. Adding DEP and ASLR in Visual Studio means passing the /NXCOMPAT and /DYNAMICBASE options to the linker. With MinGW, there is a different trick. This article on my old blog is slightly outdated: ld in binutils 2.20 supports the --nxcompat and --dynamicbase options. So, now, developers using GCC have no more excuse!
Let's sum up the state of the security of VLC:
- 1.0.5 is NOT SAFE on Windows. 1.0.6 brings a lot of security fixes, but that version was not released on Windows. And the OS security features are not used.
- 1.1.0 supports permanent DEP and ASLR (via the DllCharacteristics flag, only on Vista/7), and termination on heap corruption
- 1.1.1 supports the same as 1.1.0, and adds DEP on XP SP3 with SetProcessDEPPolicy
- SafeSEH and stack cookies are not yet used
Developers using libVLC should check their own software: DEP won't be activated if their executable doesn't support it.
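For reference, here is a sketch of the runtime opt-in mentioned above (SetProcessDEPPolicy is the real function, the surrounding code is only an illustration); permanent DEP and ASLR themselves come from link-time flags (/NXCOMPAT and /DYNAMICBASE with MSVC, --nxcompat and --dynamicbase with binutils >= 2.20):

```c
/* Sketch: opt in to DEP at runtime on XP SP3, as VLC 1.1.1 does with
 * SetProcessDEPPolicy. The function is resolved dynamically because it
 * only exists on XP SP3, Vista SP1 and later. */
#include <windows.h>

#ifndef PROCESS_DEP_ENABLE
#define PROCESS_DEP_ENABLE 0x00000001
#endif

typedef BOOL (WINAPI *SetProcessDEPPolicyFn)(DWORD);

static void enable_dep(void)
{
    HMODULE kernel32 = GetModuleHandle(TEXT("kernel32.dll"));
    if (kernel32 == NULL)
        return;

    SetProcessDEPPolicyFn set_dep =
        (SetProcessDEPPolicyFn)GetProcAddress(kernel32, "SetProcessDEPPolicy");

    if (set_dep != NULL)
        set_dep(PROCESS_DEP_ENABLE); /* fails harmlessly if DEP is already permanent */
}
```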
For the past few days, I have been messing with some of the features of HTML 5:
- Local storage
- Offline web applications
These features enable the development of real applications running in the browser. This has a lot of advantages: easy updates of the application, a reduced workload on the server, etc.
But it changes the way you write your code. You have to adapt the usual protection mechanisms to these changes.
Here are some thoughts about the common web application vulnerabilities.
Warning: I consider here a web application with practically no server-side code: everything executes in the browser. I'll take the point of view of someone attacking the application running in the browser, and I'll be optimistic enough to trust the browser...
SQL injections
SQL injections on servers let you access the user's data, and access the server itself (file uploads, starting external programs, etc.). With local storage and WebSQL, you won't be able to access the host, only the data (unless there's a browser vulnerability for that). And you can use some sort of prepared statement syntax to prevent injection. There may be a risk with key/value stores if you let user input control the key.
Cross site scripting
This is in my opinion the biggest risk. If all the logic of your application is on the client side, unwanted code executing in the browser has access to everything. This one can be mitigated by filtering what will be displayed on your webpage.
Cross site request forgery
This one is not critical, unless you use URL parameters locally (don't laugh, it has often been done and exploited in Flash applications). Be aware that an attacker could get data into local storage that way.
Persistency
It really worries me that so much data can stay a long time in the user's browser. With a database hosted on your server, if unwanted data (persistent XSS, malware...) is stored, you can erase it, patch your website's code, and your users will be safe.
With HTML 5, you'll have to clean every user's data. You can't be sure that you have protected all your users (someone could wait 6 months before coming back to your website). And because you can't be sure, your code has to check for every known kind of bad data. That takes a lot of code, time and tests.
Trust issues
It has been said many times already: don't trust data coming from the client. And in our case, don't trust it even if it's data that your website put in local storage. This applies to data that will come back to your server, but also to data that will be displayed with a bit of JavaScript/DOM code. Yes, XSS attacks could come from local storage. So, you need to escape everything that will go into the webpage.
Are we screwed?
These were only quick thoughts about the vulnerabilities you could encounter with client side web applications. It is not really hard to protect the application, but you have to be very careful about what data you will trust. The good thing is, these vulnerabilities are not new: you can see them in lots of Flash applications. So, the mitigation mechanisms are well known, and easy to apply.