Wednesday, March 19th, 2008

Preventing XSS Exploitation with CSRF Tokens?!?!

A colleague and I were tossing around the idea of preventing XSS Exploitation with CSRF tokens. Now, before people start going “high and right” on me…hear me out… I DID NOT say “prevent XSS” with CSRF tokens, I said prevent “XSS Exploitation” with CSRF tokens. This discussion arose after someone presented me with the following scenario (this same scenario has been presented to me many, many times… typically at a bar after a few drinks):


You come into an organization and take over the application security department because the old security person left/was fired/was arrested/whatever. You take a look at the 10 million line flagship application and realize that it's riddled with XSS holes, yet you don't have the resources/time/cojones to fix all the exposures. What do you do?



This scenario is usually followed up by a pitch to sell me on some Web Application Firewall product… I'll put my thoughts on WAFs aside for a second… and I'll try to get to the underlying issue of the scenario presented above: you need to do something to stop your customers from getting XSS'd, you don't have much time, you don't have many resources, and there is a ton of code to go through.


Now, what if you required CSRF tokens/canaries for every request? This doesn't "fix" the XSS exposures, but it makes them a LOT more difficult to exploit (unless you want to exploit yourself). The CSRF tokens effectively prevent an attacker from sending the XSS to anyone else. Considering many token/canary implementations live at the framework level, in most cases enabling this would require only a configuration change for the application. Once every page is protected by the canary, you can systematically examine the "high priority" pages, or pages where canaries don't make sense, and remove the canary requirement after that particular page/functionality has gone through a review. In order to prevent the attacker from supplying their own canary value, the CSRF token has to be tied to the current user's session (most good implementations do this anyway).
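To make the session-binding point concrete, here's a minimal sketch of one way a framework might derive a canary from the session. This is illustrative only: the function names and the key handling are my own assumptions, not the API of any particular framework, and a real deployment would load the secret from configuration rather than hard-coding it.

```python
import hmac
import hashlib

# Hypothetical server-side secret; in practice load this from secure
# configuration, never hard-code it in source.
SECRET_KEY = b"change-me-server-side-secret"

def issue_csrf_token(session_id: str) -> str:
    """Derive a CSRF token bound to a specific session.

    Because the token is an HMAC over the session ID, an attacker cannot
    compute a valid token for a victim's session, so an XSS link they
    craft for someone else fails the token check on arrival.
    """
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, submitted_token: str) -> bool:
    """Check a submitted token in constant time to avoid timing leaks."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted_token)
```

The key property is simply that the token a victim's browser must present is something only the victim's own session can produce, which is exactly why a hostile link carrying someone else's (or no) token gets rejected.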


Now, once again, this DOES NOT FIX XSS, it just makes exploitation harder. This isn't a new concept; in fact, this same type of approach is used by modern-day operating systems. Take buffer overflows for example: protections like DEP, ASLR, StackGuard, and the /GS flag do not prevent developers from writing buffer overflows and they do not "fix" buffer overflows… they do make exploiting buffer overflows a lot more difficult (unless you're a Litchfield brother, HD Moore, or Alexander Sotirov).


Now, of course there are some cons to this strategy… First, the XSS exposures are not fixed (the WAFs don't fix them either). This doesn't protect against persistent XSS. There will be some performance hit on your web server when you generate and validate canaries for each request. This will NOT help you defend against injection attacks like SQL Injection or Command Injection; those still require an audit… on the flip side… if you're relying solely on a WAF to protect you against SQLi and Command Injection, I'd be worried…

Posted by xssniper | Filed in Security, Web Application Security

  • http://kuza55.blogspot.com/ kuza55

    CSRF Protected XSS vulnerabilities aren’t quite as exploitable as they were before the last Flash update, but they’re still pretty exploitable: http://kuza55.blogspot.com/2008/02/exploiting-csrf-protected-xss.html
    You can’t use a standard payload with most of the attacks there (though when you can write cookies with arbitrary paths, then you can), but many times even logged-out xss is good enough to do something worthwhile, even if it is at the very, very least phishing.

    Anyway, would you be able to clarify what you would consider ‘“high priority” pages’? Would this be pages where csrf tokens don’t help e.g. where you know you could have persistent xss? Or something else?

  • xssniper

    Sweet dude… great techniques… you bring up a great point on how, given the right mentality, skill set, and circumstances, CSRF tokens that protect XSS can be defeated… it’s a great parallel to how stack protections were defeated (in certain circumstances) by some legit security researchers…

    As for the “High Priority” pages, this should be different for every organization and in line with their specific business objectives. To some, it may be going over their flagship domain, to others it may be reviewing pages that have a need for fast page loads, while others may want to start with the pages that have the most dynamic content…

    I'm guessing if I were in this situation, I would probably start with the pages available to unauthenticated users first. These pages usually have the greatest exposure (take a look at XSSed.com and compare the unauthenticated vs. authenticated XSS numbers) and they probably offer less dynamic content than authenticated-only pages. I'm also guessing that you would want to allow people to link to your publicly available content, so CSRF tokens may become a burden to some users (as they constantly have to go through a portal page).

    Oh yeah… I would definitely leave the CSRF protection for logout functionality in place :)

  • http://www.anachronic.com Arian

    A few years ago at BH EU and Vegas, and a few of the OWASP conferences, I explored this in depth. We even created a WAF that could operate transparently on the .NET framework (called razorwire) to use tokens for this mitigation. They did a lot more than just make “request automation” difficult though.

    We eventually scrapped it as we found too many caveats and too little time to develop our WAF.

    Protecting “high profile” pages is worthless though, as we’ve seen from real-world attacks. I’ve seen hackers target obscure dynamic pages and rewrite them to redirect and then CSRF (or whatever) the “high-value” pages of a site.

    Nice writeup.

    -ae