
How to Protect Against Slow HTTP Attacks

Slow HTTP attacks are denial-of-service (DoS) attacks in which the attacker sends HTTP requests in pieces slowly, one at a time to a Web server. If an HTTP request is not complete, or if the transfer rate is very low, the server keeps its resources busy waiting for the rest of the data. When the server’s concurrent connection pool reaches its maximum, this creates a DoS. Slow HTTP attacks are easy to execute because they require only minimal resources from the attacker.

In this article, I describe several simple steps to protect against slow HTTP attacks and to make the attacks more difficult to execute.

Previous articles in the series cover testing Web servers for slow HTTP attacks, the open-source slowhttptest tool, and identifying slow HTTP attack vulnerabilities on web applications.

Protection Strategies

To protect your Web server against slow HTTP attacks, I recommend the following:

  • Reject / drop connections with HTTP methods (verbs) not supported by the URL.
  • Limit the header and message body to a minimal reasonable length. Set tighter URL-specific limits as appropriate for every resource that accepts a message body.
  • Set an absolute connection timeout, if possible. Of course, if the timeout is too short, you risk dropping legitimate slow connections; and if it’s too long, you don’t get any protection from attacks. I recommend a timeout value based on your connection length statistics, e.g. a timeout slightly greater than the median lifetime of connections should satisfy most legitimate clients (see the log-based sketch after this list).
  • The backlog of pending connections allows the server to hold connections it’s not ready to accept, and this allows it to withstand a larger slow HTTP attack, as well as gives legitimate users a chance to be served under high load. However, a large backlog also prolongs the attack, since it backlogs all connection requests regardless of whether they’re legitimate. If the server supports a backlog, I recommend making it reasonably large so that your HTTP server can handle a small attack.
  • Define the minimum incoming data rate, and drop connections that are slower than that rate. Care must be taken not to set the minimum too high, or you risk dropping legitimate slow connections.
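
For the timeout recommendation above, connection length statistics can be pulled from the access log. As a rough sketch, if Apache’s %D field (time taken to serve the request, in microseconds) is appended to the LogFormat as the last field, a median can be estimated with standard shell tools; the log path and field position are assumptions to adapt:

$ awk '{ print $NF / 1000000 }' /var/log/apache2/access_log | sort -n | \
    awk '{ v[NR] = $1 } END { if (NR) print "median request time:", v[int((NR + 1) / 2)], "seconds" }'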

Server-Specific Recommendations

Applying the above steps to the HTTP servers tested in the previous article indicates the following server-specific settings:

Apache

  • Using the <Limit> and <LimitExcept> directives to drop requests with methods not supported by the URL alone won’t help, because Apache waits for the entire request to complete before applying these directives. Therefore, use them in conjunction with the LimitRequestFields, LimitRequestFieldSize, LimitRequestBody, LimitRequestLine, and LimitXMLRequestBody directives as appropriate. For example, it is unlikely that your web app requires an 8190-byte header, an unlimited body size, or 100 headers per request, as most default configurations allow.
  • Set reasonable TimeOut and KeepAliveTimeout directive values. The default value of 300 seconds for TimeOut is overkill for most situations.
  • ListenBacklog’s default value of 511 could be increased, which is helpful when the server can’t accept connections fast enough.
  • Increase the MaxRequestWorkers directive to allow the server to handle the maximum number of simultaneous connections.
  • Adjust the AcceptFilter directive, which is supported on FreeBSD and Linux, and enables operating system specific optimizations for a listening socket by protocol type. For example, the httpready Accept Filter buffers entire HTTP requests at the kernel level.

A number of Apache modules are available to minimize the threat of slow HTTP attacks. For example, mod_reqtimeout’s RequestReadTimeout directive helps to control slow connections by setting a timeout and a minimum data rate for receiving requests.
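
Putting the directives above together, a hedged httpd.conf fragment might look like the following; the values are illustrative starting points rather than recommendations from this article, and RequestReadTimeout requires mod_reqtimeout to be loaded:

# Cap the number and size of request header fields and the request line
LimitRequestFields    30
LimitRequestFieldSize 1024
LimitRequestLine      512
# Cap the message body; tighten further per-resource where possible
LimitRequestBody      102400
# Shorten the generous defaults
TimeOut               60
KeepAliveTimeout      5
# mod_reqtimeout: allow 20-40 seconds to finish headers and 30 seconds for the body,
# and require at least 500 bytes/second once data starts arriving
RequestReadTimeout header=20-40,MinRate=500 body=30,MinRate=500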

I also recommend switching Apache to the experimental Event MPM where available. It uses a dedicated thread to handle the listening sockets and all sockets that are in a keep-alive state, which means incomplete connections use fewer resources while being polled.
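
On Debian and Ubuntu systems of that era, for example, the event MPM was shipped as a separate package, so the switch could look roughly like this (package and service names may differ on your distribution):

$ sudo apt-get install apache2-mpm-event
$ sudo service apache2 restart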

Nginx
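
As a rough sketch of how the generic strategies above map onto nginx’s standard http-module directives (the directive names are real nginx directives, but the values are illustrative and not taken from this article):

http {
    # close connections whose headers or message body arrive too slowly
    client_header_timeout 10s;
    client_body_timeout   10s;
    # give up on clients that read responses too slowly
    send_timeout          10s;
    keepalive_timeout     30s;
    # cap the message body size
    client_max_body_size  1m;
}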

lighttpd

  • Restrict request verbs using the $HTTP["request-method"] field in the configuration file for the core module (available since version 1.4.19).
  • Use server.max-request-size to limit the size of the entire request, including headers.
  • Set server.max-read-idle to a reasonable minimum so that the server closes slow connections. No absolute connection timeout option was found.
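
A sketch of the above in lighttpd.conf form; the directive names match the items above, while the allowed-method list and the specific values are illustrative assumptions:

# deny requests that use anything other than a small set of expected methods
$HTTP["request-method"] !~ "^(GET|HEAD|POST)$" {
    url.access-deny = ( "" )
}
# cap the size of the entire request (value in kilobytes)
server.max-request-size = 64
# close connections that stay idle while the request is being read
server.max-read-idle = 30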

IIS 6

IIS 7

  • Limit request attributes through the <requestLimits> element, specifically the maxAllowedContentLength, maxQueryString, and maxUrl attributes.
  • Set <headerLimits> to configure the type and size of headers your web server will accept.
  • Tune the connectionTimeout, headerWaitTimeout, and minBytesPerSecond attributes of the <limits> and <webLimits> elements to minimize the impact of slow HTTP attacks.
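
As a hedged illustration of where those attributes live, the configuration fragments might look like the following; the numbers are placeholders, not recommended values:

<!-- request filtering limits (web.config or applicationHost.config) -->
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="1048576" maxUrl="2048" maxQueryString="1024">
        <headerLimits>
          <add header="Content-Type" sizeLimit="128" />
        </headerLimits>
      </requestLimits>
    </requestFiltering>
  </security>
</system.webServer>

<!-- connection-level limits (applicationHost.config) -->
<system.applicationHost>
  <webLimits headerWaitTimeout="00:00:30" minBytesPerSecond="240" />
  <sites>
    <siteDefaults>
      <limits connectionTimeout="00:01:00" />
    </siteDefaults>
  </sites>
</system.applicationHost>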

What’s Next

The above are the simplest and most generic countermeasures to minimize the threat. Tuning the Web server configuration is effective to an extent, although there is always a tradeoff between limiting slow HTTP attacks and dropping legitimately slow requests. This means you can never prevent attacks simply using the above techniques.

Beyond configuring the web server, it’s possible to implement other layers of protection like event-driven software load balancers, hardware load balancers to perform delayed binding, and intrusion detection/prevention systems to drop connections with suspicious patterns.

However, today, it probably makes more sense to defend against specific tools rather than slow HTTP attacks in general. Tools have weaknesses that can be identified and exploited when tailoring your protection. For example, slowhttptest doesn’t change the user-agent string once the test has begun, and it requests the same URL in every HTTP request. If a web server receives thousands of connections from the same IP with the same user-agent requesting the same resource within a short period of time, it obviously hints that something is not legitimate. These kinds of patterns can be gleaned from the log files, therefore monitoring log files to detect attacks remains the most effective countermeasure.
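
As a rough example of that kind of log monitoring, with Apache’s combined log format the noisiest IP / request / user-agent combinations can be listed with standard shell tools; field positions depend on your LogFormat, so treat this as a sketch:

$ awk -F'"' '{ split($1, h, " "); print h[1], $2, $6 }' /var/log/apache2/access_log \
    | sort | uniq -c | sort -rn | head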

Testing Web Servers for Slow HTTP Attacks

Following the release of the slowhttptest tool, I ran benchmark tests of some popular Web servers. My testing shows that all of the observed Web servers (and probably others) are vulnerable to slow http attacks in their default configurations. Reports generated by the slowhttptest tool illustrate the differences in how the various Web servers handle slow http attacks.

Configurations Tested

Tests were run against the default, out-of-the-box configurations of the Web servers, which provides the most level playing field for comparison. And while most deployments customize their configuration, they will likely do so for reasons other than improving protection against slow HTTP attacks. Therefore it’s useful to understand how the default configurations relate to slow HTTP attack vulnerability.

The following HTTP servers were observed:

Web Server                      Operating System
Apache 2.2.19 (MPM prefork)     Mac OS X 10.7 Lion Server
nginx 1.0.0 (from MacPorts)     Mac OS X 10.7 Lion Server
lighttpd 1.4.28                 Ubuntu 11.04
IIS 7                           Windows 2008 Server

Observations

In addition to noting that all Web servers tested are vulnerable to slow http attacks, I drew some other generalizations about how different Web servers handle slow http attacks:

  • Apache, nginx and lighttpd wait for complete headers on any URL for requests without a message body (GET, for example) before issuing a response, even for requests with the verb FAKESLOWVERB.
  • Apache also waits for the entire body of requests with fake verbs before issuing a response with an error message.
  • For Apache, nginx and lighttpd, slow requests sent with fake verbs consume resources with the same success rate as requests sent with valid verbs, so the hacker doesn’t even need to bother finding a vulnerable URL.
  • Server administrators’ scripts typically query for particular expected values like method, or URL, or referer header, etc., but not for fake verbs. That means it is likely that slow http attacks using fake verbs or URLs can go unnoticed by the server administrator.
  • Each server except IIS is vulnerable to both slow header and slow message body attacks. IIS is vulnerable only to slow message body attacks.

However, there are some interesting differences in the results as well. The screenshots below, which show the graphical output of the slowhttptest tool, demonstrate how connection state changed during the tests, and illustrate how the various Web servers handle slow http attacks.

Apache MPM prefork:

Apache is generally the most vulnerable, and denial of service can be achieved with 355 connections on the system tested. Apache documentation indicates a 300-second timeout for connections, but my testing indicates this is not enforced.

In the test, the server accepted 483 connections and started processing 355 of them.  The 355 corresponds to RLIMIT_NPROC (max user processes), a machine-dependent value that is 709 on the machine tested, times MaxClients, whose default value in httpd.conf is 50%:  355 = 709 * 50%. 

The rest of the connections were accepted and backlogged. The limit for the backlog is set by the ListenBacklog directive (511 by default), but it is often capped at a smaller number by the operating system, 128 in the case of Mac OS X. The 483 connected connections shown in the graph correspond to the 355 being processed plus the 128 in the backlog.

One interesting side note: The Apache documentation says that ListenBacklog is the “maximum length of the queue of pending connections”, but a simple test shows that backlogged connections are actually accepted, i.e. the server sends a SYN-ACK back and the backlogged connections become ready for write operations on the client side, which is not normal behavior for pending connections. The Apache documentation is probably using “pending” to mean the internal connection state in Apache, whereas I would expect “pending” to refer to the state in the TCP stack, where “pending” means “waiting to be accepted”.

I terminated the test after 240 seconds, but the picture would be the same for a longer test. A properly configured client can keep connections open for hours, until the limit for header count or length is met. With such settings, DoS is achieved with N+1 connections, where N is the number specified by MaxClients.

While testing, I noticed that my httpd.conf had a TimeOut directive set to 300, and Apache 2.0 documentation says that:

The TimeOut directive currently defines the amount of time Apache will wait for three things:

    1. The total amount of time it takes to receive a GET request.
    2. The amount of time between receipt of TCP packets on a POST or PUT request.
    3. The amount of time between ACKs on transmissions of TCP packets in responses.

I understand from the above paragraph that the opened connections should be closed after the TimeOut interval, i.e. 300 seconds in the default configuration. However, I observed that the connection is closed only if no data arrives for 300 seconds, which means this is not an effective preventative measure against DoS. This behavior also indicates that any rules defined to throttle too many connections with a similar pattern within a short period of time are also ineffective: an attacker can initiate connections at a very low connection rate and get the same results, as each connection can be prolonged virtually forever.

nginx:

nginx is also vulnerable to slow http attacks, but it offers more controls than Apache.

Surprisingly, the number of initially accepted connections was 377, even though the default settings were worker_connections = 1024 and worker_processes = 1. The explanation is worker_rlimit_nofile, the maximum number of open file descriptors per worker, which governs the maximum number of connections the worker can accept and has a default value of 377 on Mac OS X.

As shown in the graph above, the server accepts as many connections as it can and leaves the rest pending. Due to some hardcoded timeout values, connections are closed after 70 seconds no matter how slowly the data is arriving. Nginx is therefore safer than default Apache, but it still gives an attacker a chance to achieve DoS for 70 seconds.

The default TCP timeout (75), which closes pending connections, is longer than the nginx timeout (65), which closes accepted connections. This means that nginx moves some pending connections to the accepted state after it times out the first set of accepted connections. This extends the length of time that a batch of slow connection requests can tie up the server. 

In any case, a client can always re-establish connections every 65 seconds to keep the server under DoS conditions.

lighttpd:

Lighttpd with its default configuration is vulnerable to both types of slow HTTP attack, which are fairly easy to carry out.

The default configuration allows a maximum of 480 connections to be accepted, as can be seen in the graph above, with the rest pending for 200 seconds and then closed by a timeout. Lighttpd has a useful attribute called server.max-read-idle, with a default value of 60, which closes a connection if no data is received before the timeout interval; but sending something to the socket every 59 seconds resets it, allowing the attacker to keep connections open for a long time.

Although the lighttpd forums indicate that a previously fixed issue protects against slow HTTP request handling, that fix only addresses a waste-of-memory problem, and hitting the limit of concurrently processed connections is still pretty easy.

IIS 7:

IIS 7 offers good protection against slow headers, but this protection does not extend to slow message bodies. Because IIS is architected differently from the other Web servers tested, the behavior it displays is also different.

IIS 7 accepts all connections, but does not consider them writeable until the headers section is received in full. Such connections require fewer resources from IIS, and therefore IIS can maintain a relatively large pool of them. In the default configuration, it is not possible to exhaust the pool with a single slowhttptest run, which is limited to 1024 connections on the systems I tested. However, it would be possible to launch multiple instances of slowhttptest to get around this limitation.

For requests with a slow message body, IIS’ protection is useless, as it’s possible to send the complete headers section but then slow down the message body section. In this case, the connections are transferred to IIS’ internal processing queue, which is limited in size to, don’t be surprised, 100 connections. Even though the screenshot shows 1000 connections, I determined experimentally that 100 requests with a slow message body are enough to achieve DoS.

Final Thoughts

Software configuration is all about tradeoffs, and it is normal to sacrifice one aspect for another. We see from the test results above that all default configuration files of the Web servers tested are sacrificing protection against slow HTTP DoS attacks in exchange for better handling of connections that are legitimately slow.

Because a lot of people are not aware of slow HTTP attacks, they will tend to trust the default configuration files distributed with the Web servers. It would be great if the vendors creating distribution packages for Web servers would pay as much attention to handling and minimizing the impact of slow attacks as the Web servers’ configuration allows. Meanwhile, if you are running a Web server, be careful and always test your setup before relying on it for production use.

New Open-Source Tool for Slow HTTP DoS Attack Vulnerabilities

Slow HTTP attacks are denial-of-service (DoS) attacks that rely on the fact that the HTTP protocol, by design, requires a request to be completely received by the server before it is processed. If an HTTP request is not complete, or if the transfer rate is very low, the server keeps its resources busy waiting for the rest of the data. When the server’s concurrent connection pool reaches its maximum, this creates a denial of service. These attacks are problematic because they are easy to execute, i.e. they can be executed with minimal resources from the attacking machine.

Inspired by Robert “Rsnake” Hansen’s Slowloris and Tom Brennan’s OWASP slow post tools, I started developing another open-source tool, called slowhttptest, available with full documentation at https://github.com/shekyan/slowhttptest. Slowhttptest opens and maintains customizable slow connections to a target server, giving you a picture of the server’s limitations and weaknesses. It includes features of both of the above tools, plus some additional configurable parameters and nicely formatted output.

Slowhttptest is configurable to allow users to test different types of slow http scenarios. Supported features are:

  • slowing down either the header or the body section of the request
  • any HTTP verb can be used in the request
  • configurable Content-Length header
  • random size of follow-up chunks, limited by optional value
  • random header names and values
  • random message body data
  • configurable interval between follow-up data chunks
  • support for SSL
  • support for host names that resolve to IPv6 addresses
  • verbosity levels in reporting
  • connection state change tracking
  • variable connection rate
  • detailed statistics available in CSV format and as a chart generated as HTML file using Google Chart Tools

How to Use

The tool works out of the box with default parameters, which are harmless and most likely will not cause a denial of service.

Type:

$ PREFIX/bin/slowhttptest

and the test begins with the default parameters.
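
For a more targeted run, the parameters can be spelled out explicitly. For example, a slow-headers test might look like the following; the URL and the numbers are placeholders to adapt for your own test server:

$ slowhttptest -c 1000 -H -g -o slow_headers_stats -i 10 -r 200 \
    -t GET -u https://target.example.com/index.html -x 24 -p 3

Here -c is the number of connections, -H selects the slow headers test (-B selects slow message body), -i is the interval between follow-up data in seconds, -r is the connection rate per second, -x caps the length of follow-up data, -p is the probe timeout used to decide whether the server is still responsive, and -g with -o writes the CSV and HTML statistics with the given file prefix.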

Depending on which test mode you choose, the tool will send either slow headers:

GET / HTTP/1.1CRLF
Host: localhost:80 CRLF
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2)CRLF
.
. n seconds
.
X-HMzV2bwpzQw9jU9fGjIJyZRknd7Sa54J: u6RrIoLRrte4QV92yojeewiuDa9BL2N7CRLF
.
. n seconds
.
X-nq0HRGnv1W: T5dSLCRLF
.
. n seconds
.
X-iFrjuN: PdR7Jcj27PCRLF
.
.
.

or slow message bodies:

POST / HTTP/1.1CRLF
Host: 10.10.25.116:443CRLF
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:5.0.1) Gecko/20100101
Firefox/5.0.1CRLF
Content-Length: 8192CRLF
Connection: closeCRLF
Referer: http://code.google.com/p/slowhttptest/CRLF
Content-Type: application/x-www-form-urlencodedCRLF
Accept: text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5CRLF
CRLF
foo=bar
.
. n seconds
.
&rjP8=du7FKMe
.
. n seconds
.
&93zgIx=jgfpopJ
.
.
.

This is repeated until the server closes the connection or the test hits the specified time limit.

Depending on the verbosity level selected, the slowhttptest tool logs anything from heartbeat messages every 5 seconds to a full traffic dump. Output is available either in CSV format or in HTML for interactive use with Google Chart Tools.

Note: Care should be taken when using this tool to avoid inadvertently causing denial of service against your servers. For production servers, QualysGuard Web Application Scanner will perform passive (non-intrusive) automated tests that will indicate susceptibility to slow http attacks without the risk of causing denial of service.

Example Test and Results

The HTML screenshot below shows the results of running slowhttptest against a test server in a test lab. In this scenario, the tool opens 1000 connections at a rate of 200 connections per second, and the server was able to concurrently process only 377 connections, leaving the remaining 617 connections pending. Denial of service was achieved within the first 5 seconds of the test, and lasted 60 seconds, until the server timed out some of the active connections. At this point, the server transferred another set of connections from pending state to active state, thus causing DoS again, until the server timed out those connections.


Figure 1: Sample HTML output of slowhttptest results.

As is shown in the above test, the slowhttptest tool can be used to test a variety of different slow http attacks and to understand the effects they will have on specific server configurations. By having a visual representation of the server’s state, it is easy to understand how the server reacts to slow HTTP requests. It is then possible to adjust server configurations as appropriate. In follow-up posts, I will describe some detailed analysis of different HTTP servers’ behavior on slow attacks and mitigation techniques.

Any comments are highly appreciated, and I will review all feature requests posted on the project page at https://github.com/shekyan/slowhttptest. Many thanks to those who are contributing to this project.

Update, August 26, 2011:

Version 1.1 of slowhttptest includes a new test for the Apache range header handling vulnerability, also known as the “Apache Killer” attack. A usage example can be found at https://github.com/shekyan/slowhttptest/wiki/ApacheRangeTest.

Update, January 5, 2012:

Version 1.3 of slowhttptest includes a new test for the Slow Read DoS attack.

Web Application Scanning & CSRF

Interest in the QualysGuard Web Application Scanning (WAS) module has been growing since its new UI was demonstrated last week at BlackHat. Along with such interest come questions about how the scanner works. The ultimate goal for WAS is to provide accurate, scalable testing for the most common, highest profile vulnerabilities (think of SQL injection and XSS) so that manual testing can skip the tedious and time-consuming aspects of an app review and focus on complex vulns that require brains rather than RAM.

One complex vuln in particular is CSRF. Automated, universal CSRF detection is a tough challenge, which is why we try to solve the problem in pieces rather than all at once. It’s the type of challenge that keeps web scanning interesting. Here’s a brief look at the approach we’ve taken to start bringing CSRF detection into the realm of automation.

First, the test assumes an authenticated scan. If the scan is not given credentials, then the tests won’t be performed. Also, tests are targeted to specific manifestations of CSRF rather than the broad set of attacks possible from our friendly sleeping giant.

Tests roughly follow these steps. Fundamentally, we’re trying to model an attack rather than make inferences based on pattern matching (a rough code sketch of the form-swap test appears after the steps below):

1. Identify forms with a "session context". This is a weaker version of (but hopefully a subset of) a "security context", because lots of times security requires knowledge about the boundaries within an app and the authorized actions of a user. This knowledge is hard to come by automatically. Nevertheless, some utility can be had by looking at forms with the following attributes:

  • Only available to an authenticated user.
  • Are not "trivial" such as search forms or logout buttons.
  • Have an observable effect, either on the session or the HTTP response. (Hint: Here’s where the automated scan becomes narrow, meaning prone to false negatives.)

2. Set up two separate sessions for the user (i.e. login twice). Keep their cookie jars apart. We’ll refer to the sessions as Aardvark and Bobcat (or A & B or Alpha & Bravo, etc.). Remember, this is for a single user.

3. Obtain a form for session Aardvark.

4. Obtain a form for session Bobcat.

5. Swap the forms between the two sessions and submit. (Crossing the streams, like Egon told you not to do.)

  • The assumption is that any CSRF tokens in Aardvark’s form are tied to the session cookie(s) used by Aardvark and Bobcat’s belong to Bobcat. Things should blow up if the tokens and session don’t match.

6. Examine the "swapped" responses.

  • If the form’s fields never change between sessions, then this is a good indicator that no CSRF token is present. You have to run tests with a browser in order to make sure there’s no JavaScript dynamically changing the form when the page loads or the form is submitted.
  • If the response has a clear indication of error, then the app is more likely to be protected from CSRF. The obvious error is something like, "Invalid CSRF token". Sadly, the world is not unicorns and rainbows for automated scanning and errors may not be so obvious or point so directly to CSRF.
  • If the response is similar to the one received from the original request, then it appears that the form is not coupled to a user’s session. This is an indicator that the form is more probably vulnerable to CSRF.
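
For illustration, here is a very rough Python sketch of the form-swap test described in the steps above, using the requests library; the URLs, field names, and login flow are hypothetical, and as noted above a real test needs a browser so that JavaScript-based token handling isn’t missed:

import requests                      # third-party HTTP client, assumed available
from html.parser import HTMLParser

BASE = "https://app.example.com"     # hypothetical application under test
FORM_URL = BASE + "/profile/edit"    # hypothetical form with a "session context"

class FormFields(HTMLParser):
    """Collect name/value pairs of <input> fields from an HTML page."""
    def __init__(self):
        super().__init__()
        self.fields = {}
    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            if a.get("name"):
                self.fields[a["name"]] = a.get("value") or ""

def get_form_fields(session):
    parser = FormFields()
    parser.feed(session.get(FORM_URL).text)
    return parser.fields

def login(username, password):
    """One authenticated session with its own cookie jar."""
    s = requests.Session()
    s.post(BASE + "/login", data={"user": username, "pass": password})
    return s

# Steps 2-4: two independent sessions for the same user, and a form from each.
aardvark, bobcat = login("alice", "secret"), login("alice", "secret")
form_a, form_b = get_form_fields(aardvark), get_form_fields(bobcat)

# Step 5: swap the forms between the sessions and submit.
swapped = bobcat.post(FORM_URL, data=form_a)   # Aardvark's fields, Bobcat's cookies
control = bobcat.post(FORM_URL, data=form_b)   # Bobcat's fields, Bobcat's cookies

# Step 6: if the swapped submission behaves just like the control one,
# the form does not appear to be tied to the session, which suggests CSRF exposure.
if swapped.status_code == control.status_code and swapped.text == control.text:
    print("No session coupling observed: form is probably exposed to CSRF")
else:
    print("Responses differ: the form may be validating a per-session token")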

What it won’t do, because these techniques are noisy and unreliable (as opposed to subtle and quick to anger):

  • Look for hidden form fields with names or values that match CSRF tokens. If an obvious token is present, that doesn’t mean the app is actually validating it.
  • Use static inspection of the form, DOM, or HTML to look for any examples of CSRF tokens. Why look for text patterns when you’re trying to determine a behavior? Not everything is solved by regexes. (Which really is unfortunate, by the way.)
  • Attempt to evaluate the predictability of anything that looks like a CSRF token.
  • Submit forms without loading the complete page and its resources in a browser; otherwise JavaScript-based countermeasures would not be noticed.

Nor will it demonstrate the compounding factor of CSRF on other vulnerabilities like XSS. That’s something that manual pen-testing should do. In other words, WAS is focused on identifying vulns (it should find an XSS vuln, but it won’t tie the vuln to a CSRF attack to demonstrate a threat). Manual pen-testing more often focuses on how deep an app can be compromised, and the real risks associated with that.

What it’ll miss:

  • Situations where session cookie(s) are static or relatively static for a user. This impairs the "swap" test.
  • CSRF that can affect unauthenticated users in a meaningful way. This is vague, but as you read more about CSRF you’ll find that some people consider any forgeable action to be a vuln. This speaks more to the issue of evaluating risk. You should be relying on people to analyze risk, not tools.
  • CSRF that affects the user’s privacy. This requires knowledge of the app’s policy and the impact of the attack.
  • Forms whose effect on a user’s security context manifests in a different response, or in a manner that isn’t immediately evident.
  • CSRF tokens in the header, which might lead to false positives.
  • CSRF vulns that manifest via links rather than forms. Apps put all kinds of functionality in hrefs rather than explicit form tags.
  • Other situations where we play games of anecdotes and what-ifs.

What we are trying to do:

  • Reduce noise. Don’t report vulns for the sake of reporting a vuln if no clear security context or actionable data can be provided.
  • Provide a discussion point so we can explain the benefits of automated web scanning and point out where manual follow-up will always be necessary.
  • Learn how real-world web sites implement CSRF in order to find common behaviors that might be detectable via automation. You’d be surprised (maybe) at how often apps have security countermeasures that look nothing like OWASP recommendations and, consequently, fare rather poorly.
  • Experiment with pushing the bounds of what automation can do, while avoiding hyperbolic claims that automation solves everything.

The current state of CSRF testing in WAS should be relied on as a positive indicator (vuln found, vuln exists) more so than a negative indicator (no vuln found, no vulns exist). That’s supposed to mean that a CSRF vuln reported by WAS should not be a false positive and should be something that the app’s devs need to fix. It also means that if WAS doesn’t find a vuln then the app may still have CSRF vulns. For this particular test a clean report doesn’t mean a clean app; there’re simply too many ways of looking at the CSRF problem to tackle it all at once. We’re trying to break the problem down into manageable parts in order to understand what approaches work. We want to hear your thoughts and feedback on this.

Identifying Slow HTTP Attack Vulnerabilities on Web Applications

Slow HTTP attacks rely on the fact that the HTTP protocol, by design, requires requests to be completely received by the server before they are processed. If an http request is not complete, or if the transfer rate is very low, the server keeps its resources busy waiting for the rest of the data. If the server keeps too many resources busy, this creates a denial of service.

These types of attack are easy to execute because a single machine is able to establish thousands of connections to a server and generate thousands of unfinished HTTP requests in a very short period of time using minimal bandwidth.

Due to implementation differences among various HTTP servers, two main attack vectors exist:

  • Slowloris: Slowing down HTTP headers, making the server wait for the final CRLF, which indicates the end of the headers section;
  • Slow POST: Slowing down the HTTP message body, making the server wait until all content arrives according to the Content-Length header; or until the final CRLF arrives, if HTTP 1.1 is being used and no Content-Length was declared.

The scary part is that these attacks can just look like requests that are taking a long time, so it’s hard to detect and prevent them by using traditional anti-DoS tools. Recent rumors indicate these attacks are happening right now: CIA.gov attacked using slowloris.

QualysGuard Web Application Scanner (WAS) uses a number of approaches to detect vulnerability to these attacks.

Slowloris Detection

To detect a slow headers (a.k.a. Slowloris) attack vulnerability (Qualys ID 150079), WAS opens two connections to the server and requests the base URL provided in the scan configuration.

The request sent to the first connection consists of a request line and one single header line but without the final CRLF, similar to the following:

GET / HTTP/1.1 CRLF

Connection: keep-alive CRLF

The request sent to the second connection looks identical to the first one, but WAS sends a follow-up header line some interval later to make the HTTP server think the peer is still alive:

Referer: http://www.qualys.com/products/qg_suite/was/ CRLF

Currently that interval is approximately 10 seconds plus the average response time during the crawl phase.

WAS considers the server platform vulnerable to a slowloris attack if the server closes the second connection more than 10 seconds later than the first one. In that case, the server prolonged its internal timeout value because it perceived the connection to be slow. Using a similar approach, an attacker could occupy a resource (thread or socket) on that server for virtually forever by sending a byte per T – 1 (or any random value less than T), where T is the timeout after which the server would drop the connection.

WAS does not report the server to be vulnerable if it keeps both connections open for the same long period of time (more than 2 minutes, for example), as that would be a false positive if the target server were IIS (which has protection against slow header attacks, but is less tolerant of real slow connections).
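
For illustration only, a bare-bones version of that two-connection probe can be sketched in Python with raw sockets; the host, the extra header name, and the thresholds are placeholders, and this is not the WAS implementation:

import socket
import time

HOST, PORT = "target.example.com", 80    # hypothetical target
FEED_INTERVAL = 10                       # seconds between follow-up header lines
TEST_CAP = 300                           # give up after 5 minutes

def lifetime(drip_feed):
    """Send an unfinished request and measure how long the server keeps the socket open."""
    s = socket.create_connection((HOST, PORT))
    s.sendall(b"GET / HTTP/1.1\r\nConnection: keep-alive\r\n")   # note: no final CRLF
    s.settimeout(FEED_INTERVAL)
    start = time.time()
    try:
        while time.time() - start < TEST_CAP:
            try:
                if s.recv(1) == b"":             # empty read: server closed the socket
                    break
            except socket.timeout:
                if drip_feed:                    # pretend we are still composing headers
                    s.sendall(b"X-Probe: still-here\r\n")
    except OSError:
        pass                                     # reset/refused also counts as closed
    finally:
        s.close()
    return time.time() - start

idle = lifetime(drip_feed=False)
fed = lifetime(drip_feed=True)

# Mirror of the heuristic above: if drip-feeding header lines keeps the connection
# open noticeably (>10 s) longer than leaving it idle, the server extends its
# timeout for slow clients and looks exposed to slowloris-style attacks.
if fed > idle + 10:
    print("likely vulnerable: idle=%.0fs, drip-fed=%.0fs" % (idle, fed))
else:
    print("no obvious slow-headers exposure: idle=%.0fs, drip-fed=%.0fs" % (idle, fed))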

Slow POST Detection

To detect a slow POST (a.k.a. Are-You-Dead-Yet) attack vulnerability (QID 150085), WAS opens two other connections, and uses an action URL of a form it discovered during the crawl phase that doesn’t require authentication.

The request sent to the first connection looks like the following:

POST /url_that_accepts_post HTTP/1.1 CRLF

Host: host_to_test:port_if_not_default CRLF

User-Agent: Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0;) CRLF

Connection: close CRLF

Referer: http://www.qualys.com/products/qg_suite/was/ CRLF

Content-Type: application/x-www-form-urlencoded CRLF

Content-Length: 512 CRLF

Accept: text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 CRLF

CRLF

foo=bar

Similar to the slow headers approach, WAS sends an identical request to the second connection, and then 10 seconds later sends the following (again without the final CRLF):

alpha=beta

WAS considers the target vulnerable if any of the following conditions are met:

  • The server keeps the second connection open 10 seconds longer than the first one, or
  • The server keeps both connections open for more than 120 seconds, or
  • The server doesn’t close both connections within a 5 minute period (as WAS limits slow tests to 5 minutes only).

WAS assumes that if it is possible to either keep the connection open with an unfinished request for longer than 120 seconds or, even better, prolong the unfinished connection by sending a byte per T – 1 (or any random value less than T), then it’s possible to acquire all server sockets or threads within that interval.

WAS also performs a supplemental test to detect unhealthy behavior in handling POST requests, by sending a simple POST request to the base URI with a relatively large message body (65543 bytes). The content of the body is a random set of ASCII characters, and the content type is set to application/x-www-form-urlencoded. WAS assumes that if the server blindly accepts that request, i.e. responds with 200, then it gives an attacker more opportunity to prolong the slow connection by sending one byte per T – 1. Multiplying 65543 by T – 1 would give you the length of time an attacker could keep that connection open. QID 150086 is reported on detection of that behavior.

Mitigation

Tests performed by WAS are passive and as non-intrusive as possible, which minimizes the risk of taking down the server. But because of the possibility of false positives, care should be taken, especially if the HTTP server or IPS (Intrusion Prevention System) is configured to change data processing behavior once a certain number of suspicious requests are detected. If you are interested in active testing, which might take your server down, you can try one of these available tools:

* slowloris

* r-u-dead-yet

* OWASP HTTP Post Tool (tests against slow headers as well)

Mitigation of slow HTTP attacks is platform specific, so it’d be nice for the community to share mitigation techniques in the comments below. I’ll post an update with information on some of those platforms, as well as general recommendations that can be extrapolated to particular platforms.

Update: Learn about New Open-Source Tool for Slow HTTP DoS Attack Vulnerabilities and download the slowhttptest tool.