Qualys Blog


TLS Renegotiation and Denial of Service Attacks

A group of hackers known as THC (The Hacker's Choice) last week released an interesting DoS tool that works at the SSL/TLS layer. The tool exploits the fact that, when a new SSL connection is being negotiated, the server will typically spend significantly more CPU resources than the client. Thus, if you request many new SSL connections per second, you may end up consuming all of the server's CPU.

The issue abused by the tool is not new, but what is new is that we now have a well-publicised working exploit, and that always makes you pay attention. In addition, the tool uses the renegotiation feature, which means that it can force a server to perform many expensive cryptographic operations over a single TCP connection. It's not clear whether relying on renegotiation helps with the DoS attack (there's a very good analysis of the trade-offs on Eric Rescorla's blog), but the fact that external DoS mitigation tools (e.g., rate limiting setups) see only one TCP connection certainly helps with avoiding detection.

But that’s only if your server supports client-initiated renegotiation. If it does not, anyone wishing to perform a DoS attack against the SSL layer will have to fall back to using one TCP connection for one SSL connection. IIS, for example, does not support client-initiated renegotiation. Apache used to, but changed its behaviour when implementing RFC 5746 (which fixed the TLS Authentication Gap problem). Even if you depend on a product that does support client-initiated renegotiation, chances are you can easily disable that feature. And, when you do, you are not going to miss it (unlike server-initiated renegotiation, which some sites that require client certificates might need).

To help you assess your systems for this weakness, we have updated the SSL Labs assessment tool to test not only whether secure renegotiation is supported (which we've been testing for some time now), but also whether secure client-initiated renegotiation is enabled. Previously we only tested for insecure client-initiated renegotiation.

The sensible thing to do is to check for client-initiated renegotiation support in your servers, and disable it where possible. Although that won’t substantially help you overall (defending against DoS attacks is notoriously difficult and expensive), it will harden your defences against this particular technique.
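The cost asymmetry the attack exploits comes mostly from the public-key operations in the handshake: with RSA key exchange, the server performs the expensive private-key operation while the client performs the cheap public-key one. As a rough illustration of why the two sides do unequal work, here is textbook RSA with tiny, deliberately insecure toy numbers (nowhere near real key sizes):

```python
# Toy textbook RSA (p=61, q=53). This illustrates only why the
# private-key operation (server side) costs more than the public-key
# operation (client side): the private exponent d has far more bits
# than the typical small public exponent e. NOT real cryptography.
p, q = 61, 53
n = p * q          # modulus: 3233
e = 17             # public exponent: small, so encryption is cheap
d = 2753           # private exponent: many more bits, so decryption is costly

m = 65                       # toy "pre-master secret"
c = pow(m, e, n)             # client-side operation (cheap)
assert pow(c, d, n) == m     # server-side operation recovers m (expensive)

# The modular-exponentiation work grows with the exponent's bit length:
assert d.bit_length() > e.bit_length()
```

With real key sizes the gap is dramatic, which is why a client can cheaply ask for handshakes that the server must pay dearly to complete.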

Mitigating the BEAST attack on TLS

Update (19 March 2013): This blog post advises using RC4 to mitigate the BEAST attack, but RC4 has recently been discovered to be weaker than previously known. At this point the attacks against RC4 are still not practical. The only fully safe choice at the moment is the AES-GCM suites, supported only in TLS 1.2. You can find out more in this new blog post.


During the summer, rumours about a new attack against SSL started circulating. Then Opera released a patch, but made no comment about what it was patching. Eventually enough information leaked out that some smart people figured out what the attack was about. What remained unknown was the exact technique used in the proof of concept, and that was eventually explained in Thai's blog post. For a comprehensive overview of related links, go to Thierry Zoller's blog post on BEAST.

As it turns out, the attack itself was conceived years ago, deemed impractical, but it was nevertheless fixed in TLS 1.1. The new attack technique introduced a few optimizations to make it practical.

In terms of mitigation, I expect this problem will be largely addressed on the client side, despite a potential compatibility problem that may cause some TLS sites to stop working. The only reliable way to defend against BEAST is to prioritise RC4 cipher suites, as proposed by PhoneFactor.

Just as an example, here’s one way to do the above in Apache:

SSLHonorCipherOrder On
SSLCipherSuite RC4-SHA:HIGH:!ADH

Not everyone likes RC4, even though there is little to no evidence that it is insecure in the context of SSL/TLS. If your server supports TLS 1.1+ you can try the approach recommended by Steve Caligo:

SSLHonorCipherOrder On
SSLCipherSuite ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH

The idea is that you put a few TLS 1.2 cipher suites first so that they can be picked up by TLS 1.2 clients, which are not vulnerable, followed by RC4 for TLS 1.0 clients.

Now that I’ve discussed what works as mitigation, let’s look at a few approaches that do not work:

  • Supporting TLS 1.1+ server-side is a good start, but does not amount to much because very few clients support newer versions of the protocol at this time. And even with TLS 1.1+ support client-side, nothing prevents a MITM from forcing a protocol downgrade back to TLS 1.0. (For a discussion of defence techniques against downgrade attacks, see this thread on the TLS WG mailing list.)
  • Enabling the empty fragment technique server-side (details for OpenSSL here) does not work either. TLS 1.0 uses two initialisation vectors (IVs), one each for client- and server-side of the communication channel. The vulnerability exploited by BEAST is on the client-side and cannot be addressed by making server-side changes to how data is sent.
  • Compression is said to make the attack impossible, but, as with TLS 1.1+, the support for it client-side is inconsistent.

Update (20 Jan 2012): In testing OpenSSL 1.0.1-beta2, which came out yesterday, I realised that it will happily negotiate AES-CBC-SHA256 even on a TLSv1.0 connection. So I removed it from the recommendation, replacing it with two other TLSv1.2 cipher suites.

SSL Labs: Announcing launch of two Convergence notaries

Convergence is Moxie Marlinspike's attempt to introduce fresh thinking into the debate about PKI, certificate authorities, and trust. A hint of what was in the works was in a blog post published in April (SSL And The Future Of Authenticity); the project was launched at Black Hat US in August. Moxie’s talk (here’s the video on YouTube) was entertaining and insightful.

Moxie advertises the project as a way of dispensing with certificate authorities ("An agile, distributed, and secure strategy for replacing Certificate Authorities"). At first glance that's true. You get a browser add-on (only Firefox for the time being) that, once activated, completely replaces the existing CA infrastructure. Whenever you visit an SSL site your browser will talk to two or more remote parties (notaries) and ask them to check the site's certificate for you. If they all see the same certificate, you decide to trust the site.

But when you dig deeper into the project, you realise that it consists of two parts. The first, and more important, part is the ability to delegate trust decisions from your browser to another party that’s remote to you. That means that you are no longer forced to accept the decisions of the browser vendors, but you can make your own. That ability is, for me, the most thrilling aspect of the project.

The second part of the project is the current backend implementation that makes trust decisions. The approach is great in its simplicity: if you can see the same certificate from several different locations you conclude that it must be the correct certificate. We mustn’t rush, however. We’ve just been given the ability to choose whom to trust, and it’s too soon to settle on any one implementation. I am far more interested in experimenting with different approaches, to see what works and what does not.

To that end, it makes me very happy to announce that we (Qualys) have decided to support Convergence by financing and running two notary servers. While it’s not yet clear if Convergence can succeed (there are many technological and adoption challenges to conquer), we want to play a part in it and help it succeed.

Finally, here are the links to the notary servers (one of which is in the US and the other in Europe):

Note: To use the above links, you have to have the Convergence plugin installed. After that, all you need to do is click on the links and the notaries will become part of your configuration. Please report any problems to convergence-notary@qualys domain name.

Private assessment option added to the SSL server test

Everyone I know likes the SSL assessment tool on the SSL Labs web site, but many dislike the fact that their domain name may end up on one of the boards that display recent test history. I am assuming most wouldn’t mind being on the "Recent Best-Rated" one, but you never know before the test how it’s going to turn out.

That’s why we now support the private assessment option. If you tick the checkbox underneath the domain name field, the test results will not be publicly revealed. Not only that, but the results will remain hidden for as long as the results remain cached.

If you intend to send links to hidden assessment results, make sure the URL contains the "hiddenResults=on" bit, which ensures that a new hidden test, rather than a public one, is run after the earlier results expire from the cache.
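For example, such a link might be put together as follows (the host and path here are placeholders; the point is only the "hiddenResults=on" query parameter):

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Hypothetical assessment URL; the real path may differ.
base = "https://www.ssllabs.com/ssldb/analyze.html"
url = base + "?" + urlencode({"d": "example.com", "hiddenResults": "on"})

# Confirm the parameter survives in the link you share.
query = parse_qs(urlsplit(url).query)
assert query["hiddenResults"] == ["on"]
assert query["d"] == ["example.com"]
```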

Disabling SSL renegotiation is a crutch, not a fix

In the days that followed the discovery of SSL/TLS Authentication Gap, some sites (those that did not need renegotiation) were able to deal with the problem by disabling renegotiation in server code. With no support for renegotiation, gone was the danger of exploitation. Good for them.

The sites that did need renegotiation had to wait, first for the TLS working group to solve the issue on the protocol level, and then for their SSL library (or web server) vendors to support the enhancement. The TLS working group did a great job negotiating the fix. As for the vendors, some implemented the new feature quickly, some dragged their feet a little, and some (Debian) seem to refuse to fix the problem, leaving their users vulnerable.

To sum it up, today, almost a year after the initial public discovery, we have some servers that are still vulnerable, some that refuse to support renegotiation, and some that support the new standard for secure renegotiation.

So where is the problem, you might ask? If disabling renegotiation prevents exploitation, that’s surely a good thing? Well, it depends on how you look at things. Try to look at the problem through the eyes of a browser developer. I was actually prompted to write about this problem by Yngve Nysæter Pettersen, who’s part of Opera’s security group. Opera wants to protect its users, and for that to be possible they need to know if a particular server supports secure renegotiation. If a server does, Opera can happily renegotiate whenever necessary. But if a server does not support secure renegotiation, you can make an argument that Opera should refuse any renegotiation attempts.

The servers that support secure renegotiation indicate so during the SSL handshake phase, and everyone’s happy and secure. The issue is with the servers that disable renegotiation, because they provide no indication of their security status. Some are insecure, while some aren’t. Without knowing, browsers can’t do anything. They can perhaps only inconvenience users and force them to manually configure protection levels.

While it is possible to test for insecure renegotiation (SSL Labs does it), the test is indicative but not conclusive — there is no way to test for server-initiated renegotiation. Besides, it’s not reasonable to expect browsers to test every SSL site they encounter.

My point is that those who disabled SSL renegotiation must nevertheless implement the proper fix as soon as it becomes available for their platform. Patching is slow enough as it is, and we don't need any further distractions to slow us down.

Qualys SSL Labs releases raw data from the Internet SSL survey

About two months ago, Qualys SSL Labs published the results of an Internet-wide SSL survey. We said that we would make the raw data available, and today we are following up on that promise. (By the way, we realize that two months is a long time, but we couldn't complete the process faster on this occasion. We hope to make future releases pretty much as soon as we obtain the data. As you may remember, our plan is to make the survey a quarterly event from 2011.)

The raw data contains the SSL assessment results of about 850,000 domain names (out of about 120M we inspected). The main file (120 MB compressed, 800 MB uncompressed) is a dump of our PostgreSQL database in CSV format. We include in the download a simple PHP script that iterates through all the rows, which means that you can consume the data directly. Alternatively, you can put the data back into the database and use SQL to run ad-hoc queries (we provide the schema along with the import instructions).

The database schema contains 63 fields that generally parallel the information you would obtain from the SSL Labs online test. The complete original certificate chain is included, which is handy if you want to look into the aspects we didn’t. We chose not to release certain sensitive data: the information on the low entropy private keys, renegotiation support, and HTTP server signatures was removed.
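If you prefer not to use the bundled PHP script, iterating over the CSV dump row by row is straightforward in any language. A minimal Python sketch, using a made-up two-row sample since the actual field names and order come from the published schema:

```python
import csv
import io

# Hypothetical sample in the dump's CSV format; substitute
# open("dump.csv") and the real 63-field schema when working
# with the actual data.
sample = (
    "example.com,true,2048\n"
    "example.org,false,1024\n"
)

rows = list(csv.reader(io.StringIO(sample)))
for domain, secure_reneg, key_bits in rows:
    print(domain, secure_reneg, int(key_bits))

assert len(rows) == 2
assert rows[0][0] == "example.com"
```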

This is what you need to do to obtain the data:

  1. First, make sure that our terms and conditions are acceptable to you. At the core, we use the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported licence, but there are a few additional requirements. For example, we ask the obvious — that you don’t use the data for illegal activities. The other requirements are just common sense. (Please do read the entire file, however.)
  2. Second, send us an email (username "ivanr"; domain name "webkreator.com"), introduce yourself, and tell us how you intend to use the data. We will then send you back the download instructions. We need this second step to give us an idea of whether the data is used, and how.

Update: We are removing the certificate chain data from the database until we confirm that we are legally allowed to redistribute it. If you need such data in the meantime, retrieve it directly from the servers.

Podcast: Ivan talks about the Qualys SSL Labs Internet-wide SSL survey and the recent release of the raw data from the survey.

Internet SSL Survey 2010 Results

We are happy to announce the results of the first Internet SSL Survey conducted by SSL Labs. The 2010 version is the result of several months of full-time work and provides a detailed analysis of how SSL is deployed on the Internet.

Attachments

Internet SSL Survey 2010 (3.0 MB)

SSL and TLS Authentication Gap vulnerability discovered

A serious vulnerability has been discovered in the way web servers utilise SSL (and TLS, up to the most recent version, 1.2), effectively allowing an active man-in-the-middle attacker to inject arbitrary content into an encrypted data stream. Both the Apache web server and Microsoft IIS have been found to be vulnerable.

The problem is with the renegotiation feature, which allows one part of an encrypted connection (the part taking place before renegotiation) to be controlled by one party, while the other part (the part taking place after renegotiation) is controlled by another. A MITM attacker can open a connection to an SSL server, send some data, request renegotiation and, from that point on, forward to the SSL server the data coming from a genuine user. One could argue that this is not a fault in the protocols, but it is certainly a severe usability issue: the protocols do not ensure continuity before and after renegotiation.

To make things worse, web servers will combine the data they receive prior to renegotiation (which is coming from an attacker) with the data they receive after renegotiation (which is coming from a victim). This issue is the one affecting the majority of SSL users.

The following example demonstrates how the flaw can be exploited by an attacker to send an arbitrary request using the authentication credentials of a victim. The red parts are sent by the attacker and the blue parts are sent by the victim.

GET /path/to/resource.jsp HTTP/1.0
Dummy-Header:
GET /index.jsp HTTP/1.0
Cookie: sessionCookie=Token

The good news is that, although the attacker can execute an arbitrary request, he will not be able to retrieve the corresponding response. On the negative side, the client will see something different from what she requested.

You can see that GET attacks are essentially trivial to execute. To date, no one has claimed a successful execution of a POST request using this flaw. Until someone does, an application that only makes changes in response to POST requests will probably not be vulnerable. Further, an application not vulnerable to CSRF attacks will probably be safe too, because the attacker won't be able to generate or predict the token required for the request to go through.
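To see mechanically why the Dummy-Header trick works: the attacker's prefix ends without a line terminator, so the victim's own request line lands in that header's value and is neutralised, while the victim's Cookie header survives intact. A small sketch of how a naive server-side parser sees the spliced bytes:

```python
# Attacker's prefix ends mid-line, with no CRLF after "Dummy-Header: ".
attacker = "GET /path/to/resource.jsp HTTP/1.0\r\nDummy-Header: "
# Victim's genuine request is appended verbatim after renegotiation.
victim = "GET /index.jsp HTTP/1.0\r\nCookie: sessionCookie=Token\r\n\r\n"

raw = attacker + victim
lines = raw.split("\r\n")

request_line = lines[0]
headers = {}
for line in lines[1:]:
    if not line:                       # blank line ends the header block
        break
    name, _, value = line.partition(":")
    headers[name.strip()] = value.strip()

# The server executes the attacker's request line...
assert request_line == "GET /path/to/resource.jsp HTTP/1.0"
# ...the victim's request line is swallowed as a harmless header value...
assert headers["Dummy-Header"] == "GET /index.jsp HTTP/1.0"
# ...and the victim's credentials are attached to the attacker's request.
assert headers["Cookie"] == "sessionCookie=Token"
```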

Mitigation options:

  1. If you can, disable renegotiation. There isn’t normally a configuration option to do this, but patches are being developed and will be available soon. The majority of web sites do not use renegotiation so disabling it won’t be a problem. Those that do will need to make changes to their sites to make them work without it.
  2. Use a web application firewall to monitor the contents of all request headers to spot what looks like an embedded HTTP request line. The good news is that the embedded request line will not be obfuscated, making it easier to detect. I do not believe this approach can help against the client certificate authentication bypass, though.
  3. If you can, monitor all connections that make use of the renegotiation feature. That won’t help you if renegotiation is an integral feature of your web site, but it may do if it is rarely used.
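Option 2 above can be sketched as a simple scan of header values. The pattern below is illustrative rather than a production rule (a real WAF would also have to cope with folding, encodings, and other evasions), but it shows how unobfuscated the embedded request line is:

```python
import re

# A request line embedded in a header value looks like
# "METHOD SP request-target SP HTTP/1.x".
REQ_LINE = re.compile(
    r"\b(GET|POST|HEAD|PUT|DELETE|OPTIONS|TRACE)\s+\S+\s+HTTP/1\.[01]\b"
)

def suspicious(headers: dict) -> bool:
    # Flag any header whose value contains an embedded request line.
    return any(REQ_LINE.search(value) for value in headers.values())

assert suspicious({"Dummy-Header": "GET /index.jsp HTTP/1.0"})
assert not suspicious({"Cookie": "sessionCookie=Token"})
```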

Further information: