Is HTTP Public Key Pinning Dead?

Ivan Ristic

Last updated on: December 21, 2022

I have a confession to make: I fear that HTTP Public Key Pinning (HPKP, RFC 7469)—a standard that was intended to bring public key pinning to the masses—might be dead. As a proponent of a fully encrypted and secure Internet I have every desire for HPKP to succeed, but I worry that it’s too difficult and too dangerous to use, and that it won’t go anywhere unless we fix it.

What is public key pinning?

Before I go on, let’s briefly discuss why we need public key pinning in the first place. The problem is with the way we manage trust: we have hundreds of CAs and each of them is able to issue a certificate for any web site in the world. Technically, owner permission is not necessary. Now, I think this system actually works rather well, which is evident from the fact that the rate of incidents is very low. But fraudulent certificates can be created in one way or another, and that’s not good enough for high-profile web sites. Further, technical people have trouble relying on a system that’s not foolproof.

Enter public key pinning, which is a technique that enables site owners to have a say in which certificates are valid for their sites. For example, in one of the possible deployment options, you choose two or more CAs to trust; after that, any certificate issued by anyone else is ignored. What’s not to like?

Public key pinning started at Google, which first implemented it in Chrome, pinning their own web sites. Their approach is an example of static pinning; the pins are not easy to change because they’re embedded in the browser. Chrome’s pinning has served us well over the years, uncovering many cases of fraudulent certificates that might otherwise have flown under the radar. Google also allowed (and still allows) other organisations to embed their pins in Chrome. These days, Firefox also supports static pinning, drawing from the same pins maintained by Google.

While static pinning works well, it has a problem: maintaining pins is a slow, manual process that doesn’t scale. For that reason Google also sparked the IETF work that led to RFC 7469, officially known as “Public Key Pinning Extension for HTTP”, but which everybody just calls HPKP. HPKP is an example of dynamic pinning; web site owners can set the pins at will.
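
To make this concrete, here is a rough sketch of what such a response header looks like (the pin values are placeholders rather than real hashes, and the header is wrapped here for readability; max-age is in seconds, with 5184000 being 60 days):

    Public-Key-Pins: pin-sha256="BASE64-HASH-OF-CURRENT-SPKI=";
        pin-sha256="BASE64-HASH-OF-BACKUP-SPKI=";
        max-age=5184000; includeSubDomains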

What is the problem with HPKP then?

The main problem with HPKP, and with pinning in general, is that it can brick web sites. The culprit is the memory effect: pins, once set, remain valid for a period of time. Each pin is associated with a unique cryptographic identity that the web site must possess to continue operation. If you lose control of these identities, you effectively also lose your web site.

Clearly, pinning introduces a paradigm shift. Without it, TLS is quite forgiving—if you lose your keys you can always create a new set and get a fresh certificate for them. With pinning, your keys become precious.

There is some relief in the fact that a valid HPKP configuration must include at least one backup key. The idea is that, if something goes seriously wrong, you fetch your securely stored backup key and resume normal operation.
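
As a minimal sketch of that preparation (assuming OpenSSL and an RSA backup key; the file name is hypothetical), the backup pin can be computed from the offline key before any certificate even exists for it:

    # Generate a backup key and store it somewhere safe, off the server
    openssl genrsa -out backup.key 2048

    # Compute the SPKI SHA-256 hash to publish as a pin-sha256 value
    openssl rsa -in backup.key -pubout -outform DER | \
        openssl dgst -sha256 -binary | base64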

Even if you don’t lose your pinning keys, you have to be careful about how and when you change them. Your configuration must, at all times, offer at least one pin that matches the configuration you offered to all your previous users. If you rotate the keys too quickly, you risk not having a valid pin for some of your older visitors.
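
To illustrate the overlap requirement (placeholder pin values), a careful rotation first announces the new key as a backup and only later starts serving with it:

    # Step 1: serve with key A; announce key B as the backup pin
    Public-Key-Pins: pin-sha256="AAAA..."; pin-sha256="BBBB..."; max-age=5184000

    # Step 2: switch the certificate to key B; keep B pinned and publish a
    # fresh backup pin C. Visitors who stored the Step 1 pins still match on B.
    Public-Key-Pins: pin-sha256="BBBB..."; pin-sha256="CCCC..."; max-age=5184000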

To sum up, HPKP is not for the faint of heart; you essentially need to know what you’re doing and be careful about it.

Speaking of knowing what you’re doing, HPKP is also too flexible about what you can do with it. You can pin any public key in the certificate chain, choosing from your own keys (the leaf certificate), the intermediate certificates, or the root. Each choice comes with its advantages and disadvantages, but you need to understand PKI very well to appreciate them. This flexibility is a point of great confusion that in practice often leads to paralysis (“What to do?”). Some sites will inevitably make the wrong choice and suffer for it.
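
For what it’s worth, the mechanics are the same wherever you pin. Assuming you have the chain saved as individual PEM files (leaf.pem, intermediate.pem, root.pem are hypothetical names), the SPKI hash of any of them is computed the same way; the pinning decision is simply which hash you put in the header:

    # SPKI SHA-256 hash of a certificate's public key; run this against the
    # leaf, an intermediate, or the root, depending on where you chose to pin
    openssl x509 -in intermediate.pem -pubkey -noout | \
        openssl pkey -pubin -outform DER | \
        openssl dgst -sha256 -binary | base64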

Thus, a successful pinning strategy requires that you:

  1. Build a threat model to determine if there is a real threat out there that pinning can address at an acceptable cost
  2. Understand PKI and HPKP and choose the right place to pin
  3. Avoid losing your pinning keys
  4. Keep backup keys in a separate location in case of server compromise
  5. Have a robust plan for the key rotation and execute it smoothly

The above steps aren’t terribly difficult to carry out, but the stakes are pretty high. A serious mistake with pinning can lead to the business shutting down. The deployment numbers reflect this; in August 2016 Scott Helme found only 375 sites with HPKP deployed, along with 76 sites with HPKP running in report-only mode. Contrast these numbers with about 30,000 instances of HSTS in the same data set.
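
The report-only mode mentioned above deserves a note: RFC 7469 also defines a Public-Key-Pins-Report-Only header, which never blocks connections but sends violation reports, making it the low-risk way to trial a policy. A sketch, with placeholder pin values and a hypothetical reporting endpoint:

    Public-Key-Pins-Report-Only: pin-sha256="BASE64-PRIMARY=";
        pin-sha256="BASE64-BACKUP="; max-age=5184000;
        report-uri="https://example.com/hpkp-reports"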

Abusing HPKP

A potentially bigger problem with HPKP is that it can be abused by malicious actors. Let’s say, for example, that someone breaks into your server (a very common occurrence) and thus gains control of your web site. They can then silently activate HPKP and serve pinning instructions to a large chunk of your user base. It’s very unlikely that you will detect this. After a long-enough period, they remove the pinning keys from the server and brick your web site just for the fun of it. Or, if you’re lucky, they seek ransom, giving you a chance to get the backup pinning key from them and keep your business.
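
Part of what makes this attack attractive is that the hostile policy is indistinguishable from a legitimate one; the only difference is who controls the pinned keys. A sketch, with placeholder values:

    Public-Key-Pins: pin-sha256="KEY-ONLY-THE-ATTACKER-HOLDS-1=";
        pin-sha256="KEY-ONLY-THE-ATTACKER-HOLDS-2=";
        max-age=5184000; includeSubDomains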

The HPKP working group had been aware of this problem (one early example here), but didn’t include a mitigation mechanism in the standard. Early HPKP drafts had specified a ramp-up period; pinning had to be observed over a period of time to become fully operational. That feature eventually got removed, probably because it wasn’t quite clear how to effectively assess continuity. In the end, the RFC specified that one active pin and one inactive (backup) pin are sufficient to activate pinning for essentially any duration. The “Hostile Pinning” section of the RFC mentions this problem almost in passing, noting that sites should be able to recover after the maximum policy duration expires. The RFC leaves it to browsers to decide on their maximum max-age and has a very soft recommendation to cap it at 60 days.

We are not seeing attacks just yet because HPKP is relatively new, but the word is starting to get out. Just this month there was a talk at DEFCON about what the speakers called RansomPKP. If you’re interested in this topic, you should also read this follow-up from Scott Helme, who provided more details.

So why do I say that HPKP is dead?

I think that, ultimately, HPKP requires too much effort and that only a small number of sites will ever deploy it. At the same time, in its current form it can be used as a powerful weapon against everybody.

The irony of HPKP is that it’s not going to be used by many sites (because it’s too taxing), but that it can be used against the long tail of millions of small sites that are not even aware that HPKP exists. For the small number of sites that are using pinning, it’s just as likely that static pinning would work well, with less fuss and no danger for the rest of the Web.

Can HPKP be fixed?

To fix HPKP we need to 1) make it easier and less dangerous to deploy and 2) have a way to deal with potential malicious use.

For the latter, one possibility is to “dull” HPKP so that it remains useful but its really dangerous aspects are addressed. There’s a variety of ways in which this could be done. For example, we could reintroduce the ramp-up mechanism. Another solution might be to restrict pinning to those who can demonstrate a level of security knowledge and operational proficiency, for example those who are already preloading HSTS. And perhaps browsers could build an undo mechanism that could be used to override broken pins.

Making HPKP easier to use probably means allowing sites to deploy safer pinning, for example pinning to CAs, not their roots. In fact, that’s largely how static pinning is done right now. It goes like this: you choose 2-3 CAs and require that only they can issue valid certificates for your site. This approach is not as secure as pinning to the leaves (your own keys), but it vastly reduces the attack surface with much less effort. (This idea had also been discussed during the development of HPKP, but it was rejected because it wasn’t a purely technical solution and required the collaboration of many organisations.)

Technically, pinning to CAs is possible today, provided you have a very good understanding of PKI. The key issue is understanding how different root stores include different root keys, how root keys change, and so on. I have some hope that with more public information about these topics, and with help from interested CAs, we can make this safer style of pinning possible, even without any changes to HPKP.
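
As a rough sketch of what a CA-pinned policy might look like (placeholder values; working out which SPKI hashes correspond to which CA, and across which root stores, is exactly the hard part described above):

    Public-Key-Pins: pin-sha256="CA1-INTERMEDIATE-OR-ROOT-SPKI=";
        pin-sha256="CA2-INTERMEDIATE-OR-ROOT-SPKI=";
        pin-sha256="OFFLINE-BACKUP-SPKI="; max-age=5184000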

What can you do today?

Leaving all the possibilities of the future aside, let’s try to figure out what we can do today. Here are some ideas you can consider to make yourself safer:

  • First, you could have a monitoring system in place to audit your configuration and detect unwanted pinning (a simple spot check is sketched after this list). For large enterprises this could be a good idea anyway, because pinning (and other security technologies) could be deployed by an eager developer, without organisation-wide coordination. (As an example, both HPKP and HSTS have a memory effect and also support the includeSubDomains directive, which could make a configuration spread to an entire domain name, even to servers controlled by other teams.)
  • You could front your sites with a reverse proxy and make sure that the HPKP response header is never sent to your users (see the configuration sketch after this list). This defence measure won’t address all the possible attack vectors (e.g., if someone redirects the web site to other servers and abuses a misissued certificate), but it would prevent escalation from server takeover.
  • If you don’t mind your hand being forced, the pinning itself can be an effective defence against malicious pinning. All the attack vectors that include DNS hijacking and fraudulent certificates can be detected if you’re already using pinning. Sadly, an attacker who takes control over your pins (by compromising your servers) can rotate the pins to those they control. (But the previous control, the reverse proxy, helps in that case.)
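
As a minimal sketch of the first two ideas (assuming curl for the spot check and an nginx or Apache front end for the header stripping; adjust the hostnames and directives to your environment):

    # Spot check: alert if a pinning header unexpectedly appears
    curl -sI https://www.example.com/ | grep -i "public-key-pins"

    # nginx reverse proxy: never pass pinning headers from the backend
    proxy_hide_header Public-Key-Pins;
    proxy_hide_header Public-Key-Pins-Report-Only;

    # Apache (mod_headers) equivalent
    Header always unset Public-Key-Pins
    Header always unset Public-Key-Pins-Report-Only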

Thanks to Ryan Sleevi and Scott Helme for reading early drafts of this post.

Comments

  1. Hello,

    thank you for this summary. I’ve always wondered what happens when an HPKP-protected website is visited through a corporate proxy firewall which intercepts the TLS connection and delivers its own certificate. Do these firewalls reliably filter the HPKP headers?

    Best regards

    Adrian

      1. Does it mean that an HPKP-enabled website accessed via a corporate proxy will be blocked by the browser? We assume that the corporate MITM root CA is installed in the browser as trusted.
        If so, do you have an example of such a website?

          1. WRT:
            > Correction: “to a locally trusted – and manually imported – root” 🙂

            Does this mean that a locally trusted cert imposed by e.g. MSFT Group Policy would *not* interfere with HPKP?

            Also, I thought I’d read that the local trust override was only in Chrome – does anyone know details for other browsers?

            Cheers!

          2. It is not only Chrome; it is done this way in all major browsers supporting HPKP.
            However, certs imported by group policies and the like are also considered manually imported. In short, all certs that do not belong to the default set of CAs a browser ships with are considered manually imported, and all of them are of course locally trusted. (That’s the whole reason for my correction.)

          3. So when corporate proxies are involved, the proxy would have to cache the key-pin for it to provide any MITM protection between the proxy and the web server.

            Do any of them do this?

            Again, it seems like DANE would be easier: the proxy can validate the fingerprint from the server without any need to cache fingerprints, for any protocol the proxy acts as a go-between for.

          4. No, proxies don’t need to do anything. As explained, browsers are configured to allow proxy interception even with HPKP.
            Furthermore, you don’t seriously think that such SSL interception proxies will get HPKP or DNSSEC/DANE in the next decade… We may rather need to get rid of such proxies, which only support SSLv3 or similar stuff.

  2. The RFC already assumes the browser will do path validation as per RFC 5280, so isn’t the simplest solution to require browsers to unpin revoked certs? If this were the case and a certificate was wrongly pinned or the key was lost, etc., the website owner would just have to request that the certificate be re-issued and the previous certificate revoked.

    1. We could definitely use a way to revoke pins, but revoking certificates won’t work, at least not directly. That’s because, when pinning fails, it fails because the provided certificate chain (or trust path) doesn’t match any of the previously set pins. In other words, the certificate just won’t be there.

      1. I see what you mean, I think (that if there is no pinned certificate for the site in the browser, the MITM pin will be accepted). Once this MITM is discovered, the site owner can request revocation of the MITM certificate. (The MITM certificate has to have been issued by a trusted CA, after all, for the browser to accept it.)

        The other failure cases described above were loss or compromise of the key or a required re-issuance to change a certificate field, etc. These cases would be covered by revocation.

        I feel like this and Certificate Transparency etc. are all solutions trying to fix the problem of unreliable CAs (CAs mis-issuing certificates). If the CA cannot be relied upon, then to a large extent I think the system is already sunk.

        1. Perhaps browsers could remember which certificates were used for which pins. Then, on pinning failure only, they could check those same certificates for revocation. Technically, it’s doable, but it complicates the threat model and perhaps can be abused in some way. There is essentially no performance penalty involved, which is always a positive.

  3. I’ve been saying this for some time now, often met with ridicule. I used to deploy HPKP on my own servers because it was the only key pinning method browsers actually supported, but I stopped even doing that a few months back.

    The fundamental problem with HPKP is that it is Trust On First Use – and even worse, no user interaction, so it is Blind Trust On First Use.

    A much better method is DANE – a fingerprint in a DNS TLSA record that is validated via DNSSEC.

    DANE is Validate On Every Use, trust is taken out of the equation.

    DANE also is generic in nature and works with a whole host of protocols. Hell I even use it with ircd even though I am not aware of any IRC clients that actually support it yet.

    DANE also allows quick recovery from a compromised/revoked key – you can generate a fresh private key, get a fresh cert, and update your DNS; at most, clients will reject the cert for 2x the TTL (the TTL on a DNS record is usually fairly short).

    Thank you for posting this. HPKP is the wrong solution and always has been a bad solution.

    DNSSEC with a TLSA record is the technically right way to do things.

    The browser makers need to stop being so resistant to supporting it out of the box, that’s a tough fight, but we need to keep pressuring them to support it.

  4. I’d like to write a longer answer here. I think you’re raising some important points that need to be discussed, yet I still have a rather positive view about HPKP.

    First of all I’d like to explain why I welcomed the introduction of HPKP: One of the major discussion points when I talk to people about TLS and HTTPS is the lack of trust in the CA system. The constant stream of news about CAs failing with basic security (lately wosign, comodo + startencrypt) causes a very legitimate distrust in this system. Some people go to the extreme of rejecting TLS because of that (which I don’t think makes any sense, but I heard those voices).

    HPKP is a major counterargument here, because it provides a protection mechanism that in many situations can prevent attacks involving badly behaving or compromised CAs. I see HPKP together with Certificate Transparency as a major building block of a more resilient CA ecosystem. So I’d rather not like to see it go away or get weakened. And I fear that many ideas to counter problems with HPKP might just do that.

    I think we have two issues with HPKP that we need to discuss separately: One is that it’s hard to deploy, has a big footgun, and thus the deployment rate is low. I came to the conclusion that it’s entirely okay if we strive for a very small deployment of HPKP (and I thought differently about that in the past). We should communicate the message that HPKP is only for high-risk sites. When people ask me for advice about HPKP, the main thing I tell them these days is: “If you’re not entirely sure about all the details and risks of HPKP, *just don’t use it*.”

    This separates HPKP from many other TLS-related features. E. g. I think I’d easily recommend HSTS to just about everyone running a webpage, because the gain is much higher and the risk much smaller.

    Now there’s the second issue, and that’s the abuse of HPKP. I was in the Def Con talk mentioned above and I wasn’t very happy with it; there were some situations where I got the impression the speakers lacked understanding of the topic they were talking about, and I’m pretty sure some of the proposals they made (e.g. using HPKP for filtering/censoring) won’t work the way they proposed them. But the RansomPKP scenario is certainly a legit idea that needs to be considered.

    However, some things I want to point out: All the scenarios for bad use of HPKP either can be done already with other methods (e.g. the supercookie) or are not attacks by themselves, but potentially more severe consequences of attacks (RansomPKP). The RansomPKP scenario requires your server to get hacked.

    So I do see that RansomPKP and other scenarios are a legit threat, but we haven’t seen such attacks in the wild. As long as it is a small issue or an entirely theoretical issue, I wouldn’t be too worried about it.

    Finally, some comments about potential remedies against HPKP misuse: One of the nice things about HPKP, in my opinion, is that it’s a very strong security mechanism, yet in theory it’s available to everyone who wants to use it. Many of the potential ideas floating around could change that. E.g. static pinning makes Google (or other browser vendors) the gatekeeper of who is allowed to pin. Requiring HSTS preloading raises the question of how long the preload list will scale and whether at some point people will be rejected from it if they run small sites. If we do the “if revoked cert then revoke pin” variant proposed above, we give back to the CAs the power that we wanted to take away with HPKP. If at all possible, I’d really like to avoid going down any of those paths.

    Summary:
    * I think HPKP is a good technology, but one that should be used only by a few people. And we as people who are perceived as TLS experts should spread that message.
    * Abuse of HPKP is a problem, but it may not be a big one and right now it’s a theoretical one.
    * While I think HPKP is only for a few, I don’t want to create huge barriers for people who want to deploy it.

    1. It still is fundamentally a Blind Trust On First Use System.

      The same problem that it solves is solved in a technically superior way by DANE and DANE does not limit itself to a single protocol but can be used for anything that involves x509 certificates.

      DANE is the only key pinning solution for example that works to fix the broken “opportunistic TLS” that SMTP uses, and it is very effective when used with SMTP.

      HPKP is an example of Google pushing a technology before it was properly vetted by the security community. Trust On First Use is always problematic when there is no validation process involved with that trust.

      Seriously, what advantage does HPKP have over DANE? What can it do that DANE can’t do? Absolutely nothing, so it is more logical to adopt the key pinning solution that isn’t TOFU but constantly validates. It is more logical to adopt the key pinning system that is not limited just to HTTPS.

  5. From my stats looking at the Alexa top 1K websites from 2015-07-26 to 2016-07-31, the number of websites NOT using HPKP increased from 99.47% to 99.58%, which means almost nothing.
    If you look at the same numbers for HSTS, the number of websites NOT using it decreased from 95.59% to 87.73%, which means around 80 websites adopted HSTS in the last year.
    You can see all the results here: https://www.owasp.org/index.php/OWASP_Secure_Headers_Project#tab=Stats

  6. Oh one quick note.

    For those who do use HPKP: if you use an EC certificate, make sure one of your backup key-pins is for a 2048-bit RSA cert.

    I almost had that problem when I was using HPKP – several of my sites used secp521r1, and Chrome, with very little warning (none that I saw), suddenly dropped support for secp521r1.

    Fortunately I was still experimenting with ECDSA and was running those sites with RSA certs too, but if Google (or another browser) ever decides to drop a particular ECDSA curve again and it happens to be a curve you use, you will need a cert for a key-pin that the browsers already have cached.

    So if you use HPKP take the safe route and make sure one of your key-pins is for an RSA cert.

  7. Thanks for raising this, Ivan, I always like to see ideas challenged. We’re currently reviewing a load of associated tech (HPKP, HSTS, CSP, CORS, SRI…) and formulating our best practices on when/when not to use them and how.

    I just wanted to make a couple of points in case they’re of interest to anyone reading, I guess primarily from a larger-organisation perspective.

    WRT the above, we’re (the BBC) currently unable to use HSTS in most circumstances because, since it applies to an entire domain/hostname, it’s too coarse. Our structure uses layer 7 (path-based) routing to the many/various product team websites (e.g. /news/* routes to the news team websites etc.). Since not all our teams are yet on HTTPS (some for workload reasons, e.g. the Olympics and Euros, some for 3rd-party and some for legacy/other reasons), we can’t use HSTS. So this rules out some of the above. I’d love to see HSTS become finer-grained to help with this; we’re definitely not the only org doing L7 routing to decentralised teams.

    What I feel HPKP brings to the table from our perspective, primarily a serious, global news perspective, is authenticity. HTTPS (with decent config of course) provides generally reasonable levels of user privacy, but we need people to be able to be confident that they are reading content which is produced by us and not interfered with en route. Short of client cert auth or something along those lines, which is logistically impractical for the public, HPKP is all we have right now – unless I’m missing something. I know that corporate proxies and potentially nation-states can interfere with or block the ToFU nature of HPKP, but this is a start at least.

    My plan for HPKP is likely to be pinning the CA intermediate certs, as that’s a decent balance between the brittleness of the leaf and the permissiveness of the root, with backup pins of various types/sizes (e.g. 2048 & 4096-bit RSA, plus 256 and another EC). We’re also planning on configuring a simple report-uri endpoint (myself and a colleague have arrived at similar, AWS Lambda-based architectures) and regular monitoring of the HPKP config via some tooling I have started on (https://github.com/bbc/hpkp-tools, early days but you’ll see the idea).

    Anyway, apologies for the rambling, just sharing a few off-the-cuff thoughts, and I’ll definitely be giving our current plans some more thought in the light of your concerns.

    Cheers!

  8. With DANE the proxy could even do it Postfix-style – depend upon the recursive DNS resolver actually enforcing the DNSSEC part. All the proxy would have to do is calculate the fingerprint from the cert and make sure it matches one of the TLSA records.

    1. > Is Public Key Pinning different from OCSP must staple?

      Yes, it’s something completely different. Read a bit more about these two things and you’ll see that they are two fundamentally different things.

  9. The funny thing is: HPKP is vulnerable to the very thing it tries to protect us from: a fraudulent certificate. If (and I don’t believe this is possible) CAs would never issue certificates to anybody other than the rightful owner of a domain, then we would not need HPKP. But since this may happen, HPKP is actually a way to exploit this.

    1. Well, it is TOFU (trust on first use). That’s it.
      Of course one can intercept the initial connection, but what HPKP does is protect all further connections and make it harder for an attacker to succeed. Because when they do so, they have to intercept *all* connections for the whole time, without interruption. And this is far more difficult than just using a fraudulent cert to intercept one specific connection a single time.

  10. Yeah, HPKP has some nasty flaws. I think the proper way to authorise certs is via DNS DANE records. That is a much more complex setup; however, its extra complexity makes it less likely that people without the know-how will try to work on it, and unlike HPKP it is harder for attackers to exploit and less dangerous if a mistake is made. An HPKP record only needs .htaccess access, so a compromised web account is all that is needed.

    HSTS with long expiries also seems pointless and just adds risk, because when an HSTS header expires it will simply be renewed on the next visit. So I think an HSTS header only needs to last long enough to notice something is up; a year is way too long.

    To mitigate the risk, one should set these headers in the vhost files, which typically require root access to control, and disallow .htaccess from setting these headers. It reduces convenience, but the extra barrier can allow sysadmins to warn web developers about what they are doing.

    I currently have 3 keys pinned on a 30-day expiry without preload. I don’t pin any CA or root certs; they are all leaf cert pins.

    An alternative to HSTS is using rewrites or CSP headers, both of which are less risky if a mistake is made.

  11. It strikes me that there is a very simple solution to the problem of sites not wanting the risk and complexity of implementing HPKP, but also not wanting the exposure to malicious pinning that not setting HPKP brings: allowing a “null” or wildcard value with an expiry. For example, saying:

    Public-Key-Pins: *; max-age=259200

    Would mean that you explicitly have no pinned certs, and that the browser should cache that value just as it does for pinned certs. This doesn’t represent any significant risk because it’s much the same as not having HPKP at all, but it prevents malicious pinning for existing users. A malicious actor could pin this value too, but it would not gain them very much and would cause no harm in itself – certainly a lot less damaging than locking everyone out. Do you see anything wrong with this approach?

  12. Your comments are spot on, Ivan. Pinning provides value but the risk is huge. I found no resistance from developers to implementing it in Public-Key-Pins-Report-Only mode perpetually and have been recommending people go this route ever since. Best case scenario, the report provides valuable information to start an investigation. Worst case scenario, you get a false positive.

  13. The issue is that DANE is complex to deploy and has no traction, and HPKP is simple to deploy and works in >50% of browsers used by our users.

    I deployed HPKP, it was easy, it was quick, and it works, despite having to get it past change boards and document policies on key management, and archive 3 keys etc.

    If I had to do the same with their DNS, and get that under DNSSEC, it probably wouldn’t have happened, and I would have protected <1% of the user base if I'd succeeded.

    Sure, there is a theoretical risk of ransom for people who haven’t deployed it yet and who get their DNS hijacked and a CA certificate issued, or whose web servers are hijacked while they don’t back up the key for a period. There are removal processes; they aren’t easy, but that’s fine. Note that if their DNS is hijacked then DANE may also be hijacked for a similar DoS.

    Your enthusiasm is admirable, and I hope DANE succeeds, but meanwhile we will have to use what works, and HPKP works.

    1. DANE is not all that complex to deploy. It’s a single DNS record. If you mean that the DNSSEC it requires is hard to deploy, that’s only the case when you are unfamiliar with the concepts, but it is the job of a system administrator to learn concepts, and there are also plenty of DNS services that offer DNSSEC for you. Namecheap does, for example.

      HPKP has several flaws.

      A) No protection against trojan root certificates installed on the local computer. At least not with Chrome. So when Dell or Lenovo add their self-signed certificates to a PC and the easy to find private key is then used to sign certificates, Chrome doesn’t care that it doesn’t match the keypin.

      B) Since it is blind trust on first use, you have to cache relatively long (typically 60 days) to reduce the window when that blind trust can be exploited. When an employee that had access to the private keys leaves, it is best practice to generate fresh private keys. With HPKP you have to add the new key-pins but then wait the cache period before actually deploying them or browsers will reject the fresh keys. Not very secure.

      C) You better hope that your backup key isn’t compromised if your deployed key is compromised because then you are screwed.

      HPKP may seem easy to deploy, but it is highly flawed.

        1. I seriously don’t want to join this HPKP vs DANE debate, but I have to mention that argument A is really ridiculous. Do you know that DNSSEC/DANE offers no protection at all (it does not matter whether there is a Dell cert or not) in any of the browsers currently available? It’s just not checked by anything/anyone.

        1. There are extensions to check it in the browser, and some mail servers now check it – with DANE being the only RFC compliant mechanism to prevent protocol downgrade attacks.

          Also, DNSSEC is enabled by default on Unbound, and many (a minority, but still many) corporate nameservers enforce DNSSEC, protecting the network from DNS cache poisoning for domains that sign their records. Comcast is an ISP that enforces it, and Google’s public DNS servers enforce it. So do the caching nameservers in many hosting facilities (e.g. Linode), protecting the servers from fraudulent DNS records when they are communicating with another host on a zone that uses DNSSEC.

          The caching DNS server doesn’t protect you against a bad certificate even if there is a TLSA record, but it does protect you from fraudulent DNS results even if your software itself doesn’t validate.

          So to say that it isn’t checked by anything/anyone is just plain wrong. A lot of people get DNSSEC protection without even knowing it. It is browsers that currently are not enforcing by default, and part of that resistance to browsers embracing DANE is because Google pushed HPKP in Chrome before there was any real discussion on whether or not it was the right solution to the problem.

            1. By “protocol downgrade attacks” I mean downgrade to plain text. SMTP requires you to connect via plain text if TLS cannot be established, making it incredibly easy for a MITM to downgrade the communication between mail servers. But if the server you are talking to uses DANE, then the connection is properly refused if it isn’t TLS with a matching certificate – preventing SMTP downgrade to plain text. HPKP can’t do that because it only addresses one protocol. DANE addresses every protocol.

  14. Well, I just ran into a website today that failed the key pinning test with a MOZILLA_PKIX_ERROR_KEY_PINNING_FAILURE message. Specifically, pixabay.com no longer works in Firefox for me – it is (was?) my go to for free royalty-free, no-attribution photos. To further insult all users, including those aware of HPKP, Mozilla intentionally went out of its way to disallow removing pins inside the browser. This is stupid. From what I can tell, HPKP is broken by design. That means that everyone should stop using it and browser vendors should rip out support from their browsers. And we should find another solution to the problem. I’m down for TLSA records in DNS.

  15. What I would almost suggest would be a client-side application to generate and upload keys, along with HPKP policies. The HPKP policies would pin the uploaded key plus two standby keys to ensure that (1) the server is providing HPKP, and (2) the server key can be discarded and replaced with one of the standby keys during renewal, along with a new HPKP policy containing a freshly-generated standby key pin. The CSR generation (and, if using an ACME service like Let’s Encrypt, certificate issuance) could still be performed server-side, while the keys still considered standby would be stored on the client until they are deployed.

  16. Standards like DANE and DNSSEC require us to trust the registrars (e.g. firms like GoDaddy), the TLDs (e.g. VeriSign), and the root (e.g. ICANN). By adopting DANE and/or DNSSEC, we would be merely shifting our trust from one set of entities to another. I don’t see why we should trust these entities any more than we trust the CAs. See Moxie Marlinspike’s blog post for a great write-up on this subject: https://moxie.org/blog/ssl-and-the-future-of-authenticity/

    1. We already have to trust the registrar. The registrar can change the authoritative DNS server for a zone, which in turn can be used to obtain fraudulent X.509 certificates. It’s not shifting any trust; that trust is already required.

      One possible way to beef things up: use the private key for the DNSSEC KSK to create a CSR that a Certificate Authority signs.

      That way a bad actor would not only have to get a fraudulent DS record into the TLD zone but also get a fraudulently signed X.509 certificate. That certificate could then be sent as part of the TLS handshake so that the client can verify the DS records match the X.509 certificate that has the public key associated with the KSK.

      I really do not think it is necessary, but for applications like banks where EV level of certificate confidence is desired, it may be worth doing.

      1. Just to add that, with the current system, we don’t necessarily have to fully trust the DNS. Using public key pinning, web site operators can establish and enforce their own cryptographic identities.

        1. X.509 certs are issued to domain names, and so yes, you do have to trust DNS with the current system.

          If you don’t trust DNS then HPKP is useless because you don’t know the keypin is being sent by the owner of the domain name. The browser has to trust that it isn’t being lied to by the DNS system when it stores the keypin.

          DNSSEC gives a mechanism by which the browser can potentially validate that trust, but if deploying DNSSEC then you may as well use DANE to validate the certificate. If not deploying DNSSEC then HPKP is the only option, but it’s an option that quite literally requires trust in DNS without validation.

          1. > If you don’t trust DNS then HPKP is useless because you don’t know the keypin is being sent by the owner of the domain name.

            No, the thing is the max-age. The domain owner pinned the correct key in the past, and if a DNS redirection now occurs, the new server cannot deliver the legitimate pin, as it has not got the certificate from the legitimate owner.

  17. The future of TLS authentication may lie not with CA’s, pinning, DNS-based solutions, or the like. If the user already has a relationship (i.e. an account) with the site (which is the case in most phishing and MITM attacks, where the goal of the attack is to steal the user’s password), and the user and site share a shared-secret (e.g. a password, or even a hashed password) – then it’s possible for the user and the site to mutually authenticate one another, and rule out the presence of an (active or passive) MITM using standards such as password-authenticated key exchange (PAKE). Another advantage of PAKE is that the two parties can cryptographically determine whether they both have knowledge of the shared secret, without either party transmitting the secret to the other. Also, this solution can be implemented in the application layer, without any modifications to the web browser, the existing TLS standards, or the web server, using just a browser plug-in and a small amount of server side (e.g. PHP) scripting; and it’s fully compatible with the existing PKI standards. See https://eprint.iacr.org/2014/731.pdf for more info.

    1. CAA does nothing for the client. It only reduces the odds of fraudulent certificates being issued, but it does not eliminate that threat, and when one is issued, the client will still accept the certificate.

      Without DNSSEC, CAA can be fooled.

      With DNSSEC, DANE provides actual security for the client.

      That’s why I’m not personally bothering with CAA.