[Servercert-wg] Transitive Trust and DCV (was Re: Ballot SC-080 V1)

Tim Hollebeek tim.hollebeek at digicert.com
Tue Sep 24 15:48:42 UTC 2024


This analysis is just fundamentally wrong, as the premise that “the 3.2.2.4 validation methods are only valid if they prove control of the domain name” is wrong.

 

The actual design criterion that the 3.2.2.4 methods were based on is “OWNS or controls”. Historically, the ‘control’ methods were actually thought to be weaker, as they did not prove domain ownership and only demonstrated instantaneous control, which was subject to hijacking/takeover attacks.

 

The full situation is actually much more complicated than that overly simplistic analysis, which is why we have problems. There are advantages and disadvantages to each method. But before commenting on the current situation, it’s best to understand the history and how we got there, and many people offering opinions about validation seem to have skipped that step.

 

I’d suggest people review the notes from the validation summit we had about five years ago as a really good resource for learning some of the complexity in this area. Perhaps it would be good to repeat the exercise, as the situation has evolved significantly since the last time we did a comprehensive validation method review.

 

-Tim

 

From: Servercert-wg <servercert-wg-bounces at cabforum.org> On Behalf Of Aaron Gable via Servercert-wg
Sent: Monday, September 23, 2024 6:03 PM
To: Dimitris Zacharopoulos (HARICA) <dzacharo at harica.gr>; CA/B Forum Server Certificate WG Public Discussion List <servercert-wg at cabforum.org>
Subject: Re: [Servercert-wg] Transitive Trust and DCV (was Re: Ballot SC-080 V1)

 

While I agree with Andrew's points about how the ACME validation methods can be MITM'd, I disagree with the conclusion that this makes them equivalent to WHOIS-based methods.

 

When an adversary MITMs one of the ACME validation methods, they are taking control of the domain in question. By manipulating the structure of the internet (usually via BGP hijacking) they are exerting control over the content seen at that domain name by a whole section of the internet -- a section that just so happens to include the CA performing validation. However briefly, they did in fact demonstrate control over that domain.

 

When an adversary MITMs a CA's WHOIS query, the domain being validated remains untouched. The adversary has not shown any control over the domain in question, only over the CA, or over a largely-unrelated third party.

 

This is the fundamental weakness of all secret-token based methods. All that is required is that the adversary take control of something other than the domain in question so that they can get the secret delivered to them. If I had my personal druthers, I would limit domain control validation to only methods where the token being public knowledge is not considered a weakness of the validation protocol.

 

Aaron

 

On Fri, Sep 20, 2024 at 10:47 AM Dimitris Zacharopoulos (HARICA) via Servercert-wg <servercert-wg at cabforum.org <mailto:servercert-wg at cabforum.org> > wrote:

Hi Mike,

On 19/9/2024 5:22 p.m., Mike Shaver via Servercert-wg wrote:

Hi Dimitris,

 

I've been thinking about your email all night, and I want to figure out where our reasoning about the integrity of DCV via WHOIS-3912 or HTTP-nonce (3.2.2.5.1) diverges. I was quite surprised by your statements about the properties of internet security, the responsibility of TLS and SCWG, and the limitations that should be accepted in performing DCV. I'm hoping that your new responsibilities as Chair (congratulations!) will still leave you a bit of time to point out where my reasoning doesn't connect. :)


Thank you for that :) New officers will start their duties on 2024-12-01, but I believe Members' contributions to the Forum are a lot more important when we're having these discussions, working on ballots and continuously improving the standards! Officers mainly have to do the "administrative" work and make sure the WG Charter and Bylaws are followed.




 

Because I'm not sure where we diverge, I'm going to walk through my understanding of the principles underlying the DCV methods used historically by the CA/BF (and by CAs in the dark days of private deals with individual browsers), and the direction in which the SCWG and root programs seem to be headed in terms of those principles. Please, *please* do not take this as me implying that you don't know these things: I'm not trying to lecture, but to be as explicit as reasonable in explaining my thinking, so that it's easiest for yourself or others to help me correct any errors in it. I would be quite grateful for such generosity, honestly! I also apologize in advance for the length of this message. I lack the wisdom to make it smaller, perhaps.

 

(Some of this is entangled with "what is this even for?" sorts of discussion about who the web PKI should ultimately benefit, but I've tried to keep that separate.)

 

On Wed, Sep 18, 2024 at 12:18 PM Dimitris Zacharopoulos (HARICA) <dzacharo at harica.gr <mailto:dzacharo at harica.gr> > wrote:

We don't need Domain Name Registrars to go through WebTrust or ETSI audits suitable for Trust Service Providers. These Registrars are the source of truth for the DNS, on which all Internet connections and the WebPKI rely. It's so fundamental to the ecosystem that IMO it doesn't make sense to ask ourselves how this Forum can make them better. Other authorities should be working on that.

 

A recurring theme throughout web PKI operations has been that a participant in the ecosystem thinks that they are doing a good job, and perhaps indeed are doing a good job of some kind, but are not providing the guarantees or protections that others assume are in place. I don't mean "good job" in the sense of competence or ethics, but rather "meeting the requirements through actions". This gap widens the farther the participant is from the core of the web PKI ecosystem, which is to say, the less likely they are to evaluate all their actions by the effects of those actions on the trustworthiness of web certificates and server authentication. This means, IMO, that this group and other stewards of the PKI should be very conservative, and very explicit, when depending on another participant of the ecosystem to maintain certain properties. Otherwise, those dependency-bearing organizations may not even know what is being expected of them.

 

I am not suggesting that the CA/BF in any way place requirements on what an organization must do in order to be a domain registrar and publish that registry's data. As you say, that is beyond both the scope and the expertise of this group, and this group has plenty to keep it busy! I am, rather, suggesting that the SCWG should establish criteria against which it will determine whether a registrar's publication of domain information should be considered reliable enough to accomplish DCV. Similarly, the group does not put any requirements forcing public DNS servers to strictly check DNSSEC, but if a CA is going to use a DNS server for checking CAA records or doing 3.2.2.4.7 "DNS Change" validation, then that's only acceptable if the DNS server enforces DNSSEC in the presence of a validation chain. (I think that might only be in Bugzilla and not yet captured in the BRs, but it's well-enough understood that failure to adhere to it has required reissuance.)
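
To make that concrete, here is a minimal sketch (using the dnspython library; the resolver address and the warn-only behavior are illustrative assumptions, not requirements from the BRs) of a CAA lookup that checks whether the answer was DNSSEC-authenticated by the validating resolver:

import dns.flags
import dns.resolver

def fetch_caa(domain: str) -> list[str]:
    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["9.9.9.9"]        # assumed validating resolver
    resolver.use_edns(0, dns.flags.DO, 1232)  # ask for DNSSEC records
    try:
        answer = resolver.resolve(domain, "CAA")
    except dns.resolver.NoAnswer:
        return []                             # no CAA records published
    # The AD (Authenticated Data) flag means the validating resolver verified
    # the DNSSEC chain; without it, this is plain unauthenticated DNS.
    if not (answer.response.flags & dns.flags.AD):
        print(f"warning: CAA answer for {domain} was not DNSSEC-authenticated")
    return [rr.to_text() for rr in answer]

print(fetch_caa("example.com"))

Note that trusting the AD bit from a remote resolver only moves the authentication problem to the path between the CA and that resolver, which is part of why enforcing validation "in the presence of a validation chain" matters.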


I see your point. In general, as you say, it is difficult for a group like the CA/B Forum to enforce rules and expectations placed upon another industry group (Domain Name Registrars/TLD Operators) if those other industry groups do not participate or don't even have knowledge of those expectations. I would be very concerned if the SCWG, or the Forum at large, were to create a set of "expectations a Registrar/TLD Operator must meet to be considered good enough for the WebPKI to rely upon". I think this group would be heavily criticized for doing that.

If we believe this area is so security-critical for the WebPKI, the best thing we could do is have members of this Forum engage with IANA or other ccTLD-coordinating venues to promote ideas for improved security, external monitoring and transparency. It feels similar to what Members of this Forum have done in the past when we bring up ideas for improving the security of our ecosystem and take those ideas to the IETF (LAMPS WG or other similar venues) to standardize.

I'm not opposed to adding requirements that prohibit the use of external information in egregious failure cases, but it's not easy to find the right language in a standard for this.




 

If a ".cookoo" TLD operator is not functioning properly, then the entire TLD is in jeopardy and every Domain owner under that TLD is at risk. Certificates are the least of people's problems when relying parties connect to websites operated under that unsafe TLD operator.

 

As above, this depends very much on exactly what form "not functioning properly" takes. If their systems are broken such that it's impossible to make changes to registrations, or to look up registrations, I think we would all agree that the TLD operator is not functioning properly, but that would not pose as much risk to the integrity of the web PKI as those systems being broken such that *anyone* can change any registration under that TLD.

 

For CAs to rely on email address information from registrars for issuance of certificates, I think it's only appropriate that they ensure that the information is "fit for purpose" and that it's managed (and accessed) in a way consistent with the level of trust placed in it.

 

This means making sure that the integrity of the response is maintained, which is a role that WHOIS-3912 simply cannot perform.

 

No, the SCWG and TLS are not here to solve the unencrypted nature of the DNS protocol. The IETF and DNSSEC are. There is a great number of Domain Names in the DNS without DNSSEC, and there is still heavy reliance on the unencrypted DNS protocol in almost EVERY Domain Validation under 3.2.2.4.

 

DNS being unencrypted is not a problem, and indeed even DNSSEC doesn't encrypt traffic. DNS results being *unauthenticated* is a problem for clients who wish to be certain that they have been given an authorized address in response to their lookup, but even with DNSSEC that leaves the issue of ensuring that the unauthenticated IP layer reached the "real" home of that address (thus DANE).

 

Even the Agreed-Upon Change to Website method, 3.2.2.4.18, relies on Authorized Ports that are offered via non-encrypted channels (ports 80, 25, 22).

 

Again, encryption during validation is not necessary for there to be a reliable chain of trust from the browser to the site certificate, via CA-operated root certificates. We need authentication, but only authentication, of every step of the delegation of trust.

 

I would go as far as to say that even the ACME methods connecting to https URLs are untrusted, because the endpoints are not protected by publicly trusted certificates and anyone could launch a MiTM attack.

 

This is a part that really got stuck in my head. Are you saying that TLS-ALPN-01 is vulnerable to a MITM attack during validation? That would be a pretty shocking situation, in my opinion!

 

How would the attacker get access to the key material needed to complete the challenge? It never leaves the subscriber machine from what I can tell. Similarly, the ACME account key is used in HTTP-01 to render MITM attacks ineffective.
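
For reference, the binding in question is presumably the RFC 8555 "key authorization": the HTTP-01 response is the challenge token concatenated (with a '.') with the base64url-encoded RFC 7638 thumbprint of the account's public key, so a party that only observes the token cannot satisfy the challenge under its own account. A minimal sketch of that computation follows; the JWK and token values are illustrative examples taken from the RFCs, not from any real account:

import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).decode().rstrip("=")

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: serialize the required JWK members, sorted, with no whitespace,
    # then SHA-256 the result. For an EC key those members are crv/kty/x/y.
    required = ("crv", "kty", "x", "y")
    canonical = json.dumps({k: jwk[k] for k in sorted(required)},
                           separators=(",", ":"))
    return b64url(hashlib.sha256(canonical.encode()).digest())

def key_authorization(token: str, account_jwk: dict) -> str:
    return f"{token}.{jwk_thumbprint(account_jwk)}"

account_jwk = {"kty": "EC", "crv": "P-256",
               "x": "MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4",
               "y": "4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM"}
print(key_authorization("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA", account_jwk))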

 

This is how the trust in that connection is bootstrapped based on the trust in the connection between subscriber and CA when the certificate is requested. Presumably publicly-trusted certificates are used when the subscriber connects to the CA's server to make that request and obtain the account key!

 

This transitivity of trust goes to the issue with WHOIS. Even if the registrar is maximally diligent in their key ceremonies and internal processes and generating certificates correctly, WHOIS-3912 caps the trustworthiness of the entire operation at that of unauthenticated TCP. In my opinion, that is not appropriate for the web PKI, and is a relic of a time before this community took technical registration-time attacks as seriously as we do now.


I consider Andrew's response <https://lists.cabforum.org/pipermail/servercert-wg/2024-September/004883.html> a lot clearer than mine. I believe it answers your questions and shows how to MiTM the ACME TLS-ALPN-01 challenge.





 

In order for the SCWG measures to be proportionate, we should not blame the entire WHOIS protocol but work on additional controls to minimize the risk of CAs using those problematic WHOIS libraries.

Could you describe how a non-problematic WHOIS-3912 library could provide assurance of the validity of the data returned, equivalent to the integrity of the results from HTTP-01/DNS-01/ALPN-01?


Assuming no MiTM, it's no different than any of those methods. The CA "trusts" that the information is coming from an authoritative source or from a source that demonstrated control of the Authorization Domain Name. In the HTTP-01 case, it retrieves a random value or request token from a designated URL under the FQDN to be included in the Certificate. In the WHOIS case, it retrieves the email address of a tech or admin contact associated with the registration of that Base Domain Name.

The .mobi issue, in my understanding, is that some CAs were (still are?) using WHOIS libraries that were not looking up the currently authoritative WHOIS server for that TLD. If the SCWG could identify those insecure libraries, or if criteria were added requiring libraries that check for the currently authoritative source for each TLD, it would be a good emergency mitigation.
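
As a rough illustration of what "checking for the currently authoritative source" looks like in practice, here is a minimal RFC 3912 sketch that asks whois.iana.org for the TLD's current "refer:" server instead of relying on a hard-coded server list (real lookups may need further referral hops, and error handling is omitted). It also makes the other point in this thread visible: the whole exchange is plain, unauthenticated TCP on port 43.

import socket

def whois_query(server: str, query: str) -> str:
    # RFC 3912: open TCP/43, send the query followed by CRLF, read to EOF.
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def whois_domain(domain: str) -> str:
    tld = domain.rsplit(".", 1)[-1]
    # Ask IANA which server is currently authoritative for this TLD,
    # rather than trusting a stale, hard-coded mapping.
    iana_answer = whois_query("whois.iana.org", tld)
    referral = next((line.split(":", 1)[1].strip()
                     for line in iana_answer.splitlines()
                     if line.lower().startswith("refer:")), None)
    if referral is None:
        raise RuntimeError(f"no WHOIS referral published for .{tld}")
    return whois_query(referral, domain)

print(whois_domain("example.mobi"))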

We had a similar discussion when discussing the linting tools. At some point we need to add rules for continuous updates, taking into account proper testing and change management procedures.




Instead, we could focus on requiring immediate/emergency measures for CAs to use the WHOIS protocol securely

 

This is definitely one place that our reasoning diverges: I don't see a way to use WHOIS-3912 securely, where by "securely" I mean "in such a way as to not weaken the DCV guarantees that the web PKI wishes to make".

 

I disagree. We are trying to move from "http" to a "trustworthy https" (note: "trustworthy https" in the sense that it doesn't use untrusted certificates). At some point, CAs need to rely on unencrypted communication to achieve that.

 

Why do CAs need to rely on unauthenticated communication to achieve that? If we were establishing the very first root, someone would have to carry the key physically to the browser developer as a form of IRL authentication, but we have sufficient systems in place now to inductively create a completely authenticated chain of validation if we should want to.

 

Where is that chain impossible to authenticate, in your opinion?


Andrew's answer covers this very clearly IMO. This is just how it works. There is no single source of truth or "one key to rule them all" that everyone trusts. The closest we have is the DNS.




 

I'm confused by this statement. Is this a plea to the CAs to stop using what you think is an insecure method?

 

Yes, it is exactly that. I don't know how anyone can seriously claim that WHOIS-3912 is a secure method, regardless of how high quality the data on the other side is.


Then the second sentence of my snipped quoted statement applies:

"Everyone is entitled to an opinion but that's why we are having these discussions publicly so that the SCWG members can find "substantial consensus" as mandated in the Bylaws. I'm sure some CAs are already working, or have already stopped using WHOIS, proactively, until this discussion comes to an end."




Domain Registries for validation of domain contacts: domain registry information should, IMO, only be used at *all*, independent of protocol, if the SCWG can be confident that IANA or another trusted body will be able to ensure that all those registries, for all domains present and future, will meet the SCWG's requirements for reliability.

This is like saying that the Registrars, the main stewards assigned to run the DNS, which is fundamental to how the Internet and, in practice, the World Wide Web work, need to meet the SCWG requirements for reliability.

 

As above, domain registrars do not need to meet any SCWG requirements. CAs need to meet the SCWG requirements, and I think that those requirements should include minimum properties that must hold of a registrar's data management practices, and the means of accessing it, if such data is to be used as authoritative proof of domain control.

 

An example: do registrars inform registrants that anyone who can receive email at the listed contact address can be issued a certificate for their domain? I think that the significance of that address as used for DCV is not well understood by most registrants, and that many will send that email to ticket/CS lists composed of people who are *not* authorized to make changes to DNS, and are not assumed to have the same authority for requesting certificates. Similarly, I don't think that registrants understand that someone compromising a domain-privacy provider not only gets their email and contact information, but could plausibly also silently have certificates issued against their domain.

 

If these participants in the ecosystem (registrar and registrants) don't regard the email contact field as having such immense security significance, then it is unlikely that they will manage it with appropriate care. Given that, as you correctly point out, the SCWG is in no position to put requirements on registrars or registrants, all that we can do is say that CAs must not use registrar data unless it is managed to an appropriate standard, and accessed in an appropriate way. That is *well* within the mandate of the SCWG, and consistent with all of its other activities in defining validation standards.


This is a very tough sell because there is no end to this line of argument. Should CAs also monitor the behavior of the Internet Service Providers responsible for Internet connectivity? Should they monitor Telecommunication Providers for how they run the GSM/SMS networks used by CAs to contact Applicants? Should they monitor the Postal Services when sending letters to Applicants?

The publicly-trusted ecosystem (the WebPKI, in the SCWG's case) needs to rely on some fundamental services offered by entities outside the scope of the CA/Browser Forum, services that are governed by their own policies and practices. The CABF cannot police or directly intervene in these fundamental services at a global level.




 

I believe the WHOIS deprecation could follow a similar pattern but for sure the SCWG should urgently focus on requiring CAs to use WHOIS libraries that query the proper Registrar endpoints, IF they are using the WHOIS method to query Domain Contact information

 

If CAs are going to continue to use WHOIS-3912, I think that the SCWG should require that the traffic be carried over an authenticated TLS channel, or that the response be signed. Anything less doesn't address the fundamental insecurity of the *access protocol*, whatever the truth of the data returned in the request. Do you feel that unauthenticated requests over the public internet really have a place in DCV?
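
For what a TLS-authenticated lookup of the same registration data could look like, here is a hedged sketch using RDAP (RFC 7480/9083), which is served over HTTPS and bootstrapped from IANA's registry; the endpoint selection and field handling are illustrative only, not a description of any CA's pipeline:

import requests

IANA_BOOTSTRAP = "https://data.iana.org/rdap/dns.json"  # RFC 9224 bootstrap registry

def rdap_base_for_tld(tld: str) -> str:
    bootstrap = requests.get(IANA_BOOTSTRAP, timeout=10).json()
    for tlds, urls in bootstrap["services"]:
        if tld in tlds:
            return urls[0]
    raise RuntimeError(f"no RDAP service registered for .{tld}")

def rdap_domain(domain: str) -> dict:
    tld = domain.rsplit(".", 1)[-1]
    base = rdap_base_for_tld(tld).rstrip("/")
    # requests verifies the server certificate by default, so the response
    # is at least bound to the registry endpoint's HTTPS identity.
    resp = requests.get(f"{base}/domain/{domain}", timeout=10)
    resp.raise_for_status()
    return resp.json()

data = rdap_domain("example.com")
print([entity.get("roles") for entity in data.get("entities", [])])

This does not answer the separate question of whether registration contact data should be relied on for DCV at all; it only addresses the transport-authentication half of the objection.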


Based on Andrew's and my answers, I hope you can see that this point just doesn't apply.




 

Saying that the internet is fundamentally insecure is like saying that electricity is fundamentally insecure, to me. The base protocols of the internet, like IP, don't provide sufficient security properties given the importance of the modern web. But just as TCP provides in-order delivery over un-ordered IP, and TLS provides privacy and integrity over TCP (which features neither of those, save a trivial attempt at integrity aimed at signalling errors and not malicious interception), the web PKI can provide authentication of domain ownership and connection validity by layering appropriate protocols (computer and human) atop the less-capable lower layers. That to me is the essence of the mandate of the SCWG. We are to TLS what ICANN is to DNS: ICANN doesn't say that you can't return arbitrary nonsense in a DNS response from your server on a private network, but it *does* say that if you want to operate a DNS server on behalf of a gTLD, you need to meet certain requirements or ICANN simply won't point traffic at you. We should "stop pointing traffic" at certificates that were validated using WHOIS-3912 DCV, and I wish I'd pushed to do so multiple decades ago.


Apologies for oversimplifying, but I just want to clarify what I meant. The Internet was built on some principles. These principles included anonymity and clear-text communication. Encryption and authentication came later, but they were built on top of the anonymous/clear-text layers.

What does "secure the Internet" mean? There are various answers to that question, but one answer is "the assurance that communication between two points (point-to-point) is authenticated and encrypted". For this you need cryptography, but you also need to solve the key distribution problem. One solution is the WebPKI. There are other solutions (DANE, VPN, etc.).

In order to validate a Domain Name and verify a binding between a key and a name, a CA must use some parts of the unauthenticated/unencrypted Internet in order to get assurance that it is interacting/communicating/contacting the proper and authorized recipient. I don't think there is a way around that.

Dimitris.

_______________________________________________
Servercert-wg mailing list
Servercert-wg at cabforum.org <mailto:Servercert-wg at cabforum.org> 
https://lists.cabforum.org/mailman/listinfo/servercert-wg
