[cabf_validation] Validation methods used for Wildcards/ADNs
sleevi at google.com
Wed Feb 10 17:38:23 UTC 2021
On Wed, Feb 10, 2021 at 11:59 AM Corey Bonnell <Corey.Bonnell at digicert.com> wrote:
> This misses the point I was making. The use of underscore-prefixed
> subdomains is a consideration that the entire ecosystem must design for as
> we “pretend” the use of a subordinate DNS label can assert control of the
> superior node as described above.
I think we agree more than we disagree here, in as much as we have already
previously discussed that:
1) Allowing arbitrary sub-domain records (prefixed or not) to validate the
parent is problematic
2) As an intermediate step, we allowed the use of underscore-prefixed
names, which at least eliminates one class of risk (since leading
underscore records are not to be used with A/AAAA records, the
immediate risk - cloud providers offering subdomain registrations - is
at least avoided)
3) The CNAME allowance, as requested by DigiCert, was equally introduced as
an intermediate measure to assist in the migration to well-defined records
(e.g. TXT or CAA), in which we can avoid ambiguous interpretations
I think you're right to highlight the risks, but I think you're wrong
to suggest that this makes them indefinitely acceptable (that is, that
these risks are intended as the end-goal), which is how it comes across
through the comparison to 18/19 and the concerns being raised.
To be a bit more explicit about some of where we see the end-goals, so that
we can work on how to ensure all methods move towards them:
1) We want to ensure that validations are rooted in DNS. The
introduction of CAs into SSL/TLS was explicitly stated as a "short
term" solution until there was a way more closely tied with the DNS
protocol itself (e.g. DNSSEC).
- I'm not arguing we switch to DNSSEC, but with respect to how we see
the role CAs play in TLS for our browser, they exist to help map domain
names (which make up the Web Origin) to public keys
2) We want to ensure that validations are "as fresh as" DNS. In
practice, this explicitly means the reduction or elimination of reuse
of past validations, in favor of obtaining fresh ones.
- Without getting into a longer remark on DNS TTLs, it's certainly the
case that a "registration period" (e.g. up to 10 years, depending on
the TLD registry, although typically one year) is not sufficient, and a
DNS TTL of 5ms is too short. So this is a principle to balance, rather
than an absolute
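The balancing act reduces to picking a reuse window somewhere between
those two extremes. A trivial sketch (the 398-day figure is purely
illustrative here, not a quote from any requirement):

```python
# Freshness principle: a cached validation is honored only inside a
# reuse window, bounded somewhere between a DNS TTL (too short) and a
# registration period (too long). The window below is illustrative.
from datetime import datetime, timedelta, timezone

REUSE_WINDOW = timedelta(days=398)  # assumption: illustrative bound

def validation_is_fresh(validated_at: datetime, now: datetime) -> bool:
    """True if the past validation may still be relied upon."""
    return now - validated_at <= REUSE_WINDOW

now = datetime(2021, 2, 10, tzinfo=timezone.utc)
print(validation_is_fresh(now - timedelta(days=30), now))   # True
print(validation_is_fresh(now - timedelta(days=400), now))  # False
```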
3) We want to ensure that validations reflect the hierarchy of DNS. DNS is
inherently hierarchical, working from a root (".") to a TLD and downwards
through zones, either defined by the registry ("registerable" domains) or
by local policy (e.g. zone splits within an organization). Authorizations
should respect that flow.
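The hierarchy principle can be sketched as the same kind of
tree-climbing that CAA lookups perform (RFC 8659). The suffix set below
is a tiny hardcoded stand-in for the Public Suffix List, purely for
illustration:

```python
# Sketch of "authorizations follow the DNS hierarchy": climb from an
# FQDN toward the registerable domain, the way CAA lookups do. A real
# implementation would consult the Public Suffix List; this hardcoded
# set is an assumption standing in for it.

PUBLIC_SUFFIXES = {"com", "co.uk"}  # assumption: stand-in for the PSL

def authorization_chain(fqdn: str) -> list[str]:
    """Names from the FQDN up to (and including) the registerable
    domain; an authorization at any of these speaks for everything
    beneath it, never the other way around."""
    labels = fqdn.rstrip(".").split(".")
    chain = []
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in PUBLIC_SUFFIXES:
            break  # stop above the registry-controlled zone
        chain.append(candidate)
    return chain

print(authorization_chain("www.app.example.co.uk"))
# ['www.app.example.co.uk', 'app.example.co.uk', 'example.co.uk']
```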
4) We want to ensure that validations are what the Applicant "intends",
with a clear understanding. A blanket authorization of "Any key held by
this organization" is different than an authorization of "This key, with
these capabilities, for this time".
- More explicitly, request-based authorizations (this key, these
capabilities) >> delegated authorizations (any key from this provider,
whether "this provider" is "this CA" or "this cloud host")
- This is, as well, a balancing act in degrees, since there are valid use
cases, but also security risks.
- Put differently: It's quite problematic that some CAs treat
"resellers" as "Enterprise RAs", and that once a cert has been
validated, allow an unlimited number of certs (with an unlimited number
of keys) "on behalf of" the validated domain. This is inevitably a
reflection that some CAs see them as "organization binding to domain",
rather than "key binding to domain", which has some historic truth to
it, but no longer holds. Really, we should be discussing the
authorization of keys, since that is what Relying Parties are relying
upon. If an Applicant truly wants to grant carte blanche to a
third-party organization to associate an arbitrary/unbounded number of
keys with a domain (by issuing certs), then at minimum, we need that to
be explicit and unambiguous. However, the "I want to" may not be
sufficient, since ultimately, it's users who bear the risk from that.
5) We want the capabilities to be scoped to the validation performed.
- Put differently: If you validate via connecting to a port, you should
only be able to use the cert **for that port**. If you validate for DNS
Host record (A/AAAA), you should only be able to use the cert **for that
host**. If you validate via a domain resource record (such as CAA), then
your capabilities are captured in that record.
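Point 5 already has a partial expression in DNS today: RFC 8657 defines
CAA parameters that pin issuance to particular validation methods (and
ACME accounts). A purely illustrative zone fragment (the domains are
examples, not from this thread):

```text
; RFC 8657 lets a CAA "issue" record pin both the permitted CA and the
; validation methods it may use, scoping capability to the DNS record.
example.com.      IN CAA 0 issue "ca.example.net; validationmethods=dns-01"
www.example.com.  IN CAA 0 issue ";"   ; forbid issuance for this host
```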
The issue you're highlighting here with 3.2.2.4.7 is the concern with "#4"
- Can a CNAME record be interpreted as a blanket authorization for TLS
issuance? The answer is "No, it shouldn't be", and in our view, the
allowance is temporary, at best, as we work to improve the security of
users and help migrate to reliable validations.
The issue we're highlighting with 3.2.2.4.6 (to the extent of reuse),
.17/.18 is the concern with #1, #3, and #4 - the existing approach and
authorization violates all of these. One of the fundamentals of TCP and DNS
is that access to a single port should not be confused with access to the
server. While there are informal assumptions that have also emerged (e.g.
the notion of ports < 1024 being "privileged" ports), these aren't as
enshrined as the more basic assumption: control over a TCP port != control
or authorization via DNS.
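For concreteness, a minimal sketch of the methods-18/19 shape under
discussion. The /.well-known/pki-validation/ path is the BR well-known
location; the host, token, and helper names are illustrative. Note what
a match actually proves: control of an HTTP responder on one port, not
authority over the DNS node.

```python
# Sketch of the 3.2.2.4.18/.19 flow: the CA fetches a Random Value
# from a well-known path over HTTP. Helper names and the token are
# illustrative; only the well-known prefix comes from the BRs.
from urllib.parse import quote

WELL_KNOWN_PREFIX = "/.well-known/pki-validation/"

def validation_url(fqdn: str, token: str) -> str:
    """Where the CA looks for the Random Value for this FQDN."""
    return f"http://{fqdn}{WELL_KNOWN_PREFIX}{quote(token)}"

def check_response(expected_random_value: str, body: str) -> bool:
    """The agreed-upon change is present iff the response body
    carries the CA-supplied Random Value."""
    return expected_random_value in body

url = validation_url("www.example.com", "token123")
print(url)  # http://www.example.com/.well-known/pki-validation/token123
print(check_response("rv-8f3a", "rv-8f3a\n"))  # True
```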
The discussion around SRVNames (and the brokenness of name constraints) is
the only way we can really tackle #5, and that's not been the most pressing
priority for us, but it's very much a direction we plan to move towards.
While I'm sure there is ample room for debate about the relative
prioritization of these goals, and some CAs (and their customers) would
disagree, these capture some of the principles we're looking at when
we're trying to improve the security and reliability of our users'
communications, and that's why we plan to make progress here.
> I believe the concern is that pointing to a given machine via an A/AAAA
> record and serving the Random Value from a well-known URI on that machine
> is sufficient to assert control of the machine’s FQDN,
To be explicit: No, we don't think it's sufficient (e.g. this runs
afoul of #5). It is permitted to the extent we have not yet deployed
technical controls to restrict it, but I don't want a takeaway to be
that this is "fine".
> but not for child nodes of the DNS and there is no way currently for a DNS
> administrator to express that they would like to prohibit such ANEF validation
> via method 18/19. Additionally, the ANEF allowance for method 18/19 may be
> surprising for system administrators, which may introduce security issues
> if their systems do not design for such an allowance. Is this an accurate
> summary of the concern?
> > If the suggestion is to require explicitly-defined, unambiguous
> delegation via DNS, then I think that's an entirely reasonable path to
> discuss with respect to how these methods might look. They would,
> inevitably, be new methods, but the suggestion of "An explicit DNS record
> that delegates authority to use this HTTP service to perform validations
> for this domain namespace" certainly would move us closer to having a
> similar level of assurance, vis-a-vis the domain administrator's
> authorization, while also providing flexibility to reduce the need to
> update DNS. We've taken this approach before, with respect to CAA
> extensions for e-mail and phone, so there is precedent.
> We (and I imagine many other CAs) would be very supportive of developing
> such a mechanism so that Applicants can signal in DNS that they wish to
> allow ANEF validation for methods 18/19. Providing such a mechanism would
> provide a smooth transition plan for Subscribers while addressing the
> concerns you raise.
The meta-goals above hopefully help us make progress, by understanding what
some of the essential properties here are, and which have been reflected in
our past discussions. At a minimum, it needs to be unambiguously explicit,
rooted in DNS, regularly revalidated, and appropriately authorized.
While we're open to discussing how to achieve that, I do think it's
important to consider that we don't see offering additional flexibility to
CAs/Applicants as being able to block the need to improve the security of
our users. Additionally, our first priority remains the security of our
users, and so even if site operators may wish to do something insecurely
(whether it be ANEF delegation via HTTP, perpetual delegation,
authorization of arbitrary keys), our concern will remain on ensuring our
users' security needs are met first and foremost.
We need to continue to make progress - not only on .18/.19, but for all
of 3.2.2.4. We're hardly at a "mission accomplished" state - we're
merely in
the conclusion of Phase 1, having (finally) phased out Any Other Method
while minimizing disruption, but still have many, many improvements to
continue to make.