[cabfpub] CAA concerns (and potential solutions)

Kirk Hall Kirk.Hall at entrustdatacard.com
Fri Oct 28 21:43:47 UTC 2016

Brilliant stuff.

From: Public [mailto:public-bounces at cabforum.org] On Behalf Of Peter Bowen via Public
Sent: Friday, October 28, 2016 9:53 AM
To: Ryan Sleevi <sleevi at google.com>
Cc: Peter Bowen <pzb at amzn.com>; CA/Browser Forum Public Discussion List <public at cabforum.org>
Subject: Re: [cabfpub] CAA concerns (and potential solutions)

On Oct 28, 2016, at 9:41 AM, Ryan Sleevi <sleevi at google.com<mailto:sleevi at google.com>> wrote:

On Thu, Oct 27, 2016 at 8:33 PM, Peter Bowen via Public <public at cabforum.org<mailto:public at cabforum.org>> wrote:
I propose that this be mitigated by adopting a two-prong rule for CAA:
1) By default, CAs must treat the presence of CAA records which do not include them as “hard fail” and not issue.
2) However, if the CA has issued an Enterprise EV RA certificate containing a valid authorization domain and has logged it in at least <n> public CT logs, the CA may treat CAA for those FQDNs and Wildcard DNs matching the authorization domain as “soft fail” and issue even if the CAA record specifies otherwise.
The CA must internally log any certificates that are issued after a “soft fail” and report such issuance via any iodef contact in the CAA record.
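As a rough sketch of the decision logic the two prongs describe (function and parameter names here are illustrative, not from any CA's actual implementation or from the Baseline Requirements):

```python
# Hypothetical sketch of the proposed two-prong CAA rule.
# All names are illustrative assumptions, not a real CA's code.

def caa_permits(caa_issuers, ca_domain, has_enterprise_ra_cert):
    """Decide whether issuance may proceed under the proposed rule.

    caa_issuers: set of issuer domains from the relevant CAA record,
                 or None if no CAA record exists.
    ca_domain:   the issuing CA's recognized issuer domain.
    has_enterprise_ra_cert: True if a CT-logged Enterprise EV RA
                 certificate covers the authorization domain.
    Returns (permitted, soft_fail); a soft-fail issuance must be
    logged internally and reported via any iodef contact.
    """
    if caa_issuers is None:        # no CAA record at all: issuance allowed
        return True, False
    if ca_domain in caa_issuers:   # CA is explicitly authorized
        return True, False
    if has_enterprise_ra_cert:     # prong 2: soft fail, issue but report
        return True, True
    return False, False            # prong 1: hard fail, do not issue
```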

Your last sentence regarding iodef suggests you believe this isn't already required.

It is not required today, as CAs are not required to check for CAA records.

The second concern is around issuance latency.  If a certificate has dozens of subject alternative names, or a CA is issuing a massive number of certificates, the full CAA checking algorithm can be slow.
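To put a rough number on that concern: the CAA checking algorithm climbs the DNS tree from each name toward the root, so with no caching the worst-case query count is roughly the number of SANs times their label depth. A back-of-the-envelope model (illustrative only; it ignores caching and alias chasing):

```python
# Rough worst-case estimate of CAA lookups for one certificate:
# approximately one query per label level of each SAN, assuming no
# cache hits (an illustrative model, not the full RFC 6844 algorithm).

def worst_case_caa_queries(sans):
    """sans: list of FQDNs; returns total uncached lookups."""
    return sum(len(name.rstrip(".").split(".")) for name in sans)

# A 100-SAN certificate with names shaped like "www.example.com"
# needs on the order of 300 uncached lookups.
print(worst_case_caa_queries(["www.example.com"] * 100))
```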

I'm sorry, but I'm going to specifically object to this statement. It's not been shown to be slow - merely, some members have postulated that it "might" be slow, without providing concrete evidence or data, and in the face of experience otherwise.

If members want to support this line of inquiry, data needs to be provided.

OK.  I used a machine with a locally running caching resolver.  I then did the following as a test:

Fetch top-1m.csv.zip
Unzip it
head -n 200 top-1m.csv | cut -d, -f2 | sed -r -e 's/^/www./' > top200.txt
echo 'nameserver' > resolv.conf
time (ruby -e 'IO.foreach("top200.txt"){|l| l.strip!; loop { puts l + "."; l = l.split(".")[1..-1].join("."); break if l.empty? }}' | while read N; do drill -c resolv.conf "$N" IN TXT; done)

real      1m5.424s
user     0m0.476s
sys       0m0.296s

Running the last line again (i.e. after the named cache is warm):

real      0m15.105s
user     0m0.444s
sys       0m0.404s

One CA on the call said they have an active contract that requires them to support issuance rates of 6M per hour (approximately 1670 per second).  Even if you reduce this by orders of magnitude, synchronous CAA checking at issuance time doesn't make sense at that rate.

Also consider the offline issuance process — e.g. a CA running at a factory, disconnected from the Internet.  How is this to be handled?

This should allow customers who have a need for massive issuance rates of unique hostnames to enter a CAA record at the common suffix label allowing the CA to have a near 100% cache hit rate on DNS look ups.
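The climbing behavior that makes this work can be sketched with a hypothetical helper (illustrative only, not any RFC reference implementation) listing the domains a CAA check would query for a given FQDN; names sharing a common suffix converge on the same parent queries, so a CAA record at that suffix is served almost entirely from cache:

```python
# Hypothetical illustration of CAA tree climbing: the domains queried
# for one FQDN, leaf first. Hostnames under a common suffix all climb
# through the same parent domains, so a CAA record placed at that
# suffix is a cache hit for every name after the first lookup.

def caa_candidates(fqdn):
    """Return the domains a CAA check climbs through, leaf first."""
    labels = fqdn.rstrip(".").split(".")
    return [".".join(labels[i:]) for i in range(len(labels))]

print(caa_candidates("host1.certs.example.com"))
# host1.certs.example.com, host2.certs.example.com, ... all share the
# certs.example.com / example.com / com candidates.
```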

This seems a premature optimization based on unfounded concerns from those without implementation experience. As such, I have trouble weighing this request as reasonable at all.

I’m sure the numbers above can be improved by reusing processes, but they do show there are valid concerns.

