[Servercert-wg] Ballot SC22: Reduce Certificate Lifetimes

Ryan Sleevi sleevi at google.com
Mon Aug 19 16:27:42 MST 2019


On Mon, Aug 19, 2019 at 6:56 PM Tobias S. Josefowitz <tobij at opera.com>
wrote:

> In line with that, I doubt the referenced post would prime the
> participants of said survey to much but their own, en masse not terribly
> informed opinion.
>
> It would go against my expectations if any such survey showed a nuanced
> understanding of regulatory challenges present in the ecosystem.
>

Hi Tobi,

Just to respond to a few snippets from your mail. I certainly didn't have
very high expectations for the survey, based on the information previously
shared, so I doubt my perspective on the results of the specific questions
is particularly valuable here. That's why I was hoping to understand more
about DigiCert's goals and objectives in sharing the results, which I think
is far more important.

Part of the concern is that while DigiCert's post in this thread didn't
acknowledge the selection method, DigiCert's past communications from
not-yet-public calls made it clear that they were not after an objective
selection, and were carefully curating the list of customers solicited for
feedback. That is, while presented as "a customer survey" and "an
overwhelming number of customers", it was in fact a limited sample of
certain "high-value" customers, and thus at best "an overwhelming number of
hand-selected customers who responded to the survey".

I wasn't sure if that remained the selection methodology, or if something
more rigorous had been applied. However, given that DigiCert did not
provide context as to what they saw as the particular value of the survey,
or its relevance to the discussion, it was also unclear how to interpret
the results they did decide to share. Even assuming a perfectly fair and
balanced survey, it seemed useful to highlight how, in a responsibly
selected study, techniques such as priming can spoil the results and thus
impair achieving those goals.

While I certainly understand that academic rigor is not the objective here,
it's important to consider these factors when evaluating the results
DigiCert shared. I also wanted to help DigiCert here: since they're
laboriously working to summarize respondents' free-form text responses, if
the survey was spoiled, or if the desired objective was fundamentally
unobtainable due to the selection method, perhaps it's not worth that
effort or further discussion? That would surely save time and energy, which
could then be used for more productive engagement.