Re: [ksk-rollover] Re: Architectural reconsideration on ICANN's Root Zone KSK rollover
On Wed 2018-01-31 10:55:53+0800 Davey wrote:
So I'm inspired that an additional set of root servers, and coordination between server and resolver, are not necessary for this purpose. All the work can be done on the server side.
This is true.
It can be implemented on the server side with "two logical views" (similar to, but different from, BIND's multiple-view mechanism). When the authoritative server recognizes resolvers that support RFC 5011 (via RFC 8145, or combined with kskroll-sentinel), it can roll the key only for them: roll the KSK not once for all, but per resolver. In that case no modification to the resolver is needed; only the root server operators need to do this work, so there is no interoperability problem. No new DNS specification is needed, which shortens the timeline and reduces the concerns.
There is still risk in this. Many end users are behind NATs. This means that some queries from an IP can signal that it is using the new key, while others could signal it is not.

Instead of making this some sort of automatic process, it could be entirely opt-in if root servers configured views based on destination address and simply listened on an additional anycast address. But this still leaves the problem of how to advertise the new address, and wouldn't deal with the problem of resolvers which switched to the new address even though they weren't configured to trust the new keys.

If we could convince the resolver implementations to ship with multiple hints files, and to select one at startup based on configured trust anchors, we could see the traffic shift as resolvers were updated.

-- 
Robert Story <http://www.isi.edu/~rstory>
USC Information Sciences Institute <http://www.isi.edu/>
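Davey's per-resolver idea could be sketched roughly as follows. This is a purely illustrative Python sketch, not any real server's implementation; the view strings and data structures are invented, though the key tags (19036 for KSK-2010, 20326 for KSK-2017) are the real ones.

```python
# Sketch of the "two logical views" idea: the authoritative server
# picks which DNSKEY view to serve based on whether the querying
# resolver has signaled support for the new KSK (e.g. via RFC 8145
# trust-anchor telemetry). Names here are invented for illustration.

OLD_KSK_VIEW = "DNSKEY RRset for KSK-2010"
NEW_KSK_VIEW = "DNSKEY RRset for KSK-2017"

KSK_2017_TAG = 20326  # key tag of the 2017 root KSK

# Resolvers observed signaling the new trust anchor (RFC 8145).
signaled_new_key: set[str] = set()

def record_signal(resolver_ip: str, trust_anchors: set[int]) -> None:
    """Record an RFC 8145 trust-anchor signal from a resolver."""
    if KSK_2017_TAG in trust_anchors:
        signaled_new_key.add(resolver_ip)

def select_view(resolver_ip: str) -> str:
    """Roll the key per resolver: new view only for ready resolvers."""
    if resolver_ip in signaled_new_key:
        return NEW_KSK_VIEW
    return OLD_KSK_VIEW

record_signal("192.0.2.1", {19036, 20326})  # signals both keys
print(select_view("192.0.2.1"))    # gets the new-key view
print(select_view("203.0.113.9"))  # never signaled: old-key view
```

Robert's NAT objection maps directly onto this sketch: two different resolvers behind one NAT share a `resolver_ip`, so the set membership test cannot tell them apart.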
On 31 Jan 2018, at 5:27, Robert Story wrote:
On Wed 2018-01-31 10:55:53+0800 Davey wrote:
So I'm inspired that an additional set of root servers, and coordination between server and resolver, are not necessary for this purpose. All the work can be done on the server side.
This is true.
It can be implemented on the server side with "two logical views" (similar to, but different from, BIND's multiple-view mechanism). When the authoritative server recognizes resolvers that support RFC 5011 (via RFC 8145, or combined with kskroll-sentinel), it can roll the key only for them: roll the KSK not once for all, but per resolver. In that case no modification to the resolver is needed; only the root server operators need to do this work, so there is no interoperability problem. No new DNS specification is needed, which shortens the timeline and reduces the concerns.
There is still risk in this. Many end users are behind NATs.
a large majority of end users are behind IPv4 NATs.
This means that some queries from an IP can signal that it is using the new key, while others could signal it is not.
Instead of making this some sort of automatic process, it could be entirely opt-in if root-servers configured views based on destination address and simply listened on an additional anycast address.
But this still leaves the problem of how to advertise the new address, and wouldn't deal with the problem of resolvers which switched to the new address even though they weren't configured to trust the new keys.
If we could convince the resolver implementations to ship with multiple hints files, and to select one at startup based on configured trust anchors, we could see the traffic shift as resolvers were updated.
I’m not sure that supporting multiple hints files would really help. I might be wrong. Marc.
-- 
Robert Story <http://www.isi.edu/~rstory>
USC Information Sciences Institute <http://www.isi.edu/>

_______________________________________________
ksk-rollover mailing list
ksk-rollover@icann.org
https://mm.icann.org/mailman/listinfo/ksk-rollover
On Wed 2018-01-31 07:53:51-0500 Marc wrote:
I’m not sure that supporting multiple hints files would really help. I might be wrong.
I think it could give us better information than kskroll-sentinel on how many folks are ready for the roll. To extend the idea a bit, if root servers listened on 3 addresses and there were 3 hints files (2017-ready, 2010-only, neither), we would know the status of every resolver that was updated, as soon as it was updated, without having to do any testing using ad campaigns that load pictures of fish. :-)

[note: speaking for myself, not my employer.]

-- 
Robert Story <http://www.isi.edu/~rstory>
USC Information Sciences Institute <http://www.isi.edu/>
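Robert's three-way classification could be sketched like this. The sketch is illustrative only: the hints-file descriptions are invented, but the key tags (19036 for KSK-2010, 20326 for KSK-2017) are the real ones.

```python
# Sketch of the "three hints files" idea: a resolver picks its root
# hints (and hence which address it queries) at startup, based on
# which trust anchors it has configured. The roots can then count
# resolvers by class just by watching which address gets queried.

HINTS = {
    "2017-ready": "hints pointing at address A (trusts KSK-2017)",
    "2010-only":  "hints pointing at address B (trusts only KSK-2010)",
    "neither":    "hints pointing at address C (no root trust anchor)",
}

KSK_2010_TAG = 19036
KSK_2017_TAG = 20326

def choose_hints(configured_trust_anchors: set[int]) -> str:
    """Classify this resolver so the roots can count it by class."""
    if KSK_2017_TAG in configured_trust_anchors:
        return "2017-ready"
    if KSK_2010_TAG in configured_trust_anchors:
        return "2010-only"
    return "neither"

print(HINTS[choose_hints({19036, 20326})])  # 2017-ready
print(HINTS[choose_hints({19036})])         # 2010-only
print(HINTS[choose_hints(set())])           # neither
```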
On Wed, Jan 31, 2018 at 9:15 AM, Robert Story <rstory@isi.edu> wrote:
On Wed 2018-01-31 07:53:51-0500 Marc wrote:
I’m not sure that supporting multiple hints files would really help. I might be wrong.
I think it could give us better information than kskroll-sentinel on how many folks are ready for the roll. To extend the idea a bit, if root servers listened on 3 addresses and there were 3 hints files (2017-ready, 2010-only, neither), we would know the status of every resolver that was updated, as soon as it was updated, without having to do any testing using ad campaigns that load pictures of fish. :-)
So, RFC 8145 already gives information very similar to this... and it turns out that the information doesn't show what we thought it would -- it demonstrates the distribution of keys across *resolvers*. That's a nice metric, but fundamentally fairly useless; in my basement I have a machine running BIND in a Docker instance. It only has the old key (because Docker[0]) -- this is interesting from an academic standpoint, but doesn't actually tell us anything - no-one is querying this instance.

What we need (IMO, YMMV, etc) is something which exposes this information to *users* -- in an ideal world, there would be "no resolver left behind" - unfortunately that doesn't seem realistic (managed vs trusted-keys, non-5011 implementations, read-only filesystems, etc), so I think we need to focus on "minimal users left behind".

I guess you could try to scale 8145 (or multiple hints files) by the number of users using each resolver, but, well, that seems like you are back to the first issue.
[note: speaking for myself, not my employer.]
Hey, me too! <waves/>

W

[0]: The Docker instance doesn't have persistent storage, because it is part of a test infrastructure. It starts RFC 5011, but usually doesn't complete it (because timers), or it completes it and then I restart some tests and the Docker image reloads. Yes, this is a 10-minute fix, but...
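Warren's resolvers-vs-users distinction can be made concrete with a toy calculation. All the resolver names and user counts below are made up purely for illustration.

```python
# Sketch of why per-resolver counts mislead: raw RFC 8145 signals
# count *resolvers*, but readiness should be weighted by the number
# of *users* behind each one. Figures here are invented.

signals = {
    "big-isp-resolver": {"ready": True,  "est_users": 5_000_000},
    "basement-bind":    {"ready": False, "est_users": 1},
    "campus-resolver":  {"ready": False, "est_users": 20_000},
}

resolver_ready = sum(1 for s in signals.values() if s["ready"]) / len(signals)
total_users = sum(s["est_users"] for s in signals.values())
user_ready = sum(s["est_users"] for s in signals.values() if s["ready"]) / total_users

print(f"resolvers ready: {resolver_ready:.0%}")  # 33%
print(f"users ready:     {user_ready:.1%}")      # 99.6%
```

One ready ISP resolver outweighs the other two instances entirely once user populations are considered, which is Warren's point: the basement BIND stuck on the old key drags down the resolver count while affecting no users.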
-- Robert Story <http://www.isi.edu/~rstory> USC Information Sciences Institute <http://www.isi.edu/>
-- I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf
Hi Marc and Robert,
There is still risk in this. Many end users are behind NATs.
a large majority of end users are behind IPv4 NATs.
Thanks for pointing out the weakness of this mechanism. I think DNS Cookies can help in this scenario: DNS Cookies (RFC 7873) are already standardized and deployed.

Davey
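Davey's DNS Cookie suggestion could address the NAT ambiguity roughly as follows. This is an illustrative sketch, not a real server implementation; the addresses and cookie values are invented. In RFC 7873, each client picks its own 8-byte client cookie, so two resolvers sharing one NAT address present different cookies.

```python
# Sketch: key readiness state by (source IP, client cookie) instead
# of by source IP alone, so multiple resolvers behind one NAT are
# tracked as distinct clients. Values below are invented.

readiness: dict[tuple[str, bytes], bool] = {}

def record(src_ip: str, client_cookie: bytes, signals_new_key: bool) -> None:
    """Track each (address, cookie) pair as a distinct resolver."""
    readiness[(src_ip, client_cookie)] = signals_new_key

# Two resolvers behind the same NAT address, different cookies:
record("198.51.100.7", b"\x01" * 8, True)   # ready for KSK-2017
record("198.51.100.7", b"\x02" * 8, False)  # still KSK-2010-only

print(readiness[("198.51.100.7", b"\x01" * 8)])  # True
print(readiness[("198.51.100.7", b"\x02" * 8)])  # False
```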
participants (4)
- Davey Song (宋林健)
- Marc Blanchet
- Robert Story
- Warren Kumari