followup of DNSSEC Workshop at ICANN64
Hi all, During the DNSSEC Workshop at ICANN64, there was discussion regarding future KSK rollovers. https://64.schedule.icann.org/meetings/961939 This is a followup to what I said there. I support regular Root Zone KSK rollovers, for operational maturity and DNS software maturity. The important thing is doing it regularly. The frequency could be once every 2-3 years, and less than every 5 years. -- Yoshiro YONEYA
Yoshiro Yoneya and everyone interested in past and future KSK rollovers, On 13/03/2019 09.32, Yoshiro YONEYA wrote:
During the DNSSEC Workshop at ICANN64, there was discussion regarding future KSK rollovers.
https://64.schedule.icann.org/meetings/961939
This is a followup to what I said there.
I support regular Root Zone KSK rollovers, for operational maturity and DNS software maturity. The important thing is doing it regularly. The frequency could be once every 2-3 years, and less than every 5 years.
I was not at the workshop, but I agree rolls should be more frequent. Personally I would like to see rolls frequent enough that everything around a roll is automated. However, I am also happy if we start with a cadence of 2-3 years between rolls. 😊 Cheers, -- Shane
On Mar 14, 2019, at 5:58 PM, Shane Kerr <shane@time-travellers.org> wrote:
Personally I would like to see rolls frequent enough that everything around a roll is automated.
I mostly agree. I think the key thing is that key rolls must be *normal*, and the software must therefore be designed with that assumption. For developers to believe that it is normal, it must be "frequent enough", whatever that means. I personally might vote for "quarterly" or "annual" if the implication is only that operators need to be aware that it is happening in case something breaks. 2-3 years might actually be OK, but for sure not "every five years". -------------------------------------------------------------------------------- Victorious warriors win first and then go to war, Defeated warriors go to war first and then seek to win. Sun Tzu
On Thu, Mar 14, 2019 at 6:42 PM Fred Baker <fredbaker.ietf@gmail.com> wrote:
On Mar 14, 2019, at 5:58 PM, Shane Kerr <shane@time-travellers.org> wrote:
Personally I would like to see rolls frequent enough that everything around a roll is automated.
I mostly agree. I think the key thing is that key rolls must be *normal*, and the software must therefore be designed with that assumption. For developers to believe that it is normal, it must be "frequent enough", whatever that means. I personally might vote for "quarterly" or "annual" if the implication is only that operators need to be aware that it is happening in case something breaks. 2-3 years might actually be OK, but for sure not "every five years".
So, my original "gut feel" was approximately every year, and I still feel that that is roughly the right frequency -- but, I think that we first need to figure out what the cause of the increase in DNSKEY lookups is - it concerns me that we predicted no impact from the revocation, and we got... this. I think that, assuming we figure out the causes of the increase (and understand them well enough that we are fairly sure that they won't jump again!), my gut still says ~1year -- but, more research needed... W
_______________________________________________ ksk-rollover mailing list ksk-rollover@icann.org https://mm.icann.org/mailman/listinfo/ksk-rollover
-- I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf
On 14. 03. 19 12:01, Warren Kumari wrote:
So, my original "gut feel" was approximately every year, and I still feel that that is roughly the right frequency -- but, I think that we first need to figure out what the cause of the increase in DNSKEY lookups is - it concerns me that we predicted no impact from the revocation, and we got... this. I think that, assuming we figure out the causes of the increase (and understand them well enough that we are fairly sure that they won't jump again!), my gut still says ~1year -- but, more research needed...
As a producer of a DNS-validating CPE device/router, I must say I am not very excited about frequent rollovers. If your device sits in a retailer's store for some time, you might be in trouble. So I would prefer somewhat longer periods. But what is more important is how far in advance the new key is known/published. Ondrej
W
-- ( CZ.NIC z.s.p.o. ) ------------------------------------------------- Ondrej Filip - CEO Office : Milesovska 5, Praha 3, Czech Republic Email : ondrej.filip@nic.cz http://www.nic.cz Private: feela@network.cz -------------------------------------------------
Ondrej Filip <ondrej.filip@nic.cz> wrote: >> So, my original "gut feel" was approximately every year, and I still >> feel that that is roughly the right frequency -- but, I think that we >> first need to figure out what the cause of the increase in DNSKEY >> lookups is - it concerns me that we predicted no impact from the >> revocation, and we got... this. I think that, assuming we figure out >> the causes of the increase (and understand them well enough that we >> are fairly sure that they won't jump again!), my gut still says ~1year >> -- but, more research needed... > As a producer of a DNS-validating CPE device/router, I must say I am > not very excited about frequent rollovers. If your device sits in a > retailer's store for some time, you might be in trouble. So I would > prefer somewhat longer periods. But what is more important is how far in > advance the new key is known/published. I am also concerned about such devices. Are you doing RFC5011? If not, would you be willing to do that? I know that Turris does automatic updates/patches... how much time would you need to see the new key in order to be sure that you had incorporated new anchors via software updates? When you said "store" above, I was thinking that the CPE device was deployed *at* a store. (One of my ISP customers has about a thousand brick-and-mortar retail locations with similar devices, and they are lucky to get any physical maintenance.) I realize now that you meant that the device is a box at a store (like Amazon...) and it takes a while to get plugged in. I am particularly concerned about such devices, as they do not get updates while turned off. I think that we need to find a way to extend RFC5011 to provide a way to chain to the current state of the art, and I think that turning DNSSEC off to do software patches is the wrong idea. -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
Ondrej, On 14/03/2019 14.57, Ondrej Filip wrote:
On 14. 03. 19 12:01, Warren Kumari wrote:
So, my original "gut feel" was approximately every year, and I still feel that that is roughly the right frequency -- but, I think that we first need to figure out what the cause of the increase in DNSKEY lookups is - it concerns me that we predicted no impact from the revocation, and we got... this. I think that, assuming we figure out the causes of the increase (and understand them well enough that we are fairly sure that they won't jump again!), my gut still says ~1year -- but, more research needed...
As a producer of a DNS validating CPE device/router, I must say, I am not very excited about frequent roll-overs. If your device stays at a retailer store for some time, you might be in a trouble. So I would prefer some longer periods. But it is more important how much in advance is the new key known/published.
As far as I know, patching is still the only solution we have for securing software on the Internet. That means there must be some way to get software updates to the device. A device can detect a mismatch between the root KSK that it has and the one at the root servers. In that case, it can disable the root trust anchor until it updates the KSK, in whatever way it thinks is reliable enough - probably by updating the package that configured the KSK. My own suggestion would be to include a trust anchor for a zone that serves updated software, since that would minimize the risk of being sent to bogus servers looking for updates. This trust anchor can live forever if you want, because of the awesome decision that DNSKEY records have no lifetime. 😉 Presumably any software update mechanism has its own signatures as well, so the chance of actually getting invalid updates seems minimal. There is already a need to disable DNSSEC at certain times, for example before a device has synchronized its clock, so fiddling with trust anchors is probably something that the device has to do anyway. Maybe someone already has a BCP in this area written down? If not, maybe there should be one? Cheers, -- Shane
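Shane's mismatch-and-disable behaviour can be sketched in a few lines. This is a hypothetical illustration, not code from any shipping CPE device: the key tags and the `update_anchor` hook are made-up placeholders.

```python
# Sketch (illustrative only) of the mismatch check Shane describes:
# compare the key tags of the locally configured root trust anchors
# against the key tags seen in the root's DNSKEY RRset, and fall back
# to an update path when they no longer intersect.

def trust_anchor_matches(configured_tags, observed_dnskey_tags):
    """True if at least one configured anchor is still in the DNSKEY RRset."""
    return bool(set(configured_tags) & set(observed_dnskey_tags))

def check_root_anchor(configured_tags, observed_dnskey_tags, update_anchor):
    if trust_anchor_matches(configured_tags, observed_dnskey_tags):
        return "validating"
    # Mismatch: disable the root trust anchor until the KSK is updated,
    # e.g. by refreshing the package that ships the anchor.
    update_anchor()
    return "validation-disabled-pending-update"

# Example: device shipped knowing only KSK-2010 (tag 19036); the zone
# now serves only KSK-2017 (tag 20326).
state = check_root_anchor([19036], [20326], update_anchor=lambda: None)
print(state)  # validation-disabled-pending-update
```

The interesting design question, raised in the replies, is what an attacker can achieve by deliberately triggering the fallback path.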
Shane Kerr <shane@time-travellers.org> wrote: > A device can detect a mismatch between the root KSK that it has and the > one at the root servers. In that case, it can disable the root trust > anchor until it updates the KSK, in whatever way it thinks is reliable > enough - probably by updating the package that configured the KSK. Cool. So I can get DNSSEC turned off on those by breaking the . signatures. The device will turn off DNSSEC, and I can do whatever it was that DNSSEC was preventing. > My own suggestion would be to include a trust anchor for a zone that > serves updated software, since that would minimize the risk of being > sent to bogus servers looking for updates. This trust anchor can live > forever if you want, because of the awesome decision that DNSKEY > records have no lifetime. 😉 Presumably any software update mechanism > has its own signatures as well, so the chances of actually getting > invalid updates seems minimal. This is a reasonable suggestion, but you need to include NS-glue records/hints as well. Some update systems (yum comes to mind) have many many anchors, mirrors, which are discovered from lists and by measurements. That's a lot of glue.... it might be better to have a single source for a new trust anchor package, and just update that. > There is already a need to disable DNSSEC at certain times, for example > before a device has synchronized its clock, so fiddling with trust > anchors is probably something that the device has to do anyway. No, that's not true. You don't need to disable DNSSEC, you just need to disable checking the expiry on the records. That opens one up to replay attacks, but not substitution attacks. OpenWRT's dnsmasq already does this. > Maybe someone already has BCP in this area written down? If not, maybe > it should be? Given your domain, I guess synchronizing your clock is a regular problem :-) What happens if you go back to the 1990s? Do you un-type-roll-over? 
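Michael's point about skipping only the temporal checks (as OpenWRT's dnsmasq reportedly does) can be sketched as follows. This is an illustrative toy, not dnsmasq's actual code; the RRSIG is modelled as a plain dict and the cryptographic verification is abstracted into a flag.

```python
# Sketch: a device without a trusted clock keeps validating signatures
# and only skips the inception/expiration checks, rather than disabling
# DNSSEC entirely. This blocks substitution attacks but permits replay.

def validate_rrsig(rrsig, now=None, check_time=True, crypto_ok=True):
    """Return True if the RRSIG is acceptable. With check_time=False
    (clock not yet synchronized) only the cryptographic check applies."""
    if not crypto_ok:          # the signature itself must always verify
        return False
    if check_time and now is not None:
        if not (rrsig["inception"] <= now <= rrsig["expiration"]):
            return False
    return True

sig = {"inception": 1_500_000_000, "expiration": 1_500_600_000}
# Before NTP sync: accept an expired but cryptographically valid record.
print(validate_rrsig(sig, now=1_600_000_000, check_time=False))  # True
```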
:-) :-) And do you have a HOSTS.TXT for even earlier visits? -- Michael Richardson <mcr+IETF@sandelman.ca>, Sandelman Software Works -= IPv6 IoT consulting =-
Yoshiro YONEYA <yoshiro.yoneya@jprs.co.jp> wrote: > During the DNSSEC Workshop at ICANN64, there was discussion regarding > future KSK rollovers. > https://64.schedule.icann.org/meetings/961939 > This is a followup to what I said there. > I support regular Root Zone KSK rollovers, for operational maturity and > DNS software maturity. > The important thing is doing it regularly. The frequency could be once every 2-3 years, > and less than every 5 years. I also want regular rollover, and I'd like it to be frequent enough that it gets tested. I also want it infrequent enough to never be without an anchor. So, I feel uncomfortable with this frequency. I don't have much in the way of facts, just gut instinct. **It feels too long and yet too short** I think it should be either every ten years, or every year. I'd like to be able to take a Long-Term-Support (LTS) release DVD (kept on physical media, and therefore known not to have been tampered with) of some OS, install it securely in its entirety, and have it apply its updates using DNSSEC. I think it's reasonable that a live boot/install device do RFC5110 to update itself before reaching out to update software, but I don't think that we leave the chain of keys in place long enough for a 3-5 year LTS to be able to catch up. That leaves the system turning off DNSSEC in order to get new software with new trust anchors. Yes, the new software might be signed with known trust anchors, so that chain could be intact. But, RFC5110 ought to let us run the original software. Maybe this desire is controversial. (Why would a paranoid person use such an old release? There are a number of reasons I can think of, some of them involving investigation of other potentially compromised software tool chains. Is this enough justification? Maybe.) I think that we need a broad software industry survey of software release scheduling and patching to inform us about how to include keys. Maybe someone has already done this? This is as much social science as anything else.
I wonder if the chain of root KSKs could get moved to another point so that we'd have a record of forward signatures? -- Michael Richardson <mcr+IETF@sandelman.ca>, Sandelman Software Works -= IPv6 IoT consulting =-
Michael Richardson <mcr+ietf@sandelman.ca> wrote:
I also want regular rollover, and I'd like it to be frequent enough that it gets tested. I also want it infrequent enough to never be without an anchor.
Trust anchor lifetime can be decoupled from rollover frequency. If keys are generated a few years in advance of going into active use, there is plenty of time for them to be disseminated beforehand. They do not have to be pre-published in the zone (although that is what RFC 5011 was designed for); they can be distributed out of band by software updates or other means. If there are annual rollovers with keys generated N years in advance, at any time there will be N pre-generated keys, one of which might be pre-published in the zone, one active KSK in production, and maybe one in retirement. Tony. -- f.anthony.n.finch <dot@dotat.at> http://dotat.at/ Irish Sea: West or southwest 6 to gale 8, decreasing 5 for a time. Rough, becoming moderate. Rain. Moderate or good.
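Tony's schedule can be modelled as a toy calculation. This sketch assumes annual rollovers and keys generated `lead_years` ahead; the naming scheme (`KSK-<year>`) and the one-year retirement window are illustrative assumptions, not actual root zone policy.

```python
# Toy model of the decoupled schedule: at any given year there are
# lead_years pre-generated keys, one active KSK, and one retired key
# (once the schedule reaches steady state).

def key_schedule(year, first_year=2020, lead_years=3):
    """Map key name -> 'pre-generated' | 'active' | 'retired' for a year.
    Keys are generated each year; each goes live lead_years later."""
    states = {}
    for gen_year in range(first_year, year + 1):
        name = f"KSK-{gen_year}"
        active_year = gen_year + lead_years
        if year < active_year:
            states[name] = "pre-generated"
        elif year == active_year:
            states[name] = "active"
        elif year == active_year + 1:
            states[name] = "retired"
        # Older keys have left the system entirely.
    return states

# Steady state in 2024: KSK-2021 active, KSK-2020 retired,
# KSK-2022..2024 pre-generated and available for dissemination.
print(key_schedule(2024))
```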
Tony Finch <dot@dotat.at> wrote: >> I also want regular rollover, and I'd like it to be frequent enough that it >> gets tested. I also want it infrequent enough to never be without an anchor. > Trust anchor lifetime can be decoupled from rollover frequency. > If keys are generated a few years in advance of going into active use, > there is plenty of time for them to be disseminated beforehand. They do > not have to be pre-published in the zone (although that is what RFC 5011 > was designed for); they can be distributed out of band by software updates > or other means. If there are annual rollovers with keys generated N years > in advance, at any time there will be N pre-published keys one of which > might be pre-published in the zone, one active KSK in production, and > maybe one in retirement. Yes, I'd like to do that. I'd like N=10, and the roll-over frequency to be yearly. -- Michael Richardson <mcr+IETF@sandelman.ca>, Sandelman Software Works -= IPv6 IoT consulting =-
On 3/14/19 12:38 PM, Michael Richardson wrote:
If keys are generated a few years in advance of going into active use, there is plenty of time for them to be disseminated beforehand. They do not have to be pre-published in the zone (although that is what RFC 5011 was designed for); they can be distributed out of band by software updates or other means. If there are annual rollovers with keys generated N years in advance, at any time there will be N pre-generated keys, one of which might be pre-published in the zone, one active KSK in production, and maybe one in retirement.
Yes, I'd like to do that. I'd like N=10, and the roll-over frequency to be yearly.
The problem with generating that many keys out into the future is that they then become hostages to fortune should any issues arise during that time-span with the integrity of those keys: e.g. a breach which causes the private keys to be disclosed, flaws being discovered in the algorithm in use, or in the processes used to generate the keys, etc. Which would likely mean a complete reset for new keys to be generated, and a very large pile of baked-in, pre-disseminated keys needing to be revoked. The overall approach and annual rollover make sense to me, but I think care needs to be taken with the numbers proposed. Keith
>>> If keys are generated a few years in advance of going into active >>> use, there is plenty of time for them to be disseminated >>> beforehand. They do not have to be pre-published in the zone >>> (although that is what RFC 5011 was designed for); they can be >>> distributed out of band by software updates or other means. If >>> there are annual rollovers with keys generated N years in advance, >>> at any time there will be N pre-generated keys, one of which might >>> be pre-published in the zone, one active KSK in production, and >>> maybe one in retirement. On 3/14/19 12:38 PM, Michael Richardson wrote: >> Yes, I'd like to do that. I'd like N=10, and the roll-over frequency >> to be yearly. Keith Mitchell <keith@dns-oarc.net> wrote: > The problem with generating that many keys out into the future is that they > then become hostages to fortune should any issues arise during that > time-span with the integrity of those keys: e.g. a breach which causes > the private keys to be disclosed, flaws being discovered in the > algorithm in use, or in the processes used to generate the keys, etc. > Which would likely mean a complete reset for new keys to be generated, > and a very large pile of baked-in, pre-disseminated keys needing to be revoked. It seems that these issues exist if there are *any* keys generated before use, independently of the number of keys. Based upon my reading of the spec sheet of the HSM that ICANN uses, it can store ~1K key pairs, so it's not like we need two devices for 10 vs 5 keys. -- Michael Richardson <mcr+IETF@sandelman.ca>, Sandelman Software Works -= IPv6 IoT consulting =-
> On Mar 15, 2019, at 4:30 PM, Michael Richardson <mcr+ietf@sandelman.ca> > wrote: > It seems that these issues exist if there are *any* keys generated before > use, independently of the number of keys. Fred Baker <fred@isc.org> wrote: > To my mind, the argument has the issue backwards. The issue is that the > resolver needs to be able to resolve keys currently signed by the IANA, which > means that it needs to know "a" current public key to validate the DNSKEY > response. Rather than publish 2^12 keys that might or might not be used in > the future, it seems to me that it has to be able to identify *a* public key > that has been used in the past and use that to get the current public > key. I agree with you. (I think that there is a substantive difference between 2^12 keys and 12 keys, and I think 2^12 was hyperbole... moving on) > It might be simplest to let resolver software bake whatever "current key" it > likes, I agree that having a path from the current key at time X to the current key at time Y is a good thing. I don't think it needs to be contained in . > perhaps among a set of candidates, into the software, and use that > public key to sign the NS request to the root. If the decrypted request is > recognized by the root, it replies to the DNSKEY request with the current > key. While the math says you can encrypt/decrypt with either "public" or "private" keys, the practice says otherwise. In particular, a) none of the root name servers have the private keys online b) the root name servers are run by 13 different entities, and each has 50+ anycast nodes operating as root name servers. c) thanks to DNSSEC there is significant thinking about having recursive resolvers take a copy of the root zone by <someprotocol>, and not bothering the root name servers as often. > The attack on that would be to flood the root with requests using whatever > key might be randomly generated (or no real key), consuming the root's > resource to validate signatures.
> I'm not sure I have a response to that, but > it probably comes down to a cheap way to identify candidates for validation > and ignore the rest. The attackers know the real public key (it's public), so they don't have to fake it. They could put random garbage in, which I think is your point. -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
Michael Richardson writes:
It seems that these issues exist if there are *any* keys generated before use, independently of the number of keys.
Yes, exactly, which makes me scratch my head every time someone proposes a list of pre-generated keys as the solution to this problem. It seems to me that what such a list gets you is lead time on cracking future keys, or more things that end up useless in the event some aspect of the whole process is found to have been faulty. This in exchange for the busywork of changing the current key more frequently without adding any real additional security in the process.
Hello, At 03:16 PM 16-03-2019, Dave Lawrence wrote:
It seems to me that what such a list gets you is lead time on cracking future keys, or more things that end up useless in the event some aspect of the whole process is found to have been faulty. This in exchange for the busywork of changing the current key more frequently without adding any real additional security in the process.
The first "trust anchor" was in use for around 10 years. Although it has not caused any security issue, it is better to have "key rotation". There have been discussions in DNSOP and in other venues about "cracking keys" but they were not about the KSK "private key". The current design was not driven by technical limitations of the HSMs used to store the cryptographic material. Having more "keys" might require changes to the design. That would open up an additional set of issues to consider. Regards, S. Moonesamy
S Moonesamy writes:
The first "trust anchor" was in use for around 10 years. Although it has not caused any security issue, it is better to have "key rotation".
Right, I completely agree that we should have regular key rotation and have previously offered my opinion that I'd like to see it once per year. I think that achieving it by rolling to a published list of pre-generated keys is a poor way of doing it.
Dave Lawrence <tale@dd.org> wrote: > Michael Richardson writes: >> It seems that these issues exist if there are *any* keys generated >> before use, independently of the number of keys. > Yes, exactly, which makes me scratch my head every time someone > proposes a list of pre-generated keys as the solution to this > problem. Interesting that we agree on a core assumption and then come to opposite conclusions :-) > It seems to me that what such a list gets you is lead time on cracking > future keys, or more things that end up useless in the event some > aspect of the whole process is found to have been faulty. This in > exchange for the busywork of changing the current key more frequently > without adding any real additional security in the process. I could live with a KSK being in use for a long period of time. But, I don't buy the lead time argument. If any of the N keys are vulnerable to brute force attack in the planned period of use, then all the keys are vulnerable to an adversary with 1/N more resources. Do you agree with this? Brute force is not the only attack: there are possible "Mission Impossible"-like exfiltration attacks against the HSM(s). Do these attacks depend upon how many keys there are? I don't think so. -- Michael Richardson <mcr+IETF@sandelman.ca>, Sandelman Software Works -= IPv6 IoT consulting =-
Hi Michael, I would like to disclose that I am one of the Crypto Officers. For the sake of transparency, I'll mention that my travel expenses for the last KSK Ceremony were sponsored by ICANN. Please let me know if you would like to have more information about that or anything else which might cause a potential conflict of interest. At 01:24 PM 17-03-2019, Michael Richardson wrote:
Brute force is not the only attack: there are possible "Mission Impossible"-like exfiltration attacks against the HSM(s). Do these attacks depend upon how many keys there are? I don't think so.
After the last KSK Ceremony, there was a discussion with the Root Zone Manager (Public Technical Identifiers) about the physical controls for the facility [1] where some of the HSMs are located. I took the concerns raised on the different threads [2] into account for that discussion. The issue, as I see it, is not whether an "exfiltration attack" could happen; it is whether it will be detected and publicly disclosed. Regards, S. Moonesamy 1. There are two facilities. I am commenting on the one which I have accessed. 2. As an example, https://mm.icann.org/pipermail/ksk-rollover/2019-February/000646.html
Hi, thank you for the reply and context. S Moonesamy <sm+icann@elandsys.com> wrote: > At 01:24 PM 17-03-2019, Michael Richardson wrote: >> Brute force is not the only attack: there are possible "Mission >> Impossible"-like exfiltration attacks against the HSM(s). Do these >> attacks >> depend upon how many keys there are? I don't think so. > After the last KSK Ceremony, there was a discussion with the Root Zone > Manager (Public Technical Identifiers) about the physical controls for > the facility [1] where some of the HSMs are located. I took the > concerns raised on the different threads [2] into account for that > discussion. The issue, as I see it, is not whether an "exfiltration > attack" could happen; it is whether > it will be detected and publicly disclosed. I am not addressing the absolute risk of exfiltration attacks, but rather asking if having more keys in the HSM causes a relative change to the risk of exfiltration attacks. More keys generated might mean that the HSM is unlocked more often, but I don't think this would be the case. My understanding is that the HSMs need to be accessed on a regular basis by the Security Officers anyway in order to sign new ZSKs. -- Michael Richardson <mcr+IETF@sandelman.ca>, Sandelman Software Works -= IPv6 IoT consulting =-
Hi Michael, At 07:59 AM 18-03-2019, Michael Richardson wrote:
I am not addressing the absolute risk of exfiltration attacks, but rather asking if having more keys in the HSM causes a relative change to the risk of exfiltration attacks.
The simple answer is no.
More keys generated might mean that the HSM is unlocked more often, but I don't think this would be the case. My understanding is that the HSMs need to be accessed on a regular basis by the Security Officers anyway in order to sign new ZSKs.
The HSMs on the West Coast (U.S.) are activated twice a year during scheduled KSK Ceremonies by using three out of the seven "OP" cards. Physical access to the HSMs (hardware device) is under the control of the Root Zone Manager. A KSK Ceremony takes more time (not more Ceremonies) if there are more "keys" to generate. The same number of KSK Ceremonies were held for the "keys" required for the roll-over process. I have requested authorization to attend events which might entail access to a security card and the Root Zone Manager agreed to those requests. Such events are usually scheduled within a day of a KSK Ceremony. Regards, S. Moonesamy
Michael Richardson writes:
But, I don't buy the lead time argument. If any of the N keys are vulnerable to brute force attack in the planned use of period, then all the keys are vulnerable to an adversary with 1/N more resources. Do you agree with this?
I completely agree with it. Of course, it does kind of gloss over the details, sidestepping just how high the resource bar is. I will of course also acknowledge that this applies equally well to the current key, both for and against my own position. What we're of course trying to find is the right balance. It is pretty straightforward to posit that if one key is crackable by an adversary of sufficient power in an amount of time that is not less than the defined rolling period, then multiplying the number of such keys extends the period of time that must be spent cracking them. I'm a bit perplexed by the claim that lead time can't possibly be relevant.
Brute force is not the only attack: there are possible "Mission Impossible"-like exfiltration attacks against the HSM(s). Do these attacks depend upon how many keys there are? I don't think so.
Right, but that was the other part of my point. So great, you generated 20 keys for the future, and Tom Cruise just stole the Rootsday Device that controls them all. Or Quantum Computerman finally had his breakthrough and *poof* cracking time plummets. What good has having made that list gotten you? We don't want those baked-in products using the list then either. The only thing pre-generating a bunch of keys seems to buy you is a known schedule that you can bake into products. While I won't say that's valueless, for my tastes it doesn't have enough value to be seen as a useful solution for this problem.
Dave Lawrence <tale@dd.org> wrote:
Yes, exactly, which makes me scratch my head every time someone proposes a list of pre-generated keys as the solution to this problem.
Right, pre-generated keys don't make any meaningful difference to the cryptographic security of the system: they are to do with operational preparedness. But it's arguable whether they help very much with that either :-) The underlying issue is that we don't have a bootstrap system. If we did, then it ought to be able to solve the device-on-a-shelf problem and the compromise problem. (And the time problem!) Currently the consensus seems to be, let vendors deal with the problem; or (like unbound-anchor) rely on the ICANN publication signing keys, which just moves the problem from one key to another more mysterious key. A few years ago I wrote down some ideas about having a set of third-party "witnesses" that can answer questions about the current root key and time. No witness is trusted by itself; instead a client only believes an answer if enough witnesses agree. A client is pre-configured with several long-lived witness IP addresses and keys. The client requires a quorum of some subset of its configured witnesses when it is bootstrapping. A few witnesses can fail for whatever reason (compromise, bankruptcy, ...) and clients can still bootstrap OK. So there's no single point of failure, and the lifetime of devices on shelves depends only on the attrition rate of witnesses. I originally thought of implementing this idea on top of DNS; then later https; now I think roughtime might be a good option. But the hard part is getting witnesses lined up and committing to provide the service. Tony. -- f.anthony.n.finch <dot@dotat.at> http://dotat.at/ Southeast Iceland: Southwesterly 5 to 7, occasionally gale 8 later. Moderate or rough, becoming very rough later. Rain then showers. Good, occasionally poor.
On 3/14/19, 08:58, "ksk-rollover on behalf of Michael Richardson" <ksk-rollover-bounces@icann.org on behalf of mcr+ietf@sandelman.ca> wrote:
I think it's reasonable that a live boot/install device do RFC5011 to update itself before reaching out to update software, but I don't think that we leave the chain of keys in place long enough for a 3-5 year LTS to be able to catch up.
(I altered the RFC number in the quoted text, assuming Automated Updates, RFC 5011, is meant. [...This is why I try to refrain from using document numbers as identifiers...] ;) )
(Why would a paranoid person use such an old release? There are a number of reasons I can think of, some of them involving investigation of other potentially compromised software tool chains. Is this enough justification? Maybe.)
The use case of having a device re-establish a secure state after an extended period of being disconnected is one of those "classic problems." I've heard this applied to devices in submarines that go radio-silent while the keys are rolled. And, for instance, from 10 years back, there is "DNSSEC Trust Anchor History Service": https://tools.ietf.org/html/draft-wijngaards-dnsop-trust-history-02

That resulted in a resource record type reservation, but the draft hasn't made it into an RFC: https://www.iana.org/assignments/dns-parameters/TALINK/talink-completed-temp...

There were considerable mail threads around that draft in the IETF.

My suggestion is to measure the incidence of the use case, and now would be the time to do so. The key has been rolled; how many re-connected devices will try to catch up? It isn't clear how to measure this, but it would be good to measure the volume of this use case. (Who has the data? What should be measured? Is there an objective measure of the need?) Anyone want to take this on?

If the impact is high enough, intent (Michael's use of 'paranoia') doesn't matter. Given that there have been attempts to address this in the past that have not come to fruition, if the impact is seen, we ought to redouble our efforts. With use cases quantified, the parameters ('3-5 years' from Michael's mail) would be based on what's real, which may precipitate a workable solution.
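The shelf-life problem the thread keeps circling can be made concrete. Under RFC 5011-style automated updates, a reconnecting validator can only adopt a new KSK it observes signed by a key it already trusts; once the old key has left the zone, the chain of custody is broken. The sketch below is illustrative only: the key names are invented, real dates are ignored, and the RFC 5011 hold-down timers and revocation handling are deliberately elided.

```python
# Illustrative sketch of why shelf life is bounded under automated
# trust anchor updates: a device can only walk forward through
# rollovers whose signing overlap it actually gets to observe.
def can_catch_up(trusted, zone_keys, signed_by):
    """trusted: set of key ids the device trusts when it reconnects.
    zone_keys: key ids currently published in the zone.
    signed_by: dict mapping each zone key to the ids that sign its
    DNSKEY RRset. Repeatedly adopt any published key signed by an
    already-trusted key (hold-down timers elided). Returns the final
    trusted set."""
    trusted = set(trusted)
    changed = True
    while changed:
        changed = False
        for k in zone_keys:
            if k not in trusted and trusted & set(signed_by.get(k, ())):
                trusted.add(k)
                changed = True
    return trusted

# Device shelved through two rollovers; the intermediate key is gone.
print(can_catch_up({"ksk-2010"}, {"ksk-2024"},
                   {"ksk-2024": ["ksk-2017"]}))
# -> {'ksk-2010'}: stranded, because it never observed ksk-2017
```

If instead the zone still published the intermediate key (or a history service could replay it), the same walk would succeed, which is exactly what the trust anchor history draft was trying to enable.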
As I also stated in the DNSSEC workshop, I support a regular root KSK rollover, annually but no less often than every two years; we need to develop muscle memory for rolling the key. Also, if the removal of the old key tomorrow is uneventful, then I think it would be worthwhile to roll the key in 6 months while our memory is still fresh; this may force those who update manually to adopt automated mechanisms. As for the unexpected increase in DNSKEY queries, as I said, it looks very interesting, but if there were real user or application problems behind it, they would have been fixed by now, and in my view the increase is probably not end-user or application impacting. Just plain old hardcoding ;-) Jacques
-----Original Message----- From: ksk-rollover <ksk-rollover-bounces@icann.org> On Behalf Of Yoshiro YONEYA Sent: March 13, 2019 5:33 PM To: ksk-rollover@icann.org Subject: [ksk-rollover] followup of DNSSEC Workshop at ICANN64
Hi all,
During the DNSSEC Workshop at ICANN64, there was discussion regarding future KSK rollovers.

https://64.schedule.icann.org/meetings/961939

This is a followup to what I said.

I support regular Root Zone KSK rollover for operational maturity and DNS software maturity. The important thing is doing it regularly. The frequency may be once per 2-3 years, and less than every 5 years.
-- Yoshiro YONEYA
_______________________________________________ ksk-rollover mailing list ksk-rollover@icann.org https://mm.icann.org/mailman/listinfo/ksk-rollover
Thanks again for yet another great DNSSEC workshop in Kobe! Let me chime in and recap what I said at the meeting.

I'm for regular rolling of the root KSK: more often than every 5 years, which is too long to keep institutional and operational memory, but not every year, which would just be too much churn. Since we're not in any hurry, I would use some time to look more into the strange increases we've seen, but it is not something that keeps me up at night.

With regards to online standby keys, they need to be seen in a holistic way. What threats or scenarios are those keys trying to mitigate? Do they actually provide the security we think they do? E.g. if the active and standby keys are generated in the same HSM, that is no protection from an HSM compromise. What new vulnerabilities do published standby keys pose? With all the lessons learned since 2010, let's go back to defining the problem we're trying to solve, rather than having standby keys as a solution looking for a problem.

Med venlig hilsen / Best regards

Erwin Lansing
Head of Security & Chief Technologist
DK Hostmaster A/S
Erwin Lansing via ksk-rollover <ksk-rollover@icann.org> wrote:

> With regards to online standby keys, it needs to be seen in a holistic
> way. What threats or scenarios are those keys trying to mitigate? Do
> they actually provide the security we think they do? E.g. if the active
> and standby keys are generated in the same HSM, it is no protection
> from an HSM compromise. What new vulnerabilities do published standby
> keys pose? With all the lessons learned since 2010, let's go back to
> defining the problem we're trying to solve, rather than having standby
> keys as a solution looking for a problem.

Pre-published keys let us embed anchors into devices/firmware that might sit on shelves or in boxes for a few years. They also let us install operating systems that are not the most recent (a Long-Term-Support release) in order to reproduce systems that are in production. Of course, we want to update these things with patches, but that requires DNS, and if we are going to take the view that DNSSEC should always be on, then we need it to be on during patching.

That's the problem statement. (And there are solutions other than pre-published keys.)

--
Michael Richardson <mcr+IETF@sandelman.ca>, Sandelman Software Works
 -= IPv6 IoT consulting =-
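An embedded anchor ultimately means the device must check the root DNSKEY it fetches against the value it shipped with. A minimal sketch of that check follows; the digest construction is deliberately simplified (a real DS digest hashes the owner name in wire format plus the full DNSKEY RDATA per RFC 4034), and the key material here is invented, not the real root KSK.

```python
# Simplified sketch of the first-boot trust anchor check a shelved
# device performs: compare a digest of the fetched root DNSKEY against
# the anchor baked into firmware at manufacture time.
import hashlib

def anchor_matches(dnskey_rdata: bytes, baked_in_digest: str) -> bool:
    """True if the published key hashes to the embedded anchor digest."""
    return hashlib.sha256(dnskey_rdata).hexdigest() == baked_in_digest

# Illustrative key material only.
key = b"example-root-ksk-public-key"
anchor = hashlib.sha256(key).hexdigest()   # what the vendor baked in

print(anchor_matches(key, anchor))           # -> True
print(anchor_matches(b"rolled-key", anchor)) # -> False
```

The `False` case is exactly the shelf problem in this thread: after a rollover, the published key no longer matches the baked-in digest, and without some bootstrap mechanism the device has no way to decide whether it is looking at a legitimate successor or an attacker's key.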
participants (15)
- Dave Lawrence
- Edward Lewis
- Erwin Lansing
- Fred Baker
- Fred Baker
- Jacques Latour
- Keith Mitchell
- Michael Richardson
- Michael Richardson
- Ondrej Filip
- S Moonesamy
- Shane Kerr
- Tony Finch
- Warren Kumari
- Yoshiro YONEYA