Security Incident Reporting and c-root incident
Dear Caucus and WP members, This is Alt from JPNIC. As you may already know, c-root is reported to be experiencing routing and XFR issues, affecting the ongoing DNSSEC algorithm rollovers of gov. and int. In my opinion this issue itself does not have a material impact on the entire RSS; however, if our security incident reporting document were in effect at this time, would this be eligible to be reported? RSOs are free to report any incident at their own discretion regardless of this document, but I just wonder whether this kind of issue, affecting multiple TLD operations, could fall within the "what to report" (nearly equal to required/mandatory) definition of this document. -- 大谷 亘 Wataru Ohgai alt@nic.ad.jp 一般社団法人日本ネットワークインフォメーションセンター JPNIC Japan Network Information Center
On Wed 2024-05-22 20:06:14+0900 Wataru wrote:
As you may already know, c-root is reported to be experiencing routing and XFR issues, affecting the ongoing DNSSEC algorithm rollovers of gov. and int.
In my opinion this issue itself does not have a material impact on the entire RSS; however, if our security incident reporting document were in effect at this time, would this be eligible to be reported?
Without more details, my personal opinion (ie no hats) is that it would not be. I spent a few minutes googling to try to find any references to this, but couldn't. So it seems there is no material impact to the RSS. I'm assuming the routing issue is external to C-Root (ie not due to their router being compromised or something like that), so any reporting on this would be informational (and optional) reporting from C-Root directly. Regards, Robert USC Information Sciences Institute <http://www.isi.edu/> Networking and Cybersecurity Division
Hi Robert, On May 22, 2024, at 17:17, Robert Story <rstory@ant.isi.edu> wrote:
On Wed 2024-05-22 20:06:14+0900 Wataru wrote:
As you may already know, c-root is reported to be experiencing routing and XFR issues, affecting the ongoing DNSSEC algorithm rollovers of gov. and int.
In my opinion this issue itself does not have a material impact on the entire RSS; however, if our security incident reporting document were in effect at this time, would this be eligible to be reported?
Without more details, my personal opinion (ie no hats) is that it would not be. I spent a few minutes googling to try to find any references to this, but couldn't. So it seems there is no material impact to the RSS.
It seems like the problems with c.root-servers.org (note, .org) have no material impact to the root server system. However, the fact that C-Root has been failing to keep up with new revisions of the root zone as they are published for some period of time seems material. On the DNS-OARC dns-operations mailing list there are reports of two top-level domain DNSSEC algorithm rolls whose timing has been impacted, for example, so it doesn't seem to be much of a stretch to say that there's potential for security-related consequences of whatever this mishap turns out to be, even if they are minor. I am not familiar with the work that Wataru mentioned and I don't know how "security incident" is defined, but I think Wataru's question is reasonable. I know you didn't mean to suggest that spending a few minutes searching for impact is a sufficient criterion for judging whether an incident has occurred, but we have metrics defined in RSSAC002 that relate directly to serving stale data; those metrics for C are surely well beyond the expected values over this event. Perhaps it's an idea to use those metrics as quantitative measures of impact? Joe
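[A minimal outside-in sketch of the kind of quantitative staleness check Joe suggests, not the RSSAC002 measurement itself. It assumes dnspython is installed; the server addresses are the published root server IPv4 addresses (verify against root-servers.org), and it leans on the fact that root zone serials are date-encoded as YYYYMMDDNN:]

    # Compare the root zone SOA serial served by two root server letters.
    # A lagging serial shows directly how stale the served zone is.
    import datetime
    import dns.message
    import dns.query
    import dns.rdatatype

    ROOTS = {
        "a.root-servers.net": "198.41.0.4",   # published IPv4 addresses;
        "c.root-servers.net": "192.33.4.12",  # confirm before relying on them
    }

    def soa_serial(ip):
        """Ask a server directly (no recursion needed) for the root SOA serial."""
        q = dns.message.make_query(".", dns.rdatatype.SOA)
        resp = dns.query.udp(q, ip, timeout=3.0)
        for rrset in resp.answer:
            if rrset.rdtype == dns.rdatatype.SOA:
                return rrset[0].serial
        raise RuntimeError("no SOA in answer from %s" % ip)

    def serial_date(serial):
        """Root zone serials are YYYYMMDDNN; recover the date portion."""
        s = str(serial)
        return datetime.date(int(s[:4]), int(s[4:6]), int(s[6:8]))

    serials = {name: soa_serial(ip) for name, ip in ROOTS.items()}
    newest = max(serials.values())
    for name, serial in serials.items():
        lag = (serial_date(newest) - serial_date(serial)).days
        print("%s: serial %d (%d day(s) behind newest seen)" % (name, serial, lag))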
Hi Joe, comments in-line, below... All opinions are my own personal thoughts (again, no hats). On Wed 2024-05-22 19:22:55+0200 jabley@strandkip.nl wrote:
It seems like the problems with c.root-servers.org (note, .org) have no material impact to the root server system.
However, the fact that C-Root has been failing to keep up with new revisions of the root zone as they are published for some period of time seems material. On the DNS-OARC dns-operations mailing list there are reports of two top-level domain DNSSEC algorithm rolls whose timing has been impacted, for example, so it doesn't seem to be much of a stretch to say that there's potential for security-related consequences of whatever this mishap turns out to be, even if they are minor.
I am not familiar with the work that Wataru mentioned and I don't know how "security incident" is defined, but I think Wataru's question is reasonable.
The work being referenced is the Security Incident Reporting work party, and the document is here: https://docs.google.com/document/d/1NvSw7PoLGYhXPuMEjiBgqjCtp_khTGGEh0DaHkNJ... I completely agree that the question is reasonable, and I was merely stating my opinion based on my feel for the way the document has been progressing.
I know you didn't mean to suggest that spending a few minutes searching for impact is a sufficient criterion for judging whether an incident has occurred, but we have metrics defined in RSSAC002 that relate directly to serving stale data; those metrics for C are surely well beyond the expected values over this event. Perhaps it's an idea to use those metrics as quantitative measures of impact?
The statement of work for the SIR WP explicitly states that 'the work party should focus on security incidents that have a *material adverse effect* on the root service.' The working party is carefully avoiding tying hard numbers or rules to whether an incident qualifies as 'reportable', avoiding trying to imagine whether any particular scenario qualifies, and explicitly stating that the decision is left to the RSO(s). Based on the information I have at the moment, my personal opinion is that this incident wouldn't qualify for security incident reporting as defined in the document. Other interesting questions are: - what is the impact of stale data being served from some or all of the instances of a single RSO? Does it depend on how old the stale data is? - what would the impact have been if the rollovers had proceeded? The answers to those questions, or other additional information, could possibly sway my opinion. Regards, Robert USC Information Sciences Institute <http://www.isi.edu/> Networking and Cybersecurity Division
Hi Robert, I just recently joined the work party, so apologies if I’m missing some context. On May 22, 2024, at 4:14 PM, Robert Story <rstory@ant.isi.edu> wrote:
The work being referenced is the Security Incident Reporting work party, and the document is here:
https://docs.google.com/document/d/1NvSw7PoLGYhXPuMEjiBgqjCtp_khTGGEh0DaHkNJ...
I completely agree that the question is reasonable, and I was merely stating my opinion based on my feel for the way the document has been progressing.
I know you didn't mean to suggest that spending a few minutes searching for impact is a sufficient criterion for judging whether an incident has occurred, but we have metrics defined in RSSAC002 that relate directly to serving stale data; those metrics for C are surely well beyond the expected values over this event. Perhaps it's an idea to use those metrics as quantitative measures of impact?
The statement of work for the SIR wp explicitly states that 'the work party should focus on security incidents that have a *material adverse effect* on the root service.'
I guess this gets into the definition of “root service”. While it’s arguably true that this most recent incident did not impact _resolution_ service, I gather .GOV and .INT (prudently) delayed completing their key changes until it was resolved. If you assume that “root service” includes enabling zone maintenance, such as changing keys, it would seem that this incident did indeed have a "material adverse effect” on root service.
The working party is carefully avoiding tying hard numbers or rules to whether an incident qualifies as 'reportable', avoiding trying to imagine whether any particular scenario qualifies, and explicitly stating that the decision is left to the RSO(s).
This seems kind of superfluous: as with all things RSOs, it is always the decision of individual RSOs to play or not as they see fit.
Based on the information I have at the moment, my personal opinion is that this incident wouldn't qualify for security incident reporting as defined in the document.
If you’re talking about the RSS SIR Working Document, section 4.2 states: "Data integrity refers to the "correctness" of the data in responses generated by the RSS. […] Examples of reportable incidents that affect Integrity: * Any part of the RSS serving incorrect data for the root zone” Providing stale data would appear to me to be “serving incorrect data for the root zone."
Other interesting questions are:
- what is the impact of stale data being served from some or all of the instances of a single RSO? Does it depend on how old the stale data is?
Yes, it depends on how old the stale data is, but I don’t think it would be a good idea to try to quantify this. Back in August 2018, the operators of “C” misconfigured a firewall in a way that blocked zone transfers. IIRC, this misconfiguration was noticed when the operators of the RU ccTLD notified IANA that their DS wasn’t updated at “C” (I believe after people complained to them). In this scenario, given caching, etc., it would probably be difficult to draw a line around “too old.”
- what would the impact have been if the rollovers had proceeded?
Potentially a repeat of the .RU issue. Regards, -drc
Let me add that the CVSS scoring documentation (I know this is not a CVE) says to assume the vulnerable configuration. So, in this context, when assessing the risk and the severity of the incident, we should assume that the key rollover might already have started and ask what the impact of delayed updates to a single instance of the root server would have been. We should not just shrug because the luck in timing was on our side this time. Frankly, it’s also a bit worrying that Cogent had to be alerted by a third party (and the other related bits reported on dns-operations), so I think this deserves a full post-mortem as the bare minimum. Cheers, -- Ondřej Surý (He/Him)
On 22. 5. 2024, at 23:20, David Conrad <david.conrad@layer9.tech> wrote:
While it’s arguably true that this most recent incident did not impact _resolution_ service, I gather .GOV and .INT (prudently) delayed completing their key changes until it was resolved.
On Thu 2024-05-23 00:11:36+0200 Ondřej wrote:
So, in this context, when assessing the risk and the severity of the incident, we should assume that the key rollover might already have started and ask what the impact of delayed updates to a single instance of the root server would have been. We should not just shrug because the luck in timing was on our side this time.
Yes, and this is what RSO(s) would have to consider for making a decision on whether or not an incident would be a 'reportable security incident'.
Frankly, it’s also a bit worrying that Cogent had to be alerted by a third party (and the other related bits reported on dns-operations), so I think this deserves a full post-mortem as the bare minimum.
I agree. During the SIR work party calls, the idea of 'informational' reporting has come up quite a few times. Perhaps the caucus might take that up in a future work party. Regards, Robert USC Information Sciences Institute <http://www.isi.edu/> Networking and Cybersecurity Division
On Wed 2024-05-22 17:20:13-0400 David wrote:
If you’re talking about the RSS SIR Working Document, section 4.2 states:
"Data integrity refers to the "correctness" of the data in responses generated by the RSS. […] Examples of reportable incidents that affect Integrity: * Any part of the RSS serving incorrect data for the root zone”
Providing stale data would appear to me to be “serving incorrect data for the root zone."
I can see that argument, but I can also see an argument that stale, formerly correct data is not as big a deal as unauthorized modification to bad data. Does stale data from 1 RSO have a 'materially adverse effect' on the RSS? At any rate, this is exactly why the work party is trying very hard not to get into the details of every possible scenario and depends on the RSO(s) to make the call. Regards, Robert USC Information Sciences Institute <http://www.isi.edu/> Networking and Cybersecurity Division
Robert, On May 22, 2024, at 6:23 PM, Robert Story <rstory@ant.isi.edu> wrote:
I can see that argument, but I can also see an argument that stale, formerly correct data is not as big a deal as unauthorized modification to bad data.
A bit of a red herring, but I don’t think the reason the data served by a root server is wrong matters that much to “the public” when they’re looking to be informed of "any potential security incidents that might affect [the RSS’s] proper functioning”.
Does stale data from 1 RSO have a 'materially adverse effect' on the RSS?
We seem to be attempting to split the “materially adverse effect on the RSS” hair. To me, this was an externally visible event that impacted the planned activities of two TLD operators. I’d note that in the last similar incident, Cogent self-reported. It is surprising to me that this would not be considered a reportable incident. Section 4.5 speaks to severity of incidents. I could see an argument that this most recent incident could be considered a lower severity, but not reporting it would seem odd to me.
At any rate, this is exactly why the work party is trying very hard not to get into the details of every possible scenario and depends on the RSO(s) to make the call.
It is obviously impossible to list the details of every possible scenario, so I’d have assumed there would be guidelines to help inform which incidents should be reported, e.g., “was the incident externally visible”, “did the incident result in sustained resolution failure”, etc. More generally, I worry that depending on self-reporting of potentially embarrassing incidents won’t be particularly supportive of goal 5 of the SOW (“5. Maintain/improve confidence in the RSS by providing incident reporting.”) if stuff that is externally visible isn’t reported on. Regards, -drc
On Wed 2024-05-22 19:03:58-0400 David wrote:
To me, this was an externally visible event that impacted the planned activities of two TLD operators. I’d note that in the last similar incident, Cogent self-reported. It is surprising to me that this would not be considered a reportable incident. Section 4.5 speaks to severity of incidents. I could see an argument that this most recent incident could be considered a lower severity, but not reporting it would seem odd to me.
I'm not saying it shouldn't be reported, just that, in my personal opinion, in this instance it is debatable.
It is obviously impossible to list the details of every possible scenario, so I’d have assumed there would be guidelines to help inform which incidents should be reported, e.g., “was the incident externally visible”, “did the incident result in sustained resolution failure”, etc.
Both of those guidelines lead down rat holes: visible to how many people, and sustained failures for how many people? I'll also note that at various times in the history of the document there were such guidelines (from me and others), but they have been pared back to be less specific over time.
More generally, I worry that depending on self-reporting of potentially embarrassing incidents won’t be particularly supportive of goal 5 of the SOW (“5. Maintain/improve confidence in the RSS by providing incident reporting.”) if stuff that is externally visible isn’t reported on.
I totally agree here. But this specific question for this work party is about what constitutes a 'reportable security incident'. I mentioned 'informational' reporting in an earlier email. There has also been talk about a 'transparency' report, but again the work party has decided it's not in scope for this document. The work party is ongoing, so I invite folks to make suggestions on the document and participate in the calls! Regards, Robert USC Information Sciences Institute <http://www.isi.edu/> Networking and Cybersecurity Division
rstory> I can see that argument, but I can also see an argument that rstory> stale, formerly correct data is not as big a deal as unauthorized rstory> modification to bad data. Does stale data from 1 RSO have a rstory> 'materially adverse effect' on the RSS? The issue isn't just "data". It's key/DNSSEC data that would have been stale. It affected rollovers, which need a safe overlap to avoid causing validation failures. I'd call that material enough to warrant a report. Certainly, the folks running .INT and .GOV are concerned enough to alter their schedules. A report isn't a fine or a criminal offense. The RSO group is an operator group. Operations work better with advance warning, reports, RCAs, and mitigations against future failures. That includes incident reports where the material outage was highly possible but averted.
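[To make the "safe overlap" point concrete, a back-of-the-envelope timing sketch in the spirit of RFC 7583. The helper and the numbers are illustrative assumptions, not taken from .GOV's or .INT's actual plans:]

    # After a new DS is published at the parent, the child must wait until
    # every parent server serves it AND caches of the old DS set have expired
    # before dropping the old algorithm's signatures. One lagging parent
    # server pushes the earliest safe switch out by the length of its lag.
    from datetime import datetime, timedelta

    def earliest_safe_switch(ds_published, parent_propagation, ds_ttl,
                             safety_margin=timedelta(hours=1)):
        return ds_published + parent_propagation + ds_ttl + safety_margin

    published = datetime(2024, 5, 20, 12, 0)
    normal = earliest_safe_switch(published,
                                  parent_propagation=timedelta(minutes=30),
                                  ds_ttl=timedelta(days=1))
    stalled = earliest_safe_switch(published,
                                   parent_propagation=timedelta(days=3),
                                   ds_ttl=timedelta(days=1))
    print(normal)   # 2024-05-21 13:30 with minutes-scale propagation
    print(stalled)  # 2024-05-24 13:00 with one server three days behind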
On Wed 2024-05-22 17:18:58-0600 Paul wrote:
Operations work better with advance warning, reports, RCAs, and mitigations against future failures.
I agree with you on this.
That includes incident reports where the material outage was highly possible but averted.
That's not the conclusion the work party has reached so far for this particular document. Feel free to join in on the fun and convince folks otherwise! Regards, Robert USC Information Sciences Institute <http://www.isi.edu/> Networking and Cybersecurity Division
ebersman> That includes incident reports where the material outage was ebersman> highly possible but averted. rstory> That's not what the conclusion has been by the work party so far rstory> for this particular document. Feel free to join in on the fun rstory> and convince folks otherwise! Every ops job I've ever had, I valued anything that told me what worked and what didn't work. Metrics and measurement are a key part of the RSOs' responsibility. Surely such incident reports are a valuable metric.
On Wed 2024-05-22 17:51:54-0600 Paul wrote:
ebersman> That includes incident reports where the material outage was ebersman> highly possible but averted.
rstory> That's not the conclusion the work party has reached so far rstory> for this particular document. Feel free to join in on the fun rstory> and convince folks otherwise!
Every ops job I've ever had, I valued anything that told me what worked and what didn't work.
Metrics and measurement are a key part of the RSOs' responsibility. Surely such incident reports are a valuable metric.
Yes, they are. But the scope of work for this document is for reporting on security incidents that had a material impact on the RSS. Possible or averted events are out of scope for this work party. Also, the document explicitly states that RSO(s) may publish reports of their own for any type of event. So regardless of whether or not an RSO decided to submit a security incident report to the future RSS governance system, it could independently publish an incident or after-action report. Regards, Robert USC Information Sciences Institute <http://www.isi.edu/> Networking and Cybersecurity Division
Robert, On May 23, 2024, at 9:00 AM, Robert Story <rstory@ant.isi.edu> wrote:
Yes, they are. But the scope of work for this document is for reporting on security incidents that had a material impact on the RSS. Possible or averted events are out of scope for this work party.
As mentioned, this was an externally visible event that impacted the activities of two TLD operators. What is your definition of “material impact on the RSS”? Thanks, -drc
I agree with this. Even before DNSSEC, emergency root zone changes to add or remove an NS record or to change glue records could be service affecting if delayed. There have always been occasional delays, but we should be measuring so as to manage them. p vixie
On Thu 2024-05-23 09:04:53-0400 David wrote:
As mentioned, this was an externally visible event that impacted the activities of two TLD operators.
What is your definition of “material impact on the RSS”?
[nitpick: I left out the word 'adverse' in my last email.] This is exactly the question that the work party has been wrestling with, and this situation will hopefully stimulate some useful discussions on the next call. But to date, we've avoided any specific definition, leaving it up to the RSOs. I don't think it's productive to continue debating whether or not my personal opinion of an event I'm not involved with and have very limited information about is right or wrong. I encourage everyone to read the document and scope of work, join future calls and/or make comments on the document if they feel changes are needed. Regards, Robert USC Information Sciences Institute <http://www.isi.edu/> Networking and Cybersecurity Division
May be of interest (about the c-root server incident): https://arstechnica.com/security/2024/05/dns-glitch-that-threatened-internet... Dessalegn
Focusing on the "stale data" point: On May 22, 2024, at 14:20, David Conrad <david.conrad@layer9.tech> wrote:
If you’re talking about the RSS SIR Working Document, section 4.2 states:
"Data integrity refers to the "correctness" of the data in responses generated by the RSS. […] Examples of reportable incidents that affect Integrity: * Any part of the RSS serving incorrect data for the root zone”
Providing stale data would appear to me to be “serving incorrect data for the root zone.”
The question of "how long is it acceptable to serve a version of the root zone after the RZM has issued a new version" was debated extensively during the development of RSSAC047. The recommended threshold in that document is 65 minutes, over the course of an entire month. That is, an RSO would fail this metric only if its publication latency is worse than 65 minutes averaged over the approximately 60 zones published in a month. --Paul Hoffman
Paul, correct me if I'm wrong, but the reported delay was 3 full days, right? That's about 4320 minutes just this month, and divided by 60 that's about 72 minutes on average? Ondřej -- Ondřej Surý (He/Him) ondrej@sury.org
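[Checking that arithmetic, under the simplification that every other publication in the month lands instantly (that baseline is an assumption; the 65-minute threshold and ~60 publications per month are from RSSAC047 as Paul describes):]

    # One zone version stuck for 3 days, averaged over ~60 publications/month.
    publications_per_month = 60
    threshold_minutes = 65

    latencies = [0] * 59 + [3 * 24 * 60]   # 59 on-time, one delayed 4320 min
    avg = sum(latencies) / publications_per_month
    print("monthly average: %.0f min (%s vs %d min threshold)"
          % (avg, "FAIL" if avg > threshold_minutes else "PASS", threshold_minutes))
    # -> monthly average: 72 min (FAIL vs 65 min threshold)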
Robert, everyone, Sorry, I should have attached some references the first time. Let me do so now. 1. c-root has been experiencing zone transfer/update and routing issues: https://lists.dns-oarc.net/pipermail/dns-operations/2024-May/022558.html 2. The new algorithm 13 DS for gov. (maintained by Cloudflare) is not appearing in data from c-root, so they decided to temporarily suspend the algorithm rollover until this issue is fixed: https://lists.dns-oarc.net/pipermail/dns-operations/2024-May/022566.html 3. The same for int. (managed by ICANN): https://lists.dns-oarc.net/pipermail/dns-operations/2024-May/022573.html Just to make it clear, the purpose of posting this question is not to suggest that c-root do or not do something, as people are doing on other MLs, but to think about: - whether this ongoing issue is somehow related to the Security Incident Reporting doc - any missing requirements or suggestions we should add to the doc before finalizing it (if there are any) This is just an example of how we may be able to gain something for the doc. --- alt (from iPhone)
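[For anyone who wants to reproduce the symptom in references 2 and 3, a small sketch that asks one letter directly whether the new algorithm-13 DS is visible. It assumes dnspython is installed; 192.33.4.12 is c.root-servers.net's published IPv4 address, worth re-checking before use:]

    # Does this root server serve an algorithm-13 DS for the TLD yet?
    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    def ds_algorithms(tld, server_ip):
        q = dns.message.make_query(tld, dns.rdatatype.DS)
        q.flags &= ~dns.flags.RD          # authoritative query, no recursion
        resp = dns.query.udp(q, server_ip, timeout=3.0)
        return sorted({rd.algorithm
                       for rrset in resp.answer
                       if rrset.rdtype == dns.rdatatype.DS
                       for rd in rrset})

    for tld in ("gov.", "int."):
        algs = ds_algorithms(tld, "192.33.4.12")
        print("%s DS algorithms at C: %s (alg 13 %s)"
              % (tld, algs, "present" if 13 in algs else "MISSING"))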
On Thu 2024-05-23 08:28:04+0900 Wataru wrote:
Just to make it clear, the purpose of posting this question is not to suggest that c-root do or not do something, as people are doing on other MLs, but to think about: - whether this ongoing issue is somehow related to the Security Incident Reporting doc - any missing requirements or suggestions we should add to the doc before finalizing it (if there are any)
This is just an example of how we may be able to gain something for the doc.
I agree, and if nothing else we've gotten a few more people to read the doc and possibly motivated them to participate in the work party. Regards, Robert USC Information Sciences Institute <http://www.isi.edu/> Networking and Cybersecurity Division
participants (10)
- David Conrad
- Dessalegn Yehuala
- jabley@strandkip.nl
- Ondřej Surý
- Paul Ebersman
- Paul Hoffman
- Paul Vixie
- Robert Story
- Wataru "Alt" Ohgai
- Wataru Ohgai (iPhone)