Re: [RSSAC Caucus] [Non-DoD Source] Re: FOR REVIEW: Requirements for Measurements of the Local Perspective on the Root Server System
Hi, Paul. Thanks for the feedback.

The reasoning behind these latency measurements is to have some context for interpreting latency to root server instances. For example, a slow "last mile" link would likely have effects on both root server and open resolver latencies. The work party decided on using popular open resolvers as a reference latency target since they are a well-deployed service and have similar processing times (non-recursive DB lookup). The exact math for such an analysis is not described in this document, but the reasoning is laid out at the end of section 2.1, under item #4 in the list of "measurements of interest".

If you think that additional text is required, I would be glad to propose something and work with you to address this.

Thanks!
-Ken

On 8/24/21, 2:38 PM, "rssac-caucus on behalf of Paul Hoffman" <rssac-caucus-bounces@icann.org on behalf of paul.hoffman@icann.org> wrote:

On Aug 20, 2021, at 4:44 AM, Andrew McConachie <andrew.mcconachie@icann.org> wrote:
>
> Dear RSSAC Caucus Members,
>
> Thanks to everyone who contributed comments and suggestions to the document.
>
> Ken and I worked through them all and have produced a version for the final 48-hour review. A PDF is also attached for your review.
> <Caution-https://docs.google.com/document/d/11slZDTqrcwTwywpbi3JwHuU_FoaoN54u0f3B2UFj...>
>
> Please review and provide any final comments by Tuesday August 24th.

Greetings again. I was not active in the work party, but I have done some review passes on the document late in the process. I still have a deep concern about the part in the middle where timings of some sort are taken from open public resolvers. I put a comment in this near-final document, but have not seen any significant discussion.

If RSSAC wants to do these probes, it would be really good if their use was explained in the document.

--Paul Hoffman
On Aug 25, 2021, at 8:53 AM, Renard, Kenneth D CTR USARMY DEVCOM ARL (USA) <kenneth.d.renard.ctr@army.mil> wrote:
Hi, Paul. Thanks for the feedback.
The reasoning behind these latency measurements is to have some context for interpreting latency to root server instances. For example, a slow "last mile" link would likely have effects on both root server and open resolver latencies. The work party decided on using popular open resolvers as a reference latency target since they are a well-deployed service and have similar processing times (non-recursive DB lookup). The exact math for such an analysis is not described in this document, but the reasoning is laid out at the end of section 2.1, under item #4 in the list of "measurements of interest".
If you think that additional text is required, I would be glad to propose something and work with you to address this.
It's not just text: I think additional analysis is needed. In the eventual tool, let's assume that from a particular point a user gets times of 25 ms to Cloudflare, 35 ms to GPDNS, 80 ms to OpenDNS, and 50 ms to Quad9. What value would be used for the "last mile"? The mean of those? The median? Saying "are intended to be aggregated" indicates that we (I think correctly) don't know how to estimate a base latency. --Paul Hoffman
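Paul's question can be made concrete with a few lines. This is purely illustrative (the round-trip times are his hypothetical example, and which aggregate to use is exactly the open question):

```python
# Paul's example round-trip times to popular open resolvers, in milliseconds.
# Which single "last mile" figure to derive from them is the open question.
import statistics

rtts_ms = {"Cloudflare": 25, "GPDNS": 35, "OpenDNS": 80, "Quad9": 50}

print(statistics.mean(rtts_ms.values()))    # 47.5
print(statistics.median(rtts_ms.values()))  # 42.5
print(min(rtts_ms.values()))                # 25 (nearest reference point)
```

Each choice yields a different "base latency", which is the point: the document, as written, does not say which to use.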
On Aug 25, 2021, at 10:38 AM, Paul Hoffman <paul.hoffman@icann.org> wrote:
It's not just text: I think additional analysis is needed. In the eventual tool, let's assume that from a particular point a user gets times of 25 ms to Cloudflare, 35 ms to GPDNS, 80 ms to OpenDNS, and 50 ms to Quad9. What value would be used for the "last mile"? The mean of those? The median? Saying "are intended to be aggregated" indicates that we (I think correctly) don't know how to estimate a base latency.
--Paul Hoffman
I don't agree that additional analysis is needed, nor do I think this document needs to specify rules or formulas for calculating last mile latency, at this time. While those things might be really nice to have, I don't think we have the collective will to come to agreement on that in any reasonable amount of time.

I think it will have to suffice to leave the interpretation of any reference latency measurements to the party performing the data analysis. Since this is all new we don't have to get it right the first time. If it turns out to be wrong or useless or under-specified then we can revise the document after acquiring some experience.

DW
On 27/08/2021 00:07, Wessels, Duane via rssac-caucus wrote:
I don't agree that additional analysis is needed, nor do I think this document needs to specify rules or formulas for calculating last mile latency, at this time. While those things might be really nice to have, I don't think we have the collective will to come to agreement on that in any reasonable amount of time.
I think it will have to suffice to leave the interpretation of any reference latency measurements to the party performing the data analysis. Since this is all new we don't have to get it right the first time. If it turns out to be wrong or useless or under-specified then we can revise the document after acquiring some experience.
I agree. My recollection is that we made a conscious decision not to include a methodology for extracting a last-mile baseline in this initial version.

Ray
On Fri, Aug 27, 2021 at 4:23 AM Ray Bellis <ray@isc.org> wrote:
I agree. My recollection is that we made a conscious decision not to include a methodology for extracting a last-mile baseline in this initial version.
It's also useful as an intuitive / manual check. We can figure out the exact [median|mean|n-th percentile] later.

If the latency to various root servers is around 300 ms, and the latency from that same network is 250 ms to Cloudflare, 350 ms to GPDNS, 800 ms to OpenDNS, and 500 ms to Quad9, then it suggests that that network has other issues, and those should be investigated before assuming that a closer root server instance would help.

If the latency to various root servers is around 300 ms, and the latency from that same network is 2.5 ms to Cloudflare, 3.5 ms to GPDNS, 8.0 ms to OpenDNS, and 5.0 ms to Quad9, then it suggests that that network could benefit from closer root-server instances, and also that it is likely that they can be deployed[0].

I cannot easily provide a proof of the above[1], but intuitively it seems correct,
W

[0]: Actually, I suspect that in that case, there are other issues that need to be investigated - I find it unlikely that there would be that wide a latency spread without some other confounding factors, but I wanted to be able to reuse the numbers from above :-)

[1]: Although I'm sure I could handwave some sort of plausible-sounding statement about X standard deviations away from the mean of Y measurements (after discarding Q outliers) against Z well-connected public servers.
Ray

_______________________________________________
rssac-caucus mailing list
rssac-caucus@icann.org
https://mm.icann.org/mailman/listinfo/rssac-caucus

By submitting your personal data, you consent to the processing of your personal data for purposes of subscribing to this mailing list in accordance with the ICANN Privacy Policy (https://www.icann.org/privacy/policy) and the website Terms of Service (https://www.icann.org/privacy/tos). You can visit the Mailman link above to change your membership status or configuration, including unsubscribing, setting digest-style delivery or disabling delivery altogether (e.g., for a vacation), and so on.
-- The computing scientist’s main challenge is not to get confused by the complexities of his own making. -- E. W. Dijkstra
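Warren's two scenarios amount to comparing root-server latency against a reference latency built from the resolver measurements. A rough sketch of that manual check (the `interpret` helper, the median, and the 2x threshold are all arbitrary illustrations, not anything the document specifies):

```python
import statistics

def interpret(root_ms, resolver_rtts_ms):
    # Reference latency from well-connected public resolvers; the median is
    # one arbitrary choice among the aggregates discussed in the thread.
    reference = statistics.median(resolver_rtts_ms)
    if root_ms <= 2 * reference:  # threshold chosen only for illustration
        return "network-wide latency; investigate the last mile first"
    return "root latency far above reference; a closer instance may help"

# Warren's first example: everything from that network is slow.
print(interpret(300, [250, 350, 800, 500]))
# Warren's second example: resolvers are fast, roots are not.
print(interpret(300, [2.5, 3.5, 8.0, 5.0]))
```

The sketch only formalizes the intuition above; picking the real aggregate and threshold is the analysis the document deliberately leaves open.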
Median or percentile, yes. Mean, no.

On Fri, Aug 27, 2021 at 10:20 AM Warren Kumari <warren@kumari.net> wrote:
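The objection to the mean shows up directly in Warren's first set of numbers: one slow resolver (800 ms) pulls the mean well above the typical value, while the median barely moves. A quick check, for illustration:

```python
import statistics

samples_ms = [250, 350, 800, 500]    # Warren's numbers; 800 ms is an outlier

print(statistics.mean(samples_ms))   # 475 -- dragged up by the outlier
print(statistics.median(samples_ms)) # 425.0 -- robust to it
```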
On Aug 26, 2021, at 4:07 PM, Wessels, Duane <dwessels@verisign.com> wrote:
I don't agree that additional analysis is needed, nor do I think this document needs to specify rules or formulas for calculating last mile latency, at this time. While those things might be really nice to have, I don't think we have the collective will to come to agreement on that in any reasonable amount of time.
I think it will have to suffice to leave the interpretation of any reference latency measurements to the party performing the data analysis. Since this is all new we don't have to get it right the first time. If it turns out to be wrong or useless or under-specified then we can revise the document after acquiring some experience.
Given this, having the document say "the interpretation of any reference latency measurements is left to the party performing the data analysis" would help clarify the lack of specificity about how the analysis would be done.

--Paul Hoffman
On Fri, Aug 27, 2021 at 3:20 PM Paul Hoffman <paul.hoffman@icann.org> wrote:
Given this, having the document say "the interpretation of any reference latency measurements is left to the party performing the data analysis" would help clarify the lack of specificity about how the analysis would be done.
Huh. That's a good point[0], and something which we should add. What's the process to add something like that at this point? It seems obvious and not controversial, so...

W
I don't see the importance of putting any specific statement on how the analysis should be done. It can only be known at the time of doing exploratory analysis of the data. Any suggestion on how the data analysis should be done needs to be informed at least by conducting exploratory analysis on historical data.

Dessalegn

On Fri, Aug 27, 2021 at 10:25 PM Warren Kumari <warren@kumari.net> wrote:
I have added Paul's statement to bullet #4 near the end of section 2.1 as a suggestion. I am conflicted about this since we already state "Analysis of the collected data is open-ended and not described in this document" at the end of the introduction. Looking for work party input on the suggested addition in section 2.1.

I would really like to see this document ready for vote at the 7 September RSSAC meeting. I think this requires a 1-week stable period before the meeting.

https://docs.google.com/document/d/11slZDTqrcwTwywpbi3JwHuU_FoaoN54u0f3B2UFj...

Thank you all, for the discussion!
-Ken

On Sat, Aug 28, 2021 at 6:36 AM Dessalegn Yehuala <mequanint.yehuala@gmail.com> wrote:
I don't see the importance of putting any specific statement on how the analysis should be done. It can only be known at the time of doing exploratory analysis of the data. Any suggestion on how the data analysis should be done needs to be informed at least by conducting exploratory analysis on historical data.
Dessalegn
participants (8)

- Dessalegn Yehuala
- Ken Renard
- Paul Hoffman
- Ray Bellis
- Renard, Kenneth D CTR USARMY DEVCOM ARL (USA)
- Steve Crocker
- Warren Kumari
- Wessels, Duane