Colleagues: Sorry for coming in late to this discussion, but I am generally supportive of the direction of this thread. However, if I could play devil's advocate for a moment, one challenge that lies before us in the development of any reconsideration/review mechanism "with teeth" is: what filters or tests should be required in order to access the mechanism? My concern is that, if we do not give thought to this at the outset, then -every- substantive decision made by the Board or Staff will be reviewed. At least once. If we assume that someone, somewhere will disagree with the initial outcome, how do we guard against an environment in which no decision is final until all paths for appeal have been exhausted? Thanks, J.

On 3/5/15, 15:20, "Bruce Tonkin" <Bruce.Tonkin@melbourneit.com.au> wrote:
Hello Greg,
Where the SSRP went awry was in its actual results. I'm not prepared to say this was a design flaw or a process flaw. But the results flabbergasted many people. Somehow it seemed to mutate into a "bad eyesight similarity review," since the only two "positives" were one where "i" gets confused with "l" and one where "rn" gets confused with "m". Meanwhile, singulars were not similar to plurals. So "hotels" is a similar string to "hoteis" but not to "hotel". "Fascinating," as the late Mr. Spock might say.
But -- there's no recourse for results, unless a process was not followed. So all of this stands. In a similar vein, it became apparent to many that all of the Objection processes should have an appeal mechanism, if only to deal with inconsistent results (though I tend to think it should be able to revisit the merits of each case as well). This is absolutely a design flaw in my mind. So, I think we can embrace the fact that these processes resulted from multistakeholder activities without believing that this makes them perfect or unassailable.
Yes one of the original goals from the new gTLD policy was:
"New generic top-level domains (gTLDs) must be introduced in an orderly, timely and predictable way."
The predictable bit to me would be that if you formed another panel of experts on string similarity, the results would be basically the same. I don't think the current iteration of the string similarity test meets that requirement yet. As an engineer, we would say that we want the results to be deterministic, i.e. you get the same result each time you run the process.
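To illustrate what a deterministic review would look like, here is a minimal sketch, not ICANN's actual SSRP methodology, of a rule-based visual-confusability check: each string is normalised through a fixed, hypothetical confusable-character map and the results are compared. The map entries and function names are assumptions for illustration; the point is only that the same inputs always yield the same answer, matching the "hotels"/"hoteis" outcome described above.

```python
# Hypothetical confusable-character map (illustrative, not ICANN's rules).
# Multi-character entries like "rn" -> "m" model the "rn reads as m" case.
CONFUSABLES = {
    "rn": "m",  # "rn" can be mistaken for "m"
    "l": "i",   # "l" can be mistaken for "i"
    "1": "i",   # digit one can be mistaken for "i"
}


def normalise(s: str) -> str:
    """Lower-case, then apply longer substitutions before shorter ones."""
    s = s.lower()
    for src, dst in sorted(CONFUSABLES.items(), key=lambda kv: -len(kv[0])):
        s = s.replace(src, dst)
    return s


def visually_similar(a: str, b: str) -> bool:
    """Deterministic check: identical inputs always give the same verdict."""
    return normalise(a) == normalise(b)
```

Under this sketch, `visually_similar("hotels", "hoteis")` is true while `visually_similar("hotels", "hotel")` is false, reproducing the pattern Greg describes: the rule set may be debatable, but re-running the process cannot change the outcome.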
It is clear that in some cases applicants or concerned members of the community wanted to get a "second opinion" and that was not available in the process. The challenge then is what to do when the second opinion differs from the first. How do you ensure the second step has more expertise, rigour, due diligence, etc., such that it should override the first opinion?
Regards, Bruce Tonkin
_______________________________________________
Accountability-Cross-Community mailing list
Accountability-Cross-Community@icann.org
https://mm.icann.org/mailman/listinfo/accountability-cross-community