action items from last call and ALAC performance indicators
Hi,

Looking through the action items from last month's call <https://st.icann.org/euralo/index.cgi?action_items_17_march_2009>, I see I have two.

First: "New gTLD Draft Applicant Guidebook, Version 2 (V2): A Peake will rework a draft statement based on that information on behalf of EURALO and submit it to the discussion list. 30 April is the deadline for the EURALO comments, and then the statement will be sent to the ICANN board."

Did I say I'd do this? I've not been following the Applicant Guidebook closely, so I'm surprised I volunteered for anything on this. Note, the ALAC has just voted to support a statement on the guidebook developed from the Summit WG statement <https://st.icann.org/working-groups/index.cgi?at_large_gtld_working_group_st...>. All voted in favor. The ALAC also recently made a statement on the GNSO non-commercial stakeholder group <http://forum.icann.org/lists/sg-petitions-charters/msg00020.html> that EURALO may wish to consider.

Second action item: "ALAC Review WG Draft Final Report: A Peake will draft a statement on behalf of EURALO and post it on the EURALO discussion list."

I have a note of this one, but I thought I said I would try to make time to summarize any changes from the earlier reports we'd discussed and highlight issues I thought we should take note of for possible comment. I am pretty certain I didn't say I would draft a statement.

Anyway, my apologies, I haven't done whatever it was. I started, then got distracted by work, life and other things. Perhaps ironic, given the two incomplete actions by my name: I have volunteered to join a working group to draft a set of performance indicators for ALAC members and liaisons. See the email below.

I'd like to know what RALO members think about ways to judge ALAC members' performance. Stepping back a bit, I think we need to begin with a job description, an agreed set of tasks and goals to judge performance against.
At the moment performance is judged on participation <http://www.atlarge.icann.org/alac/performance.htm> using three measures: ALAC conference calls, ICANN conferences and ALS accreditation votes. Liaisons are also asked to submit liaison reports, which are recorded.

The current indicators are quantitative: they tell you if a person turns up, but they give no indication of any work done (I could have a 100% record of attending meetings, calls and votes, but never comment on a list, sleep during the calls except when I need to wake to vote, etc.). How can we introduce more qualitative measures, while remembering that ALAC members and liaisons are volunteers?

Speak to you soon.

Thanks,

Adam
Date: Mon, 30 Mar 2009 14:49:47 -0400
To: ALAC Working List <alac@atlarge-lists.icann.org>
From: Alan Greenberg <alan.greenberg@mcgill.ca>
Subject: [ALAC] Performance Indicators
In Mexico City, I was asked to lead a group developing a draft set of performance indicators for ALAC members and Liaisons.
I would suggest that this include RALO leadership, but purely from the point of view of their interaction with the ALAC. Specifically, there are some tasks related to taking information from the ALAC and sending it on to ALSs, and getting information back, that might be handled either by the RALO-appointed ALAC members from the region or by the RALO leadership. I don't think the ALAC cares which it is in each case, as long as SOMEONE is doing it.
A similar committee was created in Cairo, but it never met, and so we are starting again. The members of the original group were
We need one WG member from each region. Cheryl and Adam have already volunteered, so we now need a volunteer from LACRALO and one from AFRALO.
Alan
_______________________________________________ ALAC mailing list ALAC@atlarge-lists.icann.org http://atlarge-lists.icann.org/mailman/listinfo/alac_atlarge-lists.icann.org
At-Large Online: http://www.atlarge.icann.org ALAC Working Wiki: http://st.icann.org/alac
Hi Adam, On Apr 21, 2009, at 3:58 PM, Adam Peake wrote:
I'd like to know what RALO members think about ways to judge ALAC members' performance.
Stepping back a bit, I think we need to begin with a job description, an agreed set of tasks and goals to judge performance against. At the moment performance is judged on participation <http://www.atlarge.icann.org/alac/performance.htm>
using 3 measures: ALAC conference calls, ICANN conferences and ALS accreditation votes. Liaisons are also asked to submit liaison reports which are recorded.
Current indicators are quantitative - they tell you if a person turns up, but they don't give any indication of any work done (I could have a 100% record of attending meetings, calls and votes, but never comment on a list, sleep during the calls except when I need to wake to vote, etc.)
How can we introduce more qualitative measures, while remembering ALAC members and liaisons are volunteers?
If social science history is any indication, it might be difficult to agree on a fixed set of qualitative measures by working abstractly and deductively. But you could probably arrive at some by working inductively. Why not take a set of xyz important decisions (I guess this would have to include non-decisions, divided decisions, non-inclusive decisions, etc.) that the ALAC has had to make in the past year, look at the discussions and process dynamics leading up to them, and see if you can identify some patterns, good/not-so-good practices, etc. that could enable you to define measures? Of course, it would have to be done with some sensitivity, i.e. on a non-personalized basis, but unless you look at how the group actually functions (or doesn't), defining contextually relevant qualitative measures could be a frustrating exercise.

Two cents,

BD