As emailed to Provost Alan Harrison, 20 March 2012:
Dear Dr Harrison,
I write to alert you to a serious problem in student assessment that we face as a University.
On numerous occasions I have been involved in ranking students for provincial and national organizations. From approximately 2000 to 2003 I was a member of an NSERC panel evaluating studentships and postdoctoral fellowships, and I am now chairing an OGS panel for doctoral studentships. The criteria vary slightly from year to year, but academic performance indicators (i.e., marks) are an important component of the evaluation for these awards: for MSc students, marks account for 50-60% of the final evaluation; for PhD students, around 40%. Even for postdoctoral fellowships, marks count for roughly 20% of the final score.
As you may know, marking schemes vary across universities in Canada and abroad. Many universities provide percentage marks, but Alberta, for instance, assigns marks out of 12, France out of 20, and Greece out of 10. Such numerical marks are not difficult to convert into percentages. The real difficulty begins when only letter grades are assigned. Whatever scale these grades refer to, the assessment is crude: a letter grade always signifies an ambiguous range of percentage marks, so there is no accurate way to convert it into a percentage. If A+ represents 90-100%, the evaluators enter 90%, although the true mark might have been 99%. If A- represents 80-85%, we enter 80%, although it might have been 85%. So in addition to the imprecision, students whose institutions assign letter grades are most often shortchanged by the conversion. Approximately 55-60% of applicants will be successful; but when the lowest average mark is 84% and the highest roughly 96%, letter grades make it impossible to assess a file, and everything becomes a tie.
When it is impossible to distinguish between students on the basis of their marks, one is thrown back on other criteria such as research. But assessing research potential is very difficult, especially for Master's students who have not done much yet. A student may stand out for being the seventh or eighth author on a minor publication. It is unfair to privilege a student on such a basis over others who may have significantly better percentage marks; yet when everything from 85 to 89 is simply an "A," such marginal indicators are all we have to go on. This is unfair to our students, who deserve our support.
In short, marking is a very important part of teaching that we must not allow to be corrupted.
I was on sick leave when the marking-system changeover was instituted at Queen's. In my correspondence with the Registrar's office, I have been told that the change was not designed to accommodate our new software, but was instituted to coordinate with other universities that use letter grades. Even if that is so, why should we reduce our system to match the weakest? As professors who can clearly see the shortcomings of the letter-grade system when we attempt to rank students for awards, it is up to us to show leadership; we should not have to wait for the students to complain.
As it happens, the student group QueensYOU is already bringing this problem to our attention, and it deserves our full support. We have always provided percentage marks, at least in the sciences. Why corrupt our system now? If, as the Registrar says, we are not constrained by the software, then it should be quite possible to keep a record of both letter and percentage marks.
Thank you very much for your attention to this important matter.
Leda Raptis, Ph.D.
Department of Microbiology and Immunology
and Dept. of Pathology
Botterell Hall, Room 713