An Example of a Rubric for Grading Regular Exams: Approaches to Scoring Rubrics (4)
In my previous three posts, I have discussed the nature of grading rubrics used for essay writing in regular exams. In this final installment of the series, I will share the grading rubric I always use for regular exams.
Please note that this rubric is specifically for grading regular exams. As I've mentioned in previous posts, my top priorities for writing tasks in regular exams are grading efficiency and the sustainability of assigning such tasks.
This rubric may not fully align with the ideal principles of rubric assessment. Even so, I believe that continuing to assign tasks with a simplified, imperfect grading method contributes more to students' improvement in English than searching for a flawless rubric that complicates grading or raises the psychological barrier for teachers to assign tasks at all.
Example of a Grading Rubric
Here is the rubric I use for writing tasks in regular exams.
I print this rubric on the exam paper itself, and I also share it with students in advance by including it in the exam guide.
Three-tier Evaluation
Although the rubric is nominally a four-tier evaluation, D applies only to responses with an overwhelmingly insufficient word count or unrelated content, so in practice it is a three-tier ABC evaluation. (For D, either 2 or 3 points is given depending on the situation, and 1 point may be given for a response of less than a sentence.)
Although Grade A covers scores of 10 and 9, I usually treat 9 as the effective maximum; a 10 is rare. This is to show that there is always room for improvement. Even if an answer is grammatically flawless and logically sound, there is always a way to express the ideas more effectively, so full marks are reserved as exceptional recognition for truly outstanding answers.
If in doubt, choose the middle score.
Since it's effectively a three-tier evaluation of "well-written," "fair," and "poorly written," I have set middle scores for cases of uncertainty.
For example, if A were worth 4 points and B worth 3, choosing between A and B could be difficult. Setting a middle score permits the straightforward rule "if in doubt, choose the middle score," which improves grading efficiency.
This is why the scale is [9, (8), 7, (6), 5], where the parenthesized values are the middle scores. If the scale were [10, 7, 4], a grader torn between 10 and 7 would face further indecision between 9 and 8. The tier scores are therefore spaced so that each middle score is always a single specific number.
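The scoring scale above can be sketched as a small decision helper. This is only a minimal illustration of the idea; the function name and dictionary layout are my own, not part of the rubric itself.

```python
# Tier scores are spaced two points apart so that each
# "middle score" is a single specific integer.
TIER_SCORES = {"A": 9, "B": 7, "C": 5}
MIDDLE_SCORES = {("A", "B"): 8, ("B", "C"): 6}


def assign_score(grade, unsure_between=None):
    """Return the tier score for a grade, or the middle score
    when the grader is torn between two adjacent grades."""
    if unsure_between is not None:
        return MIDDLE_SCORES[unsure_between]
    return TIER_SCORES[grade]
```

A grader confident in an A simply records 9; a grader hesitating between A and B records the middle score 8 and moves on, which is exactly what keeps grading fast.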
"No matter who evaluates it"
Each descriptor in the rubric begins with the phrase "no matter who evaluates it." While some variation between "strict" and "lenient" graders (i.e., teachers) is inevitable, framing each descriptor around whether different teachers would reach a similar evaluation helps ensure a degree of fairness among graders.
If a teacher wonders, "Am I the only one who would deduct points for this?" then choosing the middle score is a simple solution. This not only improves grading efficiency but also makes the scores feel fairer from the students' perspective.
Meeting the task requirements/demands
The rubric is designed to be reusable for different writing tasks by using the versatile description "meeting the task requirements/demands."
The annotation notes that "task requirements include any requirements specified by the class instructor in class," which allows instructors to grade strictly on the points they emphasized without worrying too much about alignment with other teachers.
For example, a teacher who emphasized topic sentence formation can grade strictly on that aspect. If handwriting neatness was emphasized, messy handwriting can receive a strict evaluation. Again, if a teacher feels they are the only one deducting points for a particular issue, opting for the middle score for a milder deduction is advisable.
Burden on the reader
For the evaluation of grammatical and pragmatic aspects, I use the expression whether or not it is "burdensome" to the reader.
While it is common to differentiate between global errors that significantly affect meaning and local errors that do not, this distinction can be ambiguous, and perceptions vary among readers. Additionally, aligning grading standards among teachers for specific grammatical errors or untaught grammar points can be challenging.
Whether something burdens the reader is a subjective judgment, but ultimately, if the content can be understood without difficulty on a first reading, it should be acceptable. This criterion does not require that graders agree exactly, but it secures a reasonable degree of fairness among them.
Meeting all of the criteria, or two or more of them
For the desirable evaluation criteria, fulfilling all of them unquestionably merits an A, and "generally" fulfilling them merits a B. Conversely, for the undesirable criteria, fitting all of them warrants a D, and fitting two or more merits a C.
I personally find the deliberately vague expression "generally" crucial for the B evaluation. "Generally" meeting the criteria can be interpreted in two ways: either all three criteria are met to some extent, or only two of the three are sufficiently met. (If only one desirable criterion is met, the response fits two undesirable criteria, resulting in an evaluation of C or lower.)
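The counting logic above can be sketched as a simple tally. This is illustrative only and the function name is my own: it captures the "two of three sufficiently met" reading of "generally," but not the "all three met to some extent" reading, which a bare count cannot express.

```python
def grade_from_criteria(met, total=3):
    """Map how many desirable criteria a response fully meets
    to a letter grade (a simplified tally of the rubric logic)."""
    if met == total:
        return "A"  # all desirable criteria met
    if met == total - 1:
        return "B"  # "generally" met: two of three sufficiently met
    # Meeting one or none means fitting two or more undesirable
    # criteria, so the evaluation is C (or lower, handled separately).
    return "C"
```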
No correction or comments as a rule
Finally, as stated in the rubric's annotation, I do not, as a rule, provide corrections or comments for writing tasks in regular exams.
This keeps the labor of grading and correcting from raising the hurdle to assigning essay writing tasks in the first place.
Even if the errors observed in regular exam writing are corrected, it is doubtful that the corrections will be internalized by learners. Given the pressure of regular exams, these slips are likely performance "mistakes" rather than competence "errors." Thus, I do not feel it is worth the effort to provide written corrective feedback on regular exam writing.
Sustainable writing evaluation
As I mentioned at the beginning, the most important thing is to keep assigning essay writing tasks in regular exams, which means prioritizing grading methods that require as little time and effort as possible.
Some may criticize this rubric as overly simplistic and unfair to students. I believe that no evaluation or feedback is perfect. Allowing students to work on two essays, even with a somewhat rough evaluation, is more beneficial for improving their abilities than providing meticulously accurate evaluations for a single essay.
(This is an English version of my previous post, 定期考査の採点用ルーブリックの一例(採点用ルーブリックのあり方④).)