Tests and exams are a cornerstone of the traditional math class. Making a test is a lot of work, although some parts can be recycled each semester. Marking a paper test, though, is always a long slog: the effort scales almost linearly with the number of students, and it needs to be done quickly to give students relevant feedback. Yet insightful marking comes from a slow and careful review of the students’ work. A fully digital test takes a lot of work up front but is mostly marked by the system; you then have to match and review all of the rough work, or simply trust that a correct final answer means the student understands the material. Discussion grading offers a compromise.
Why Test at all?
The point of testing students is to efficiently determine what they understand about a topic. One-on-one meetings would be by far the most effective way to do this, but the time, effort, and inconsistency of that process make it nearly impossible at scale. On the flip side, you could use a multiple-choice test where students get all or no marks, which leaves little room for students to express what they do understand. Newer digital assessment systems can auto-grade complex mathematical expressions and graphs, but someone still has to go through and evaluate the students’ process and rough work.
To get the best of both worlds, use a digital test to quickly determine what students clearly know, then employ one-on-one meetings to evaluate the rest. With good randomization and in-person testing, we get reasonable mass assessment, and can dedicate our time to working directly with the students who need it. Focusing on students this way can even take less time than traditional marking.
Consider a 50-mark midterm for a math class of 100 students. At around 10 minutes per test (your mileage may vary greatly), marking takes roughly 17 hours. A decent fraction of that time is spent reviewing correct answers and blank ones. Not to mention you will probably meet with a good percentage of students anyway, and spend time handing back and discussing their tests.
With a digital test, correct short answers are handled automatically, leaving only incorrect and blank answers to be reviewed. (Essay-style or picture questions would need manual grading either way.) A typical class average of around 70% leaves only 30% of the test to be re-evaluated. At least half of that is probably due to a Case 1 Assessment Error, where the student simply doesn’t know the concept and has no reason to dispute the grade. In theory this leaves around 15%, or 15 students’ worth of work, to meet with. For a 2-hour paper test, a verbal assessment might take a half-hour conversation, giving a final estimate of 7.5 hours of meetings instead of 17 hours of marking. Email requests can resolve smaller typos and mistakes. That still leaves a hefty margin for any sort of cleanup, and in practice fewer than half of those remaining students actually meet with you.
Importantly, all of this time is spent working directly with students rather than in a dark basement wielding a red pen. Staring at a paper asking “Do they deserve 2 out of 5?” can quickly become “I’m going to give you 3 out of 5” after a little conversation with the actual student. It is also a huge boost to the quality of life of both the student and the professor.
Having the student right there to see and appreciate the process can be much more rewarding for the professor. With discussion grading you are working directly with the student to determine what they know about the material, rather than guessing it from what they wrote. When they have an “Ah ha!” moment, you know they’ve got it. They will appreciate the work you put into grading and can share that appreciation directly with you.
The first key to discussion grading is to employ an automatic booking system, such as the one built into Canvas. Trying to schedule meetings with students through email or over the phone will lead to confusion and missed appointments. An automated system that shares your availability and lets students book themselves will make the whole process much smoother.
In your discussions, let the student determine why and how an answer is wrong. When hand-grading paper tests, the professor typically points out exactly where the student went wrong and what the correct steps are. That takes away a great deal of the learning that can come from reviewing a test. Let the student come to their own conclusions and navigate their mistake on their own. By listening to them walk through their process, you hear them explain what they misunderstood, which makes it easier to assign the correct grade. This can also add significant learning for the student long after the actual test is over.
Discussion grading can seem exclusionary, since it only reviews the work of the students who communicate with you. If you feel requiring a discussion is unnecessary, the remaining tests can be reviewed at your leisure once the keen students have been seen. Part of making this successful, though, is creating a space where students feel comfortable approaching you.
None of this is possible without good, solid rough work throughout the test. A quick check to ensure students are providing rough work, and perhaps a small grade for doing so, can go a long way. Students don’t plan to get questions wrong, so optional rough work can seem unnecessary during the test. Requiring it creates a fail-safe: students naturally do better by having notes on their process, writing out each step helps them avoid careless errors, and worst case they’ll have the work to discuss afterward.
So next time you’re planning a test, consider giving discussion grading a try. Chances are it will save you some time; worst case, you’ll get to meet a lot of your students. Remember, the point of a course is to learn the content; the test is just a fast and efficient way to determine what students know. Discussion grading is a happy medium between cold, hard computerized grading, hand-grading, and verbal one-on-one exams.