# Summer 2012 Research, Part 1a: Bonus content on immediate feedback during an exam

This is a quick follow-up to my previous post on my research related to the effect of immediate feedback during exams.

I love it when I’m going through my “to read” pile of papers and realize that there is something in there related to one of my own research questions. There is a paper from last year (Phys. Rev. ST Physics Ed. Research 7, 010107, 2011) by Fakcharoenphol, Potter and Stelzer from the University of Illinois at Urbana-Champaign that looked at how students did on matched pairs of questions as part of preparing for an exam.

There’s a lot of interesting stuff in this paper, but the result most relevant to my own research is the following. They developed a voluntary, web-based exam-preparation tool where students would do a question, receive feedback as either just the answer or a full solution, then do a matched question for which they received the other type of feedback. They divided the students into four groups so that every student had equal exposure to answer vs. solution feedback across the questions. For each matched question pair (let’s call them questions A and B), the grouping allowed half of the students to answer question A first and the other half to answer question B first. Within each of those groups, half of the students received answer-only feedback on their first question and the other half received solution feedback on their first question.
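A hypothetical sketch of that counterbalancing, with group labels and data structure of my own invention (the paper may organize its groups differently):

```python
from itertools import product

# Build the four counterbalanced groups described above: which question
# of a matched pair is seen first, and which feedback type the first
# question gets (the second question then gets the other type).
groups = []
for first_question, first_feedback in product(["A", "B"], ["answer", "solution"]):
    second_question = "B" if first_question == "A" else "A"
    second_feedback = "solution" if first_feedback == "answer" else "answer"
    groups.append({
        "first": (first_question, first_feedback),
        "second": (second_question, second_feedback),
    })

for g in groups:
    print(g["first"], "->", g["second"])
```

Across the four groups, every question appears in both positions and with both feedback types, which is what lets the first question of each pair serve as a clean baseline.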

They called the first question of a pair answered by the students their baseline, and the students scored 58.8 ± 0.2% on those questions. Keeping in mind that they had many pairs of questions, the average performance of the students on the follow-up questions was 63.5 ± 0.3% when only the answer was supplied after answering the first question, and 66.0 ± 0.3% when the solution was provided after answering the first question. The differences between all of these numbers are statistically significant, but the gains from receiving the feedback are not overly impressive. More on this in a moment.
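As a rough check on those differences, here is a back-of-the-envelope z-score for each gain, assuming the quoted uncertainties are standard errors on (approximately) independent estimates; the paper’s own analysis may well be more careful than this:

```python
import math

def z_score(p1, se1, p2, se2):
    """Approximate z-score for the difference between two percentages,
    treating se1 and se2 as standard errors of independent estimates."""
    return (p2 - p1) / math.sqrt(se1**2 + se2**2)

# Numbers quoted above from Fakcharoenphol, Potter and Stelzer (2011)
print(z_score(58.8, 0.2, 63.5, 0.3))  # answer-only gain: z is roughly 13
print(z_score(58.8, 0.2, 66.0, 0.3))  # solution gain: z is roughly 20
```

With standard errors that small, even these modest gains of a few percentage points come out wildly significant, which is consistent with the paper reporting significant differences despite the unimpressive effect size.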

Back to my own research. During an exam, I used matched pairs of questions and gave the students feedback on their first question (in the form of just the answer) before they answered the second question. I saw a statistically significant improvement from the first question (65.3 ± 6.8%) to the second one (77.5 ± 6.0%), but with such low statistics there was not much to conclude other than that this research was worth pursuing further. The results from the UIUC folks set the magnitude scale for the effect I should expect to see once I am able to improve my statistics (58.8 ± 0.2% to 63.5 ± 0.3% due to answer-only feedback).

I’m really not certain whether to expect less, equal or more improvement for my “feedback during an exam” design than for their “feedback while preparing for an exam” design. In their design, the level of preparation of the students when using the study tool is all over the map (they look at this in more detail in their paper), so it is not known if the learning effect due to the feedback depends on when during their overall study plan they were using the tool (e.g., as a starting point for their studying vs. to check their understanding after having done a bunch of studying). And since both of our designs use multiple-choice questions (but under preparation vs. assessment conditions), I am not certain how guessing plays into everything.

I have to admit that if my future research into the effect of feedback during an exam finds only a 5% gain from this intervention (like UIUC found for their answer-only feedback), I doubt I would continue with the practice.
