Peer Instruction Fail

Posted: September 20, 2011
(This is not a criticism of Peer Instruction. It is just a tale of one of those times when the right idea failed to gain traction.)
Yesterday we tackled a classic parabolic motion conceptual question in class, “three ships”, shown below.
They had answered this question in their pre-class “reading” (actually I’m using smartPhysics, so they watched a multimedia presentation). I am teaching two sections (35 students each) and will just combine their data here. On the first in-class vote, they were 51% correct. After discussion (5 minutes in one section, about 10 minutes in the other), they revoted and were only 59% correct, a normalized learning gain of 0.16 for those who like to use learning gains on clicker questions.
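For anyone who wants to check that 0.16 figure, here is a minimal sketch of the standard normalized-gain calculation (the function name and rounding are mine, not from the post):

```python
def normalized_gain(pre_pct, post_pct):
    """Normalized learning gain: fraction of the possible
    improvement (100 - pre) that was actually achieved."""
    return (post_pct - pre_pct) / (100 - pre_pct)

# The vote went from 51% to 59% correct across the revote.
gain = normalized_gain(51, 59)
print(round(gain, 2))  # 0.16
```

The denominator is the room left to improve, so even a modest 8-point bump from a 51% start only yields a gain of about 0.16.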
And the discussion was super animated. It seemed so productive, but I guess it was really more about the two sides (B vs. C) digging their heels in.
And this does happen on occasion with Peer Instruction. You see 50% correct on the initial vote and smile to yourself, thinking that the subsequent peer discussion is going to be a good one, but then sometimes the number of correct answers barely budges on the revote.
In the case of this question, it seems like I need some scaffolding clicker questions or other activities leading up to it. Perhaps this scaffolding won’t improve the initial vote, but it might give them more points of reference and examples to use in their discussions, helping the peer instruction really do its job.
Before coming to this question, we spent 20-30 minutes developing position and velocity graphs in the horizontal and vertical directions, starting from a motion diagram of a basketball shot, but did not talk explicitly about time at all during the sequence. Then we discussed the clicker question for the horizontal-projectile vs. ball-drop demo, and they were nearly unanimous in getting that question correct. But it seems like more scaffolding is still in order.
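The physics behind that demo (and behind the time question the students struggled with) can be sketched in a few lines; this is just standard free-fall kinematics, not anything from smartPhysics itself:

```python
import math

def time_of_flight(height_m, g=9.8):
    """Time to fall a height h starting with zero vertical velocity:
    h = (1/2) * g * t**2, so t = sqrt(2h / g)."""
    return math.sqrt(2 * height_m / g)

# A dropped ball and a horizontally launched ball from the same
# height share the same vertical motion, so they land together:
# the horizontal velocity never appears in time_of_flight.
t = time_of_flight(1.2)
print(f"fall time from 1.2 m: {t:.2f} s")
```

Since the horizontal speed never enters the expression, the flight time depends only on the vertical drop, which is exactly the point the nearly unanimous demo vote suggested the students had grasped in that simpler context.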