How I use clickers in my courses

Note: This post was originally part of a post about getting as many students as possible involved in coursewide discussions supported by clicker questions. That post quickly grew way too large, so I decided to start with a post on how I use clickers in my courses to set up the later post on coursewide discussions.

I have used clickers in six of my courses since 2009: five were intro-level Physics and the other was a 3rd-year (that’s Canadian for Junior-level) Quantum Mechanics course. I am at a point where I feel like I really get how (I like) to use clickers in my classroom thanks to practice, reflection and helpful resources along the way (Mazur’s PI book, CWSEI resources, Derek Bruff’s blog and book). I have collected a ton of questions (my favorites: Mazur’s PI questions, CU-SEI, OSU PER), and for my most commonly taught courses I now know what sort of response distributions to expect for different questions and can use this to move the class in different directions. I have also developed the salesmanship needed to generate student buy-in (“the research shows that teaching method X will help you learn more”), which makes everything a lot easier.

There are a ton of resources out there discussing the why and how of using clickers, so I won’t go into it here. The resources I listed above are some good starting points.

Modified Peer Instruction:

Most of the time I use a flexible/lazy/modified version of Mazur’s Peer Instruction where I get the students to initially discuss the question at their table (usually 3 students) and then vote. If roughly 40-70% of the answers are correct, I get them to find somebody not at their table who voted differently than them and discuss their answer with that person. I usually don’t show them the histogram after the first vote, but sometimes I will if two or more answers have a similar vote count and I think that seeing the distribution will help guide their discussion by focusing it on only a couple of choices. Then I get them to revote. Either way, once I get over roughly 70% correct answers I tell them that most of them agree and then solicit student explanations for their answers.
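For concreteness, here is a rough Python sketch of that decision flow. The 40% and 70% thresholds are the approximate ones described above rather than hard rules, and the below-40% branch (re-teaching the concept before revoting) is just one reasonable option that I didn’t spell out here:

    def next_step(fraction_correct):
        """Decide what to do after the first clicker vote, following the
        modified Peer Instruction flow described above."""
        if fraction_correct > 0.70:
            # Most of the class agrees: tell them so, solicit student
            # explanations for the answer, then move on.
            return "solicit explanations and move on"
        elif fraction_correct >= 0.40:
            # 40-70% correct: find somebody not at your table who voted
            # differently, discuss, then revote. Optionally show the
            # histogram if two or more answers have similar vote counts.
            return "peer discussion with someone who voted differently, then revote"
        else:
            # Below ~40%: not prescribed above; re-teaching the concept
            # before revoting is one common choice (an assumption here).
            return "revisit the concept before revoting"

For example, next_step(24/36) would send the class into a round of peer discussion followed by a revote.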

Common types of questions that I use:

Overall, the most common type of question I use is what Mazur calls a ConcepTest: a question that tests the application of one or more concepts and has only one correct answer. Typically the ConcepTests from Mazur’s book are too challenging for my students to answer correctly without some bridging questions. Fortunately I came across the OSU PER group’s clicker question sequences, which are sequences of 3-5 conceptual questions that all ask the students to apply the same concept, starting with a relatively easy application and building toward a challenging one. The challenging questions tend to be of similar difficulty to Mazur’s questions and sometimes actually are Mazur’s questions.

Some of the other most common types of questions that I use are discussed briefly below. Like the clicker question types you wouldn’t put on a test, discussed by Derek Bruff, many of these question types wouldn’t make much sense as multiple-choice questions on an exam, but they have their own specific purposes in my classroom:

  • Predict what will happen questions before doing a demo – Based on Sokoloff and Thornton’s Interactive Lecture Demonstrations, I set up and explain the idea behind a demo and then get them to predict what will happen when I run it. It is a well-known issue that students don’t always see what we want them to see when we show a demo, and they will even misremember what they see in a way that matches up with their existing conceptual (mis)understanding of the phenomenon being demonstrated. Basically, if they have to flex a bit of mental muscle predicting what will happen in the demo (the clicker question), they will be better primed to interpret the results of the demo correctly and revise their conceptual understanding appropriately.
  • Clicker-based examples – This is my hybrid of working an example on the board and asking them to work through an example-type problem on whiteboards. I give them a reasonably challenging example and ask them to work on it in groups with whiteboards. I develop clicker questions to help them work through each of the critical steps in the problem and then leave them to work out the more trivial details leading up to the next major step.
  • How many of the following apply? – This is a type of question that is usually meant to not have *ONE* correct answer and is instead meant to provoke discussion. I first came across this type of question in an AJP article from Beatty et al. Their example was to identify the number of forces being exerted on a block being pulled up a rough inclined plane while the block was also attached to a spring. There are multiple correct answers because, among other reasons, you can treat the normal and friction forces as a combined reaction force. Ambiguity rules here!
  • Clicker-assisted derivations – I used these a lot when I taught 3rd-year Quantum Mechanics and they saved me from the drudgery (and their boredom) of my working through long derivations on the board. These are similar to the clicker-based examples in that I use clicker questions to get THEM to work through the critical steps of the derivation. These questions either ask them to determine the next step in the derivation (when the textbook is “kind” enough to leave steps out) or ask them to decide on the reasoning that leads from one step in the derivation to the next. I would typically work through the derivation, but use these clicker questions to get them to really pay attention to the critical steps in the derivation.

I’m also planning to do a post where I will discuss most of these question types in more detail as well as provide some examples that I have used in class.

Grading:

So far I have only ever given participation marks for clicker questions. Answer some fraction (it has varied from course to course) of the questions in a given lecture and you get a participation point. It doesn’t matter if you get the clicker questions right or wrong; you will still get the participation point at the end of the day. I usually let them miss up to 10% (rounded up) of their participation points/lectures and still get full clicker participation marks in their overall grade. My courses max out at approximately 36 students, so it is very easy for me to wander and do my best to make sure everybody is putting in an honest effort to think about the questions and answer them. I have made the clicker participation points count for between 2% and 5% of their final grade. In my most recent course (intro Calculus-based E&M) only 8 of the 37 students DIDN’T get full clicker participation marks, and the average clicker participation mark was 95.2%.
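For concreteness, here is a rough Python sketch of how such a participation mark could be computed. The exact scaling is one reasonable reading of the policy above rather than the actual script I use:

    import math

    def clicker_participation_mark(lectures_with_enough_answers, total_lectures):
        """A student earns a participation point for any lecture in which
        they answer enough of the clicker questions, and can miss up to
        10% of lectures (rounded up) while still earning full marks."""
        allowed_misses = math.ceil(0.10 * total_lectures)
        required = total_lectures - allowed_misses
        # Cap at 100%: answering in more lectures than required earns no bonus.
        return min(1.0, lectures_with_enough_answers / required)

With 36 lectures, for example, a student could miss 4 of them (10% of 36, rounded up) and still receive full clicker participation marks.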

Last thoughts:

I mention in this post at least two other posts that I hope to write to follow this one. But there is another. The mention of Interactive Lecture Demonstrations always reminds me that I would like to do a post about the Physics Education Research community slowly moving away from the Elicit-Confront-Resolve type of questions that are central to Interactive Lecture Demonstrations. In my experience, students get tired of being asked questions where they know that their intuition or current understanding is going to give them the wrong answer. It seems that the work being done on measuring and improving student learning attitudes toward Physics (measured by CLASS and MPEX) is leading us away from Elicit-Confront-Resolve pedagogies.

I have also used coloured cards instead of clickers and prefer clickers because, among other reasons, they let me keep track of how the voting went (useful in many ways) and they preserve student anonymity. If you’re interested, Derek Bruff has a post where he discusses Mazur and Lasry’s paper that compares flashcards to clickers in terms of student learning.


4 Comments on “How I use clickers in my courses”

  1. This is great and I’m looking forward to the future posts. A few comments/questions:
    I don’t give any points for participation but I still feel like my students take it seriously (clicker questions, I mean). Do you feel the points are necessary?

    I’ve also noticed something really interesting about using flash cards. I tell my students that the height above their head indicates their confidence level. The cards are white on the back so they don’t influence each other (much) with their votes but they can see the relative confidence in the room when choosing who to go talk with during the peer discussion part. Essentially the answer is anonymous while the confidence isn’t. For more on this I’ve got a blog post on it: http://andyrundquist.blogspot.com/2010/01/clickers-vs-cards.html

  2. Joss Ives says:

    @Andy Rundquist – Thanks for the comments.

    I saw no real difference in points (for most students) when I went from them being worth 5% of total grade to 2% from one year to the next. But there are two things that make me want to continue giving them some credit for answering the questions. First, even if they are all taking the questions seriously, I like to give them credit for everything that they do that contributes to their learning. It communicates to them that the course is about their learning. Second, I’m always trying to find ways to help the bottom quartile do better in my courses and this often involves dangling really easy marks in front of them so that they will do enough work/learning to not fail the exams.

    I like your post on flash cards. I sometimes ask a supplementary confidence question after a clicker question, but your method of using the height of the cards to indicate confidence is much more elegant. One of my research and teaching interests is how to help the lower-achievement students perform better, so I always want as much data as possible to help identify these students early. One place where the clicker data helps with this is that I (and others: http://www.dillgroup.ucsf.edu/%7Ealucas/iclicker_paper_final.pdf) have found that the fraction of correct responses correlates pretty well with exam performance.



