Disclosure: my colleague, Georg Rieger, and I are currently in the process of securing post-doc funding to evaluate the effectiveness of Learning Catalytics and that position would be paid in part by Pearson, who owns Learning Catalytics.
I have been using Learning Catalytics, web-based “clickers on steroids” software, in a lecture course and a lab course since the start of September. In this post I want to focus on the logistical side of working with Learning Catalytics in comparison to clickers, and just touch briefly on the pedagogical benefits.
I will briefly summarize my overall pros and cons of using Learning Catalytics before diving into the logistical details:
- Pro: Learning Catalytics enables a lot of types of questions that are not practical to implement using clickers. We have used word clouds, drawing questions (FBDs and directions mostly), numerical response, choose-all-that-apply, and ranking questions. Although all of these question types, aside from the word clouds, are possible as multiple-choice if you have or can come up with good distractors, Learning Catalytics lets you collect the students' actual answers instead of having them pick their best guess from a selection of answers that you give to them.
- Con: Learning Catalytics is clunky. The bulk of this post will discuss these issues, but Learning Catalytics has larger hardware requirements, relies heavily on good wifi and website performance, is more fiddly to run as an instructor, and is less time-efficient than just using clickers (in the same way that using clickers is less time-efficient than using coloured cards).
- Pro: The Learning Catalytics group tool engages reluctant participants in a way that no amount of buy-in or running around the classroom trying to get students to talk to each other seems to be able to do. When you do the group revote portion of Peer Instruction (the “turn to your neighbor” part), Learning Catalytics tells the students exactly who to talk to (talk to Jane Doe, sitting to your right) and matches them up with somebody that answered differently than them. Although this should not be any different than instructing them to “find somebody nearby that answered differently than you did and convince them that you are correct,” there ends up being a huge difference in practice in how quickly they start these discussions and what fraction of the room seems to engage in these discussions.
Honestly, the first two points on their own would make me favour clickers a bit, but the difference in the level of engagement thanks to the group tool is the thing that has sold me on Learning Catalytics. On to the logistical details.
As you can see from the picture, I use a lot of devices when I teach with Learning Catalytics. You can get away with fewer devices, but this is the solution that meets my needs. I have tried some different iterations, and what I describe here is the one that I have settled on.
- In the middle you will find my laptop, which runs my main slide deck and is permanently projected on one of the two screens in the room. It has a Bamboo writing tablet attached to it to mark up slides live and will likely be replaced by a Surface Pro 3 in the very near future.
- At the bottom is my tablet (iPad), which I use to run the instructor version of Learning Catalytics. This is where I start and stop polls, choose when and how to display results to the students, and perform other such instructorly tasks. The screen is never shared with the students and is analogous to the instructor remote with the little receiver+display box that I use with iclickers. Since it accesses Learning Catalytics over wifi and is not projected anywhere, I can wander around the room with it in my hand and monitor the polls while talking to students. Very handy! I have also tried to do this from my smartphone when my tablet battery was dead, but the instructor UI is nowhere near as good on a smartphone as it is on larger tablets or regular web browsers.
- At the top is a built-in PC which I use to run the student version of Learning Catalytics. This displays the Learning Catalytics content that students are seeing on their devices at any moment. I want to have this projected for two reasons. First, I like to stomp around and point at things when I am teaching, so I want the question currently being discussed or result currently being displayed to be something that I can point at and focus their attention on instead of it just being on the screens of their devices. Second, I need the feedback of what the students see at any moment to make sure that the question or result that I intended to push to their devices has actually been pushed. For the second point, it is reasonable to flip back and forth between instructor and student view on the device running Learning Catalytics (this is what one of my colleagues does successfully), but I find that a bit clunky and it still doesn’t meet my need to stomp around and point at stuff. The instructor version of Learning Catalytics pops up a student view and this is what I use here (so technically I am logged in as an instructor on two devices at once). The student view that pops up with the instructor version makes better use of the projected screen real estate (e.g., results are shown along the side instead of at the bottom) than the student version that one gets when logging in using a student account.
The trade-off when going from clickers to Learning Catalytics is that you gain a bunch of additional functionality, but in order to do so you need to take on a somewhat clunky and less time-efficient system. There are additional issues that may not be obvious from just the hardware setup described above.
- I am using 3 computer-type devices instead of a computer and clicker base. Launching Learning Catalytics on a device takes only a bit longer than plugging in my iclicker base and starting the session, but this is still one or two more devices to get going (again, my choice and preference to have this student view). Given the small amount of time that we typically have between gaining access to a room and the time at which we start a class, each extra step in this process introduces another possible delay in starting class on time. With 10 minutes, I find I am often cutting it very close and sometimes not quite ready on time. In two of the approximately twelve lectures where I intended to use Learning Catalytics this term, there was a wifi or Learning Catalytics website problem. Once I just switched to clickers (the students have them for their next course) and the other time the problem resolved quickly enough that it just cost us a bit of time. When I remember to do so, I can save myself a bit of time by starting the session on my tablet before I leave my office.
- The workflow of running a Learning Catalytics question is very similar to running a clicker question, but after six weeks of using Learning Catalytics, clickers feel like they have a decent-sized advantage in the “it just works” category. There are many more choices with the Learning Catalytics software, and with that a loss of simplicity. Since I did have the experience a few weeks ago of using clickers instead of Learning Catalytics, I can say that the “it just works” aspect of the clickers was reinforced.
- Overall, running a typical Learning Catalytics question feels less time-efficient than a clicker question. It takes slightly longer to start the question, for them to answer and then to display the results. This becomes amplified slightly because many of the questions we are using require the students to have more complicated interactions with the question than just picking one of five answers. All that being said, my lecture TA and I noted last week that it felt like we finally got to a point where running a multiple-choice question in Learning Catalytics felt very similar in time from beginning to end as with clickers. To get to this point, I have had to push the pace quite a bit with these questions, starting my “closing the poll” countdown when barely more than half of the answers are in. So I think I can run multiple choice questions with similar efficiency on both systems now, but I am having to actively force the timing in the case of Learning Catalytics. However, having to force the timing may be a characteristic of the students in the course more than the platform.
- Batteries! Use of Learning Catalytics demands that everybody has a sufficiently charged device or ability to plug their device in, including the instructor. This seems a bit problematic if students are taking multiple courses using the system in rooms where charging is not convenient.
- Preparing for class also has additional overhead. We have been preparing the lecture slides in the same way as usual and then porting any questions we are using from the slides into Learning Catalytics. This process is fairly quick, but still adds time to the course preparation process. Where it becomes a bit annoying is that sometimes the slide and Learning Catalytics versions of a question aren’t identical due to a typo or modification that was made on one platform but accidentally not on the other. There haven’t been a ton of these, but it is one more piece that makes using Learning Catalytics a bit clunky.
- In its current incarnation, it seems like one could use Learning Catalytics to deliver all the slides for a course, not just the questions. This would be non-ideal for me because I like to ink up my slides while I am teaching, but this would allow one to get rid of the need for a device that was projecting the normal slide deck.
An instructor needs to be willing to take on a lot of overhead, inside the class and out, if they want to use Learning Catalytics. For courses where many of the students are reluctant to engage enthusiastically with the peer discussion part of the Peer Instruction cycle, the group tool functionality can make a large improvement in that level of engagement. The additional question types are nice to have, but feel like they are not the make or break feature of the system.
We are a couple of weeks away from our one and only term test in my intro calc-based electricity and magnetism course. This test comes in the second last week of the course and I pitch it to them as practicing for the final. This term test is worth 10-20% of their final grade and the final exam 30-40% of their final grade and these relative weights are meant to maximize the individual student’s grade.
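The grade-maximizing weighting can be sketched in a few lines. This is my own illustration of the idea, not the official marking scheme: the function name is made up, and I am assuming the two components together account for a fixed 50% of the grade, split either 10/40 or 20/30, whichever favours the student.

```python
def exam_contribution(test: float, final: float) -> float:
    """Contribution (out of 50) of the term test and final exam.

    Hypothetical sketch: assumes the term test is worth 10% or 20%
    and the final 40% or 30% correspondingly, with the split chosen
    to maximize the student's grade. Scores are percentages (0-100).
    """
    light_test = 0.10 * test + 0.40 * final  # term test weighted low
    heavy_test = 0.20 * test + 0.30 * final  # term test weighted high
    return max(light_test, heavy_test)
```

Under these assumptions, a student who bombs the term test but aces the final gets the 10/40 split, while a student who does the reverse gets 20/30.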
Today I asked them how they feel the major course components are contributing to their learning:
How much do you feel that the following course component has contributed to your learning so far in this course?
This is a bit vague, but I told them to vote according to what contributes to their understanding of the physics in this course. It doesn’t necessarily mean what makes them feel the most prepared for the term test, but if that is how they wanted to interpret it, that would be just fine.
For each component that I discuss below, I will briefly explain how it fits into the overall course, so by the end you should have a sense of how the whole course works.
The smartphysics pre-class assignments
The pre-class assignments are the engine that allows my course structure to work the way I want it to, and I have been writing about them a lot lately (see my most recent post in a longer series). My specific implementation is detailed under ‘Reading assignments and other “learning before class” assignments’ in this post. The quick and dirty explanation is that, before coming to class, my students watch multimedia prelectures that have embedded conceptual multiple-choice questions. Afterward they answer 2-4 additional conceptual multiple-choice questions where they are asked to explain the reasoning behind each of their choices. They earn marks based on putting in an honest effort to explain their reasoning as opposed to choosing the correct answer. Then they show up to class ready to build on what they learned in the pre-class assignment.
The smartphysics online homework
The homework assignments are a combination of “Interactive Examples” and multi-part end-of-chapter-style problems.
The Interactive Examples tend to be fairly long and challenging problems where the online homework system takes the student through multiple steps of qualitative and quantitative analysis to arrive at the final answer. Some students seem to like these questions and others find them frustrating because they managed to figure out 90% of the problem on their own but are forced to step through all the intermediate guiding questions to get to the bit that is giving them trouble.
The multi-part end-of-chapter-style problems require, in theory, conceptual understanding to solve. In practice, I find that a lot of the students simply number mash until the correct answer comes out the other end, and then they don’t bother to step back and try to make sure that they understand why that particular number mashing combination gave them the correct answer. The default for the system (which is the way that I have left it) is that they can have as many tries as they like for each question and are never penalized as long as they find the correct answer. This seems to have really encouraged the mindless number mashing.
This is why their response regarding the learning value of the homework really surprised me. A sufficient number of them have admitted that they usually number mash, so I would have expected them not to place so much learning value on the homework.
Studying for quizzes and other review outside of class time
I have an older post that discusses these in detail, but I will summarize here. Every Friday we have a quiz. They write the quiz individually, hand it in, and then re-write the same quiz in groups. They receive instant feedback on their group quiz answers thanks to IF-AT multiple-choice scratch-and-win sheets and receive partial marks based on how many tries it took them to find the correct answer. Marks are awarded 75% for the individual portion and 25% for the group portion OR 100% for the individual portion if that would give them the better mark.
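That marking rule is easy to state as code. This is just an illustrative sketch of the rule described above (the function name and 0-100 scale are my assumptions, not course software):

```python
def quiz_mark(individual: float, group: float) -> float:
    """Final quiz mark (0-100) from the individual and group stages.

    75% individual + 25% group, or 100% individual, whichever is
    higher -- so the group stage can never hurt a student's mark.
    """
    return max(0.75 * individual + 0.25 * group, individual)
```

For example, an individual 60 with a group 100 yields 70, while an individual 90 with a group 50 stays at 90.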
The questions are usually conceptual and often test the exact same conceptual step needed for them to get a correct answer on one of the homework questions (but not always with the same cover story). There are usually a lot of ranking tasks, which the students do not seem to like, but I do.
I have an older post that discusses these in detail, but I will again summarize here. For the quiz correction assignments they are asked, for each question, to diagnose what went wrong and then to generalize their new understanding of the physics involved. If they complete these assignments in the way I have asked, they earn back half of the marks they lost (e.g. a 60% quiz grade becomes 80%).
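The earn-back rule amounts to “close half the gap to 100%.” A minimal sketch, with the function name assumed for illustration:

```python
def corrected_grade(original: float) -> float:
    """Quiz grade (0-100) after a completed correction assignment.

    Students earn back half of the marks they lost:
    60 -> 60 + (100 - 60) / 2 = 80.
    """
    return original + (100 - original) / 2
```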
I am delighted to see that 42% of them find that these have a large contribution to their learning. The quizzes are worth 20% of their final grade, so I would have guessed that their perceived learning value would get lost in the quest for points.
I am a full-on interactive engagement guy. I use clickers, in the question-driven instruction paradigm, as the driving force behind what happens during class time. Instead of working examples at the board, I either (A) use clicker questions to step the students through the example so that they are considering for themselves each of the important steps instead of me just showing them or (B) get them to work through examples in groups on whiteboards. Although I aspire to have the students report out their solutions in a future version of the course (“board meeting”), what I usually do when they work through the example on their whiteboards is wait until the majority of the groups are mostly done and then work through the example at the board with lots of their input, often generating clicker questions as we go.
The take home messages
Group quizzes rule! The students like them. I like them. The research tells us they are effective. Everybody wins. And they only take up approximately 10 minutes each week.
I need to step it up in terms of the perceived learning value of what we do in class. That 2/3rds number is somewhere between an accurate estimate and a small overestimate of the fraction of the students in class that at any moment are actively engaged with the task at hand. This class is 50% larger than my usual intro courses (54 students in this case) and I have been doing a much poorer job than usual of circulating and engaging individual students or groups during clicker questions and whiteboarding sessions. The other 1/3 of the students are a mix of students surfing/working on stuff for other classes (which I decided was something I was not going to fight in a course this size) and students that have adopted the “wait for him to tell us the answer” mentality. Peter Newbury talked about these students in a recent post. I have lots of things in mind to improve both their perception and the actual learning value of what is happening in class. I will sit down and create a coherent plan of attack for the next round of courses.
I’m sure there are lots of other take home messages that I can pluck out of these data, but I will take my victory (group quizzes) and my needs-improvement item (working on the in-class stuff) and look forward to continuing to work on course improvement.
This post is in response to Chad Orzel’s recent post about moving toward a more active classroom. He plans to get the students to read the textbook before coming to class, and then minimize lecture in class in favour of “in-class discussion/ problem solving/ questions/ etc.” At the end of the post he puts out a call for resources, which is where this post comes in.
There are three main things I want to discuss in this post, and (other than some links to specific clicker resources) they are all relevant to Chad or anybody else considering moving toward a more active classroom.
- Salesmanship is key. You need to generate buy-in from the students so that they truly believe that the reason you are doing all of this is so that they will learn more.
- When implementing any sort of “learn before class” strategy, you need to step back and decide what you realistically expect them to be able to learn from reading the textbook or watching the multimedia presentations.
- The easiest first step toward a more (inter)active classroom is the appropriate use of clickers or some reasonable low-tech substitute.
I also realized early on in my career that salesmanship is key. I need to explain why I want them to do the reading, and the 3 JiTT (ed. JiTT = Just-in-Time-Teaching) questions, and the homework problems sets, etc. My taking some time periodically to explain why it is all in their best interest (citing the PER studies, or showing them the correlation between homework done and exam grades), seems to help a lot with the end of term evals.
And I completely agree. I changed a lot of little things between my first and second year of teaching intro physics, but the thing that seemed to matter the most is that I managed to generate much more buy-in from the students the second year that I taught. Once they understood and believed that all the “crazy” stuff I was doing was for their benefit and was backed up by research, they followed me down all the different paths that I took them. My student evals, for basically the same course, went up significantly (0.75ish on a 5-point scale) between the first and second years.
A resource that I will point out for helping to generate student buy-in was put together for Peer Instruction (in Computer Science), but much of what is in there is applicable beyond Peer Instruction to the interactive classroom in general. Beth Simon (Lecturer at UCSD and former CWSEI STLF) made two screencasts to show/discuss how she generates student buy-in:
- Introduction to PI for Class: In this screencast Beth runs through her salesmanship slides in the same way that she does for a live class. “You don’t have to trust the monk!”
- Overview of Supporting Slides for Clickers Peer Instruction: In this screencast Beth discusses informally some of the supporting slides which discuss the reasons and value for using Peer Instruction.
Reading assignments and other “learning before class” assignments
This seems to be a topic that I have posted about many times and for which I have had many conversations. I will briefly summarize my thoughts here, while pointing interested readers to some relevant posts and conversations.
When implementing “read the text before class” or any other type of “learn before class” assignments, you have to establish what exactly you want the students to get out of these assignments. My purpose for these types of assignments is to get them familiar with the terminology and lowest-level concepts, anything beyond that is what I want to work on in class. With that purpose in mind, not every single paragraph or section of a given chapter is relevant for my students to read before coming to class. I refer to this as “textbook overhead” and Mylene discussed this as part of a great post on student preparation for class.
I have tried reading quizzes at the beginning of class and found that it was too hard to pitch them at exactly the right level: one where most of the students who did the reading would get them right and most of the students who didn’t do the reading wouldn’t.
Last year I used a modified version of the reading assignment portion of JiTT (this list was originally posted here):
- Assign reading
- Give them 3 questions. These questions are either directly from the JiTT book (I like their estimation questions) or are easy clicker questions pulled from my collection. For the clicker questions I ask them to explain their reasoning in addition to simply answering the question.
- Get them to submit via web-form or email
- I respond to everybody’s submissions for each question to try to help clear up any mistakes in their thinking. I use a healthy dose of copy and paste after the first few and can make it through 30ish submissions in just over an hour.
- Give them some sort of credit for each question in which they made an effortful response whether they were correct or incorrect.
I was very happy with how this worked out. I think it really helped that I always responded to each and every one of their answers, even if it was nothing more than “great explanation” for a correct answer. I generated enough buy-in to have an average completion rate of 78% on these assignments over the term in my Mechanics course last time I taught it. I typically weight these assignments at 8-10% of the final grade, so they have a pretty strong (external) incentive to do them.
As I mentioned previously, my current thinking is that I want the initial presentation (reading or screencast) that the students encounter to be one that gets them familiar with terminology and low-level or core concepts. As Mylene says “It’s crazy to expect a single book to be both a reference for the pro and an introduction for the novice.” So that leaves me in a position where I need to generate my own “first-contact” reading materials or screencasts that best suit my needs and this is something that I am going to try out in my 3rd-year Quantum Mechanics course this fall.
It turns out that for intro physics there is an option which will save me this work. I am using smartPhysics this year (disclaimer: the publisher is providing the text and online access completely free to my students for the purposes of evaluation). To explain what smartPhysics is, I will pseudo-quote from something I previously wrote:
For those teaching intro physics who are more interested in screencasting/pre-class multimedia video presentations instead of pre-class reading assignments, you might wish to take a look at SmartPhysics. It’s a package developed by the PER group at UIUC that consists of online homework, online pre-class multimedia presentations and a shorter than usual textbook (read: cheaper than usual) because there are no end-of-chapter questions in the book, and the book’s presentation is geared more toward being a student reference since the multimedia presentations take care of the “first time encountering a topic” level of exposition. My understanding is that they paid great attention to Mayer’s research on minimizing cognitive load during multimedia presentations. I will be using SmartPhysics for my first time this coming fall and will certainly write a post about my experience once I’m up and running.
Since writing that I have realized that the text from the textbook is more or less the transcript of the multimedia presentations so in a way this textbook actually is a reference for the pro and an introduction for the novice. They get into more challenging applications of concepts in their interactive examples which are part of the online homework assignments. For example, they don’t even mention objects landing at a different height than the launch height in the projectile motion portion of the textbook, but have an interactive example to look at this extension of projectile motion.
The thing with smartPhysics is that their checkpoint assignments are basically the same as the pre-class assignments I have been using so it should be a pretty seamless transition for me from that perspective. I still haven’t figured out how easy it is to give students direct feedback on their checkpoint assignment questions in smartPhysics, and remember that I consider that to be an important part of the student buy-in that I have managed to generate in the past.
(edit: the following discussion regarding reflective writing was added Aug 11) Another option for getting students to read the text before coming to class is reflective writing, which is promoted in Physics by Calvin Kalman (Concordia). From “Enhancing Students’ Conceptual Understanding by Engaging Science Text with Reflective Writing as a Hermeneutical Circle“, CS Kalman, Science & Education, 2010:
For each section of the textbook that a student reads, they are supposed to first read the extract very carefully trying to zero in on what they don’t understand, and all points that they would like to be clarified during the class using underlining, highlighting and/or summarizing the textual extract. They are then told to freewrite on the extract. “Write about what it means.” Try and find out exactly what you don’t know, and try to understand through your writing the material you don’t know.
The writing itself is not marked for content, since the students are doing it for the purposes of their own understanding, but it can be marked for completeness.
Clicker questions and other (inter)active physics classroom resources
Chad doesn’t mention anywhere in his post that he is thinking of using clickers, but I highly recommend using them or a suitable low-tech substitute for promoting an (inter)active class. I use a modified version of Mazur’s Peer Instruction and have blogged about my specific use of clickers in my class in the past. Many folks have implemented vanilla or modified peer instruction with cards and had great success.
Clicker question resources: My two favourite resources for intro physics clicker questions are:
- The Ohio State clicker question sequences and,
- The collections put together by the folks at Colorado.
I quite like the questions that Mazur includes in his book but find that they are too challenging for my students without appropriate scaffolding in the form of intermediate clicker questions which can be found in both the resources I list above.
Clicker-based examples: Chad expressed frustration that “when I do an example on the board, then ask them to do a similar problem themselves, they doodle aimlessly and say they don’t have any idea what to do.” To deal with this very issue, I have a continuum that I call clicker-based examples and will discuss the two most extreme cases that I use, but you can mash them together to produce anything in between:
- The easier-for-students case is that, when doing an example or derivation, I do most of the work but get THEM to make the important mental jumps. For a typical example, I will identify 2-4 points in the example that would cause them some grief if they tried to do the example completely on their own. When I work this example at the board (or on my tablet) I will work through the example as usual, but when I get to one of the “grief” points I will pose a clicker question. These clicker questions might be things like “which free-body diagram is correct?”, “which of the following terms cancel?” or “which reasoning allowed me to go from step 3 to step 4?”
- The other end of the spectrum is that I give them a harder question and still identify the “grief” points. But I instead get them to do all the work in small groups on whiteboards. I then help them through the question by posing the clicker questions at the appropriate times as they work through the problems. Sometimes I put all the clicker questions up at the beginning so they have an idea of the roadmap of working through the problem.
An excellent resource for questions to use in this way is Randy Knight’s 5 Easy Lessons, which is a supercharged instructor’s guide to his calculus-based intro book. The first time I used a lot of these questions I found that the students often threw their hands up in the air in confusion. So I would wander around the room (36 students) and note the points at which the students were stuck and generate on-the-fly clicker questions. The next year I was able to take advantage of those questions I had generated the previous year and then had all the “grief” points mapped out and the clicker questions prepared for my clicker-based examples.
Not related to clicker questions, but still relevant to the (inter)active class: group quizzes are something that I have previously posted about, and I have also presented a poster on the topic. I give the students a weekly quiz that they write individually first, and then after the quizzes have all been handed in they re-write the quiz in groups. Check out the post that I linked to if you want to learn more about exactly how I implement these as well as the pros and cons. Know that they are my single favourite thing that happens in my class, because it is when I get to see the students at their most animated while discussing the application of physics concepts. It is loud and wonderful and I am trying to figure out how to show that there is a quantifiable learning benefit.
This is a collection of things that tickled my science education fancy in the past couple of weeks or so.
Reflections on Standards-Based Grading
Lots of end-of-year reflections from SBG implementers
- SBG with voice revisions – Andy Rundquist only accepts (re)assessments where he can hear the student’s voice. When they hand in a problem solution, it basically has to be a screencast or pencast (livescribe pen) submission. The post is his reflections on what worked, what didn’t and what to do next time.
- Standards-Based Feedback and SBG Reflections – Bret Benesh has two SBG-posts one after the other. I was especially fond of the one on Standards-Based Feedback where he proposes that students would not receive standards-based grades throughout the term but would instead produce a portfolio of their work which best showed their mastery for each standard. This one got my mind racing and my fingers typing.
- A Small Tweak and a Feedback Inequality – Dan Anderson posts about providing feedback-only on the first assessment, in nerd form: Feedback > Feedback + Grade > Grade. This is his take on the same issue that led Bret Benesh to thinking about Standards-Based Feedback: when there is a grade and feedback provided, the students focus all their attention on the grade. He also has a neat system for calculating the final score for an assessment.
- Reflections on SBG – Roger Wistar (computer science teacher) discusses his SBG journey and the good and bad of his experience so far.
- Modeling Workshop: Week 1 (Fear and Respect The Hestenes) – Shawn Cornally tells us about his first week at a summer modeling workshop and he seems to be loving it.
Flipped classrooms and screencasting
- Lecturing, Screencasting, Flipped Classrooms – Mylene posts some thoughts about lecturing after attending a recent webinar on flipped classrooms. Great conversation ensues in the comments.
- How I make screencasts: The whiteboard screencast – Robert Talbert continues on with his how-to screencast series.
- Why should I use peer instruction in my class? – Peter uses a study on student (non)learning from video by the Kansas State Physics Education Research Group to help answer this question. The short answer is “Because they give the students and you the ability to assess the current level of understanding of the concepts. Current, right now, before it’s too late and the house of cards you’re so carefully building comes crashing down.”
The tale of sciencegeekgirl’s career
- How a Scientist Became a Freelance Science Writer – Stephanie Chasteen (sciencegeekgirl) talks about how she earned her physics PhD while also developing as a science writer.
Getting them to do stuff they are interested in
- The Future of Education Without Coercion (Video) – Shawn Cornally (Think Thank Thunk blog) talks about how to rethink what exactly productive student work is. And it all starts with getting them to do stuff that they’re interested in.
- Angry Birds in the Physics Classroom – Speaking of things most people are interested in, Frank Noschese posts about some physics-based investigations students can do using Angry Birds.
John Burk gets busy
- John Burk (Quantum Progress blog) has been a very busy blogger over the past couple of weeks. Highlights include a couple of Rhett Allain-esque Google doodle analyses (here and here), some Arduino fun (stay tuned for my post on DAQ systems which includes arduino), “The time has come to stop playing defense and change education” (let’s not just sit there and criticize Khan Academy, let’s go out and show what can be done that is better), and a first vPython assignment for high school students.
Note: This post was originally part of a post about getting as many students as possible involved in coursewide discussions supported by clicker questions. The post quickly grew way too large, so I decided to start with a post on how I use clickers in my courses to set up the later post on coursewide discussions.
I have used clickers in six of my courses since 2009: five were intro-level Physics and the other one was a 3rd-year (that’s Canadian for Junior-level) Quantum Mechanics course. I am at a point where I feel like I really get how (I like) to use clickers in my classroom thanks to practice, reflection and helpful resources along the way (Mazur’s PI book, CWSEI resources, Derek Bruff’s blog and book). I have collected a ton of questions (my favorites: Mazur’s PI questions, CU-SEI, OSU PER) and for my most commonly taught courses I now know what sort of response distributions to expect for different questions and can use this to move the class in different directions. I have also developed the salesmanship needed to generate student buy-in (“the research shows that teaching method X will help you learn more”), which makes everything a lot easier.
There are a ton of resources out there discussing the why and how of using clickers, so I won’t go into it here. The resources I listed above are some good starting points.
Modified Peer Instruction:
Most of the time I use a flexible/lazy/modified version of Mazur’s Peer Instruction where I get the students to initially discuss the question at their table (usually 3 students) and then vote. If there are roughly 40-70% correct answers, I get them to find somebody not at their table who voted differently than them and discuss their answer with that person. I usually don’t show them the histogram after the first vote but sometimes I will if two or more answers have a similar vote count and I think that seeing the distribution will help guide their discussion by focusing it on only a couple of choices. Then I get them to revote. Either way, once I get over roughly 70% correct answers I will tell them that most of them agree and then solicit student explanations for their answers.
Common types of questions that I use:
Overall, the most common type of question I use is what Mazur calls a ConcepTest: a question that tests the application of one or more concepts and which has only one correct answer. Typically the ConcepTests from Mazur’s book are too challenging for my students to be able to answer correctly without some bridging questions. Fortunately I came across the OSU PER group’s clicker question sequences, which are sequences of 3-5 conceptual questions that start with a relatively easy application of a concept and, applying the same concept in each question, build toward a challenging final question. Those challenging questions tend to be of similar difficulty to Mazur’s questions and sometimes actually are Mazur’s questions.
Some of the other most common types of questions that I use are discussed briefly below. Like the clicker question types you wouldn’t put on a test, discussed by Derek Bruff, many of these question types wouldn’t make much sense as a multiple-choice question on an exam, but they have their own specific purposes in my classroom:
- Predict what will happen questions before doing a demo – Based on Sokoloff and Thornton’s Interactive Lecture Demonstrations, I set up and explain the idea behind a demo and then get them to predict what will happen when I run the demo. It is a well-known issue that students don’t always see what we want them to see when we show a demo, and they will even misremember what they see in a way that matches up with their existing conceptual (mis)understanding of the phenomenon being demonstrated. Basically, if they have to flex a bit of mental muscle predicting what will happen in the demo (the clicker question), they will be better primed to interpret the results of the demo correctly and revise their conceptual understanding appropriately.
- Clicker-based examples – This is my hybrid of working an example on the board and asking them to work through an example-type problem on whiteboards. I give them a reasonably challenging example and ask them to work on it in groups with whiteboards. I develop clicker questions to help them work through each of the critical steps in the problem and then leave them to work out the more trivial details leading up to the next major step.
- How many of the following apply? – This is a type of question that is usually meant to not have *ONE* correct answer and is meant to provoke discussion. I first came across this type of question in an AJP article from Beatty et al. Their example was to identify the number of forces being exerted on a block being pulled up a rough inclined plane, while the block was also attached to a spring. There are multiple correct answers because, among other reasons, you can treat the normal and friction forces as a combined reaction force. Ambiguity rules here!
- Clicker-assisted derivations – I used these a lot when I taught 3rd-year Quantum Mechanics and they saved me from the drudgery (and their boredom) of my working through long derivations on the board. These are similar to the clicker-based examples in that I use clicker questions to get THEM to work through the critical steps of the derivation. These questions either ask them to determine the next step in the derivation (when the textbook is “kind” enough to leave steps out) or ask them to decide on the reasoning that leads from one step in the derivation to the next. I would typically work through the derivation, but use these clicker questions to get them to really pay attention to the critical steps in the derivation.
I’m also planning to do a post where I will discuss most of these question types in more detail as well as provide some examples that I have used in class.
So far I have only ever given participation marks for clicker questions. Answer some fraction (it has varied from course to course) of the questions in a given lecture and you get a participation point. It doesn’t matter if you get the clicker questions right or wrong, you will still get the participation point at the end of the day. I usually let them miss up to 10% (rounded up) of their participation points/lectures and still get full clicker participation marks in their overall grade. My courses max out at approximately 36 students, so it is very easy for me to wander and do my best to make sure everybody is putting in an honest effort to think about the questions and answer them. I have made the clicker participation points count for between 2% and 5% of their final grade. In my most recent course (intro Calculus-based E&M) only 8 of the 37 students DIDN’T get full clicker participation marks, and the average clicker participation mark was 95.2%.
I mention in this post at least two other posts that I hope to write to follow this one. But there is another. The mention of Interactive Lecture Demonstrations always reminds me that I would like to do a post about the Physics Education Research community slowly moving away from the Elicit-Confront-Resolve type of questions that are central to Interactive Lecture Demonstrations. In my experience, students get tired of being asked questions where they know that their intuition or current understanding is going to give them the wrong answer. It seems that the work being done on measuring and improving student learning attitudes toward Physics (measured by CLASS, MPEX) is leading us away from the Elicit-Confront-Resolve pedagogies.
I have also used coloured cards instead of clickers and prefer clickers because, among other reasons, they let me keep track of how the voting went (useful in many ways) and they preserve student anonymity. If you’re interested, Derek Bruff has a post where he discusses Mazur and Lasry’s paper that compares flashcards to clickers in terms of student learning.