Pre-class homework completion rates in my first-year courses

In my mind it is hard to get students to do pre-class homework ("pre-reading") with much more than an 80% completion rate when averaged over the term. It usually starts higher than this, but there is a slow decline in completion as the term wears on. After taking a more careful look at the five introductory courses in which I used pre-class assignments, I discovered that I did much better than 80% in some of the later courses, and I want to share my data.

Descriptions of the five courses

The table below summarizes some of the key differences between the five introductory physics courses in which I used pre-class assignments. It may also be important to note that the majority of the students in Jan 2010 were the same students from Sep 2009, and not much more than half of the Jan 2012 students took my Sep 2011 course. For Jan 2013, only two of the students had previously taken a course with me.

Course | Textbook | Contribution to overall course grade | Median completion rate (1st and 3rd quartiles in brackets)
Sep 2009 (Mechanics) | Young & Freedman, University Physics 11e | Worth 8%, but the 3 worst assignments were dropped; no late submission or earning back lost marks | 0.73 (0.62, 0.79)
Jan 2010 (E&M) | Young & Freedman, University Physics 11e | Worth 10%, but the 2 worst assignments were dropped; no late submission or earning back lost marks | 0.78 (0.74, 0.89)
Sep 2011 (Mechanics) | smartPhysics | Worth 8%; no assignments dropped, but they could (re)submit for half marks at any point up until the final exam | 0.98 (0.96, 0.98)
Jan 2012 (E&M) | smartPhysics | Worth 8%; no assignments dropped, but they could (re)submit for half marks at any point up until the final exam | 0.94 (0.93, 0.98)
Jan 2013 (E&M) | Halliday, Resnick & Walker, Fundamentals of Physics 9e, plus smartPhysics multimedia presentations | Worth 10%; no assignments dropped, but they could (re)submit for half marks at any point up until the final exam | 0.93 (0.87, 0.97)

Overall, the style of question used was the same for each course, with the most common type being a fairly straightforward clicker question (I discuss the resources and assignments a bit more below). I have not crunched the numbers, but scanning through results from the Jan 2013 course suggests that students answered the questions correctly somewhere in the 65-90% range; the questions used in that course were a mishmash of those from the Jan 2010 and Jan 2012 courses. Every question included an "explain your answer" part. These assignments were graded on completion only, but the explanation had to show a reasonable level of effort to earn the completion marks. Depending on class size, I did not always read the explanations in detail, but I always scanned every answer. For the first couple of assignments I made sure to send some feedback to each student, including an explanation of the correct answer when they had answered incorrectly. Each question was also discussed in class.

A rundown of how the resources and assignments varied by class:

  • For Sep 2009 and Jan 2010 I used a Blackboard assignment to deliver the three questions each week and told students which sections of the textbook to read. I didn't do much to steer them away from passages or examples that weren't directly relevant.
  • For Sep 2011 and Jan 2012 I used smartPhysics (link to UIUC PER group page, where they were developed). These consist of multimedia presentations for each chapter or major topic, with embedded conceptual questions (no student explanations required for these). After finishing a multimedia presentation, students answer the actual pre-class questions, which are different from those embedded in the presentation. For the most part these pre-class questions were similar to the ones I had used previously, except that the smartPhysics ones were often more difficult. My one major criticism of smartPhysics is that I don't feel the materials are pitched at the appropriate level for a student encountering the material for the first time; for more on this, have a look at the second bullet in the "Random Notes" section of this post I did on pre-class assignments (link). One of the very nice things about smartPhysics is that everything (the regular homework, the pre-class assignments and the multimedia presentations) used the same web system.
  • For Jan 2013, I went back to assigning the pre-class assignments through Blackboard. The preamble for each assignment pointed students toward a smartPhysics multimedia presentation and the relevant sections of the textbook we were using; they could use one, the other, or both of these resources as they saw fit. I don't think I ever surveyed them on their use of one resource over the other, but anecdotally my sense was that far more of them were using the multimedia presentations.

The data

I present two graphs showing the same data from different perspectives. Figure 1 shows how the fraction of the class completing a given pre-class assignment varies over the course of the term. There is a noticeable downward trend in each course. Figure 2 shows the fraction of assignments completed by each student in each class.


Figure 1. For the first five graphs, the x-axis represents time, running from the start of the course (left) to the end of the course (right). The box-and-whisker plot compares the five courses using the same colours as the other panels; the line in each box shows the median and the box spans the 1st to 3rd quartiles. The whiskers are the matplotlib default: they extend to the most extreme data point within 1.5 times the interquartile range of the box.


Figure 2. Histograms showing the fraction of assignments completed by each student. An ambiguous-looking artifact appears in the 0 bin for Sep 2011, but all students in that course completed 70% or more of the pre-class assignments.
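(As an aside for anyone wanting to reproduce this style of figure: below is a minimal matplotlib sketch of the Figure 1 box-and-whisker comparison. The completion fractions are made-up placeholder numbers, since I am not reproducing the per-assignment data here, and whis=1.5 simply makes the default whisker rule from the caption explicit.)

import matplotlib.pyplot as plt

# Made-up per-assignment completion fractions for each course (placeholders,
# not the actual data behind Figures 1 and 2).
completion = {
    "Sep 2009": [0.85, 0.79, 0.76, 0.73, 0.66, 0.62, 0.55],
    "Jan 2010": [0.92, 0.89, 0.82, 0.78, 0.75, 0.74, 0.68],
    "Sep 2011": [0.99, 0.98, 0.98, 0.97, 0.96, 0.94],
    "Jan 2012": [0.99, 0.98, 0.95, 0.94, 0.93, 0.90],
    "Jan 2013": [0.98, 0.97, 0.94, 0.93, 0.87, 0.84],
}

fig, ax = plt.subplots()
# whis=1.5 is the matplotlib default: whiskers extend to the most extreme
# data point within 1.5*(Q3 - Q1) of the box.
ax.boxplot(list(completion.values()), labels=list(completion.keys()), whis=1.5)
ax.set_ylabel("Fraction of class completing each assignment")
plt.show()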

Discussion

There is clearly a large difference between the first two courses and the final three in the rates at which students completed these pre-class assignments. The fact that 98% of these assignments were completed in one term still shocks me. I'm not sure how much each of the following factors contributed to the change, but here are some of the potential factors…

  1. Multimedia presentations – students seem to find these easier to consume than textbook readings. Homeyra Sadaghiani at California State Polytechnic University ran a controlled study [Phys. Rev. ST Phys. Educ. Res. 8, 010103 (2012)] comparing multimedia presentations to textbook readings, using the same pre-class assignments for both groups. In addition to finding that the multimedia group did slightly better on the exams, she found that those students had a favorable attitude toward the usefulness of the multimedia presentations, whereas the textbook group had an unfavorable attitude toward the textbook reading assignments. She also mentions that the multimedia group had a more favorable attitude toward clicker questions than the textbook section, and that alone could explain the difference in exam performance, as opposed to the difference coming from the learning that takes place during the pre-class assignments. If the students in one section buy into how the course is run more than those in another section, they will engage more fully with all of the learning opportunities and should learn more as a result. There are a variety of reasons why one might prefer having students read the textbook over watching a video or multimedia presentation, but you can't argue with the participation results.
  2. Generating buy-in – I have certainly found that, as time wears on, I have gotten better at generating buy-in for the way my courses are run, including following up on the pre-class assignments in class and weaving trends from the submissions into the lesson. On the other hand, the Sep 2009 and Jan 2010 courses featured the most personal feedback on pre-class assignments that I have ever sent to students in an intro course, so I might have expected my improvement at generating buy-in to simply cancel out the decrease in personal feedback.
  3. Changes in grading system – This may be a very large factor, and it is tied to generating buy-in. For the first two courses I allowed students to drop their 3 or 2 worst pre-class assignments from their overall grade. In the later courses I changed the system so that no assignments could be dropped, but assignments could be submitted late for half credit. The latter scheme clearly communicates to the students that I think it is worth their time to complete all of the assignments.

Poking around through the UIUC papers and those from Sadaghiani confirms that the 98% completion rate from my Sep 2011 course is really high, but it is bound to be an overestimate of how many people actually engaged with the pre-class material rather than bluffing their way through it. The smartPhysics system also gave students credit for completing the questions embedded in the multimedia presentations. I am not presenting those numbers here, but when I scan the gradebooks, students who received credit for doing their pre-class assignments always also received credit for completing the embedded questions. That said, it is possible to skip slides to get to those questions, so this doesn't mean they fully consumed the presentations. Based on reviewing their explanations each week (with varying degrees of thoroughness) and docking grades accordingly, I would estimate that maybe 1 or 2 students managed to bluff their way through each week without actually consuming the presentation, which translates to 2-3% of these pre-class assignments.

Sadaghiani reported that "78% of the MLM students completed 75% or more of the MLMs", where MLM refers to what I have been calling multimedia presentations. Colleagues of mine at UBC found (link to poster) that students self-reported reading their textbooks regularly in a course that used a quiz-based pre-class assignment (meaning that students were given marks for being correct, as opposed to just participating, and in this case were not asked to explain their reasoning). In that course, 97% of the students took the weekly quizzes, but there was a discrepancy between the number who took the quizzes and the number who actually did the preparatory reading.

With everything discussed here in mind, it seems that 80% or better is a good rule-of-thumb completion rate to expect for pre-class activities, and that one can do even better than that with some additional effort.


Student collision mini-projects from my summer 2013 comp-phys course

The last course that I taught at UFV before taking a job at UBC was an online Computational Physics course. I previously posted the mini-project videos from when I ran the course in the fall, and you can check that post to learn more about the context of these mini-projects. The overall level of creativity seems to have been a bit lower this time than last, which might be due in part to the online nature of the course; it was in person the last time the course ran. Last time, people would show up to the computer lab, see what others were working on, and the stakes would be raised. If I ran this type of course online again, I would have people submit regular progress videos so that there was a virtual equivalent of showing up in the lab and feeling the stakes rise.


Mathematica vs. Python in my Computational Physics course

This past year I taught two sections (fall and summer) of Computational Physics at UFV, which is quite a rare thing at a school where we typically run 3rd- and 4th-year courses every other year. The course was structured so that students got some initial exposure to both Python and Mathematica for physics-related computational tasks, and then, as the course went on, they were given more and more opportunities to choose between the two platforms when completing a task. This post looks at the choices made by the students on a per-task basis and a per-student basis. Out of a total of 297 choices, they chose Mathematica 62.6 ± 2.8% (standard error) of the time. Two things to note. First, there is a small additional bias toward Mathematica because two of the tasks are more Mathematica-friendly and only one of the tasks is more Python-friendly, as will be discussed. Second, some students came into the course with previous Mathematica experience or with a strong computing background in Java or C++, and those students usually gravitated toward Mathematica or Python, respectively. But some students did the opposite and took advantage of the opportunity to become much more familiar with a new-to-them platform, trying to do as much of their work as possible on the new platform.
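As a side note on the quoted uncertainty, here is a minimal sketch of the standard-error calculation for a proportion. The count of 186 Mathematica choices is inferred from the quoted 62.6% of 297 choices rather than stated explicitly, so treat it as an assumption:

from math import sqrt

n = 297                     # total platform choices across all tasks
k = 186                     # Mathematica choices (inferred from 62.6%, not stated)
p = k / n                   # sample proportion choosing Mathematica
se = sqrt(p * (1 - p) / n)  # binomial standard error of a proportion

print(f"{100 * p:.1f} +/- {100 * se:.1f}% (standard error)")  # 62.6 +/- 2.8%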

Task by task comparison

The general format of the course is that, for the first three weeks, the students are given very similar tasks that they must complete using both Python and Mathematica. Examples include importing data and manipulating the resulting arrays/lists to do some plotting, and basic iterative (for/do) loops. In each of those weeks there is also a third task (1.3 and 2.3 in Figure 1) in which they build on the initial tasks and can choose to use either Python or Mathematica. In subsequent weeks the requirement to use both platforms goes away and they can choose which platform to use.

Figure 1 shows their choices for the tasks in which they could choose. In week 3 they learn how to do animations using the Euler-Cromer method in Mathematica and in VPython. Although Mathematica works wonders with animations when using NDSolve, it is really clunky for Euler-Cromer-based animations. Task 4.1 is a mini-project (examples) in which they animate a collision, and since at that point they have only learned how to do this type of thing using Euler-Cromer, VPython is the much friendlier platform for the mini-project. But as you can see in the third column of Figure 1, some students were already quite set on using Mathematica. Conversely, for tasks 8.2 and 8.3, Mathematica is the much more pleasant platform, and only a few diehards stuck with Python for those.
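For readers who haven't met the method: Euler-Cromer updates the velocity first and then uses that new velocity to update the position, which keeps oscillatory and bouncing motion well behaved where the plain Euler update tends to gain energy. Below is a minimal, hypothetical sketch of an Euler-Cromer animation using the modern vpython module; it is illustrative only and not one of the actual course tasks:

from vpython import sphere, box, vector, rate, color

g = vector(0, -9.8, 0)                        # gravitational acceleration (m/s^2)
dt = 0.005                                    # time step (s)

floor = box(pos=vector(0, -0.1, 0), size=vector(10, 0.2, 10))
ball = sphere(pos=vector(-4, 5, 0), radius=0.5, color=color.red)
v = vector(2, 0, 0)                           # initial velocity (m/s)

while True:
    rate(200)                                 # cap the animation at 200 steps per second
    v = v + g * dt                            # Euler-Cromer: update the velocity first...
    ball.pos = ball.pos + v * dt              # ...then the position, using the new velocity
    if ball.pos.y < ball.radius and v.y < 0:  # crude elastic bounce off the floor
        v.y = -v.y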


Figure 1. Comparison of Mathematica and Python usage by task. Each column represents approximately 25 submissions, typically from 23 individual students and 2 student groups; there is some variation in the total counts from column to column due to non-submissions. Final projects were done individually, but within each of the two groups, members made the same platform choice for their final projects.

Student by student comparison

Figure 2 shows the breakdown of these choices by student or group. For the seven students who chose Python only once, that one choice was the mini-project; the two students who used Mathematica only once used it for the linear algebra exercises.


Figure 2. Comparison of Mathematica and Python usage on a student-by-student (or group-by-group) basis, ordered from maximum Mathematica usage to maximum Python usage. Variation in the total counts from column to column is due to non-submissions. Final projects were done individually, but within each of the two groups, members made the same platform choice for their final projects.

Student feedback

After the course was completed, I asked the students for some feedback on various aspects of the course and one of the questions I asked was

How did you find that the mix of Mathematica and Python worked in this course?

Approximately half of the students submitted feedback, and their responses were as follows:

Response | Count | Percentage
I would have preferred more emphasis on Mathematica | 1 | 14%
I would have preferred more emphasis on Python | 0 | 0%
The mix worked well | 6 | 86%

And their comments:

  • Kinda hard to choose one of the above choices. I felt like I wish I could’ve done more stuff in python but I was happy with how proficient I became at using Mathematica. So I think the mix worked well, though I kinda wish I explored python a bit more.
  • I think it was very valuable to see the strengths and weaknesses of both Mathematica and Python as well as to get familiar with both of them.
  • It was cool to see things done in two languages. I’m sure that people less enthusiastic about programming would probably have preferred one language, but I thought it was good.
  • Doing both to start followed by allowing us to pick a preferred system was great.

In closing

I ended up being quite happy with how the mix worked. I learned a ton in the process because I came in being only mildly proficient with each platform. I was also really happy to have the students come out of the course seeing Python as a viable alternative to Mathematica for a wide range of scientific computing tasks, and conversely getting to see some places where Mathematica really excels.


75 vs. 150

As previously mentioned, a significant component of my new job at UBC this year is curriculum design for a first-year cohort that will be taking, among other things, Physics, Calculus and Chemistry, and that will have English language instruction somehow embedded into the courses or the support pieces around them. These students are mostly prospective Math, Physics and Chemistry majors. An interesting discussion we are having right now relates to class size: specifically, class sizes of 75 versus 150.

To me, there are two main possible models for the physics courses in this program:

  1. Run them like our other lecture-hall first-year courses, where clickers and worksheets make up a decent portion of the time spent in class. In this case, 75 vs. 150 is mostly a difference in how personal the experience is for the students. Based on my own experience, I feel that 75 students is the maximum number for which an instructor putting in the effort can learn all the faces and names. With TAs embedded in the lecture, the personal attention students get in the lecture could be similar at the two sizes, but there is still the difference of the prof specifically knowing who you are.
  2. Rethink the space completely and use a studio or SCALE-UP style of classroom, where students sit around tables in larger groups (9 is common) and can subdivide into smaller groups when the task requires it. This would make 75 the only practical choice of the two sizes. This type of setup facilitates transitioning back and forth between "lecture" and lab, though it is not uncommon for "lecture" and lab to keep their own scheduled times, as is the case in most introductory physics classrooms.

Going with 75 students will require additional resources, or will require program resources to be re-allocated, so the benefits of the smaller size need to clearly outweigh this additional cost. My knee-jerk reaction was that 75 is clearly better because smaller class sizes are better (this seems to be the mantra at smaller institutions), but I got over that quickly and am now trying to determine specifically what additional benefits we can offer students if the class size is 75 instead of 150. I am also trying to figure out what additional benefits we could bring to the students if we took the resources needed to support class sizes of 75 and moved them somewhere else within the cohort.

What do you think about how to decide between these two numbers? Have you taught similar sizes and can you offer your perspective? I realize that there are many possible factors, but I would like to hear which ones might be important from your perspective.


Help me figure out which article to write

I have had four paper proposals accepted to the journal Physics in Canada, which is the official journal of the Canadian Association of Physicists. I will only be submitting one paper and would love to hear some opinions on which one to write and submit. I will briefly summarize what they are looking for according to the call for papers and then summarize my own proposals.

Note: My understanding is that the tone of these would be similar to articles appearing in The Physics Teacher.

Call for Papers

Call for papers in special issue of Physics in Canada on Physics Educational Research (PER) or on teaching practices:

  • Active learning and interactive teaching (practicals, labatorials, studio teaching, interactive large classes, etc.)
  • Teaching with technology (clickers, online homework, whiteboards, video analysis, etc.)
  • Innovative curricula (in particular, in advanced physics courses)
  • Physics for non-physics majors (life sciences, engineers, physics for non-scientists)
  • Outreach to high schools and community at large

The paper should be 1500 words maximum.

My proposals

“Learning before class” or pre-class assignments

  • This article would be a how-to guide on using reading and other types of assignments that get the students to start working with the material before they show up in class (based on some blog posts I previously wrote).

Use of authentic audience in student communication

  • Often, when we ask students to do some sort of written or oral communication task, we ask that they target the communication toward a specific imagined audience, but the real audience is usually the grader. In this article I would discuss some different ideas (some I have tried, some I have not) for giving student oral and written tasks authentic audiences: audiences that are the actual target audience and will actually consume those communications. This follows on some work I did this summer co-facilitating a writing-across-the-curriculum workshop based on John Bean's Engaging Ideas.

Making oral exams less intimidating

Update your bag of teaching practices

  • This would be a summary of (mostly research-informed) instructional techniques that your average university instructor might not be aware of. I would discuss how they could be implemented in small and large courses and include appropriate references for people who want to learn more. Techniques I have in mind include pre-class assignments, group quizzes and exams, quiz reflection assignments, using whiteboards in class, and clicker questions that go beyond one-shot ConcepTests (for example, embedding clicker questions in worked examples).

Your help

This is where you come in: provide me with a bit of feedback as to which article(s) would potentially be of the most interest to an audience of physics instructors that will vary from very traditional to full-on PER folks.


I have a new job at UBC

Dear friends, I am very excited to let you know that at the end of this week I will have officially started my new job as a tenure-track instructor in the Department of Physics and Astronomy at the University of British Columbia.

This is the department from which I received my PhD, so it is sort of like going home. The department has a great nucleus of Physics Education Research (PER) researchers, dabblers and enthusiasts, and, thanks mostly to the Carl Wieman Science Education Initiative, there is a large discipline-based science education research community there as well. I have a lot of wonderful colleagues at UBC and feel very fortunate to be starting a job at a new place that should feel quite comfortable from the moment I arrive.

A major portion of my job this coming year will be curriculum development for a new first-year international student college (called Vantage). I will be working with folks like myself from physics, chemistry and math, as well as with academic English language instructors, to put together a curriculum designed to prepare these students for second-year science courses. I will be teaching sections of the physics courses for Vantage College and bringing my education research skills to bear on assessing the program's effectiveness as it evolves over its first few years. Plus I will be teaching all sorts of exciting physics courses in the Department of Physics and Astronomy.

The hardest part about leaving UFV is leaving my very supportive colleagues and all of my students who have not yet graduated. Fortunately it will be easy for me to head back over the next couple of years to see them walk across the stage at convocation (and not have to sit on stage cursing that coffee I drank).

Stay tuned for some new adventures from the same old guy.


Looking at the number of drafts submitted for project reports in the Advanced Lab

In this post I take a quick look at the number of drafts of their project papers that students submitted in my January 2012 Advanced Lab course. This course set a minimum bar for the paper grades, and students were allowed to revise and resubmit as many times as needed to get there; on average, 3.22 drafts were needed. I looked at these numbers in order to communicate realistic expectations to students currently registered for my fall section of the course, and thought I would share them.

I am starting to prepare for my fall Advanced Lab course. Here is a quick overview of this course from a previous post:

This type of course, a standard course in most physics departments, is a standalone lab course without any associated lecture course. There is an amazing amount of variability from one Advanced Lab course to the next and they range in format from one experiment per week with everything already set up and cookbook procedures ready to be followed, to a single student-developed project over the entire term (or year!).

In my specific incarnation, we spend the first month doing some introductory activities to build up some foundational skills which are mostly related to data analysis and presentation. For the rest of the course pairs of students work on two month-long experimental physics projects. The students are guided to work on projects that can be viewed as being part of a larger research line, where they build on the work of previous students and future students will build on their work. Thus no two groups will ever perform identical experiments.

A major piece of the course is that they have to write a journal-style article to communicate the results of one of their projects. To help them practice revising their own writing and impress upon them that effective writing requires many revisions, I require that students earn a grade equivalent to a B on their paper according to this rubric, and are allowed to revise and resubmit as many times as needed to reach that threshold grade.

The overall grade for these papers was calculated as 25% from the first graded draft and 75% from the final draft. Students were allowed to submit an initial draft, which was not graded, on which I would spend a maximum of half an hour reading and providing feedback. Students were encouraged to have a peer read through the paper and provide feedback before submitting this initial draft. After reaching the threshold B-grade, they were allowed to resubmit one final draft. At some point in the revision process I also ran a formal process in which students provided each other with peer feedback on their papers.
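To make the weighting concrete with hypothetical numbers: a paper earning 70% on its first graded draft and 88% on its final draft would receive 0.25 × 70% + 0.75 × 88% = 83.5% overall.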

A quick summary of the numbers is in order. Of the twelve students, three gave up at some point before reaching the threshold B-grade on the journal-style article; those students were given only partial credit based on the last grade their paper received. Of the nine students whose papers reached the threshold B-grade, five submitted a final draft to improve their overall paper grade.

Of the nine papers that were accepted (met the minimum grade threshold of a B), five were revised at least one additional time. The number of drafts in the graph includes the initial ungraded draft, but does not include the final revision that five of the nine students submitted after their papers reached the B-grade threshold.

What is the take-home message here? Based on this system, students should expect to submit three or more drafts of a paper in order to meet the threshold grade.

This coming fall, I plan to adopt some new feedback strategies that take the focus off grammatical correctness and similar issues, in the hope of focusing more on the ideas in the papers. As part of this, I may move to a reviewer-report style of feedback (for example, this is the one for AJP) and away from detailed rubrics, but I haven't quite made up my mind yet. My grading philosophy in the course this fall will be that a student's course grade represents the quality of the recommendation I would give them in a reference letter based on their work in the course, and I want to do my best to make sure all of the individual components are assessed in ways that match this overall grading philosophy.

