Ugh. I just had one of those moments where I lost a bunch of what I had written. I recovered what I could, but I don’t feel like re-writing it all, so instead I will treat you to a fairly short post.
My interest in and engagement with student writing comes mostly from my use of the journal article genre for lab reports in my Advanced Lab course. Through attending a Writing Across the Curriculum workshop last month, I was invited to participate in planning a workshop built around Bean’s book “Engaging Ideas: The Professor’s Guide to Integrating Writing, Critical Thinking and Active Learning in the Classroom”. I have been skimming some parts of the book and reading other parts very carefully, and along the way I have been reflecting on the places where the student journal articles intersect with ideas from the book. What is proving to be very interesting is the grey area where I can debate with myself (and at some point with others) about where these intersections might exist, or perhaps should exist.
There is a lot of very practical information in this book. He has chapters on using rubrics, on handling the paper load, and on writing comments on students’ papers. I haven’t read those yet, but in reading through some of the earlier chapters, I came across two things that he wrote or referenced that struck a chord with me.
…many teachers read student essays with the primary purpose of finding errors, whereas they read their own colleagues’ drafts-in-progress for ideas
…for many college writers, the freedom of an open-topic research paper is debilitating.
My approach to the student journal articles thus far has mostly been that they are an information dump meant to follow the guidelines of the genre. As you can imagine, this is vastly different from Bean’s approach to student writing. I am interested to see where I will end up after finishing the book and after having a chance to interact more with the colleagues with whom I am planning this workshop (as well as the workshop attendees). Although it is possible that I will continue to feel that the majority of the book does not apply to my situation, the conflicting ideas whirling around in my brain suggest that I will experience a significant shift in how I approach student writing. I originally had a lot more to say about these things, but will leave it at that for now.
This term I eliminated the weekly homework assignment from my calc-based intro physics course and replaced it with a weekly practice quiz (not for marks in any way), meant to help them prepare for their weekly quiz. There’s a post coming that discusses why I have done this and how it has worked, but à la Brian or Mylene, I think it can be valuable to post this student feedback.
I asked a couple of clicker questions related to how they use the practice quizzes and how relevant they find the practice quiz questions in preparing them for the real quizzes. I also handed out index cards and asked for extra comments.
Aside from changing from homework assignments to practice quizzes, the structure of my intro course remains largely the same. I get them to do pre-class assignments, we spend most of our class time doing clicker questions and whiteboard activities, and there is a weekly two-stage quiz (individual then group). I have added a single problem (well, closer to an exercise) to each weekly quiz, where in the past I would infrequently ask them to work a problem on a quiz.
Clicker Question 1
Clicker Question 2
Just from a quick scan of the individual student responses on this one, I saw that the students with the highest quiz averages (so far) tended to answer A or B, whereas the students with lower quiz averages tended to answer B or C. I will look at the correlations more closely at a later date, but this strikes me as a really interesting piece of insight.
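When I do sit down with the data, a quick pass in Python along these lines should do the trick (a rough sketch only; the file name and column names are placeholders, not my actual data):

```python
# Rough sketch: do quiz averages differ across clicker response options?
# "clicker_feedback.csv" and its columns are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("clicker_feedback.csv")  # columns: student, quiz_avg, q1, q2

# Average quiz mark (and head count) for each response option on question 2
print(df.groupby("q2")["quiz_avg"].agg(["mean", "count"]))

# One-way ANOVA across the response options as a first-pass significance check
groups = [g["quiz_avg"].to_numpy() for _, g in df.groupby("q2")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```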
Additional Written Feedback
Most of the time I ask the students for some feedback after the first month and then continue to ask them about various aspects of the course every couple of weeks. In some courses I don’t do such a great job with the frequency.
Usually, for this first round of feedback, the additional comments are dominated by frustration toward the online homework system (I have used Mastering Physics and smartPhysics), requests/demands for me to do more examples in class, and some comments on there being a disconnect between the weekly homework and the weekly quiz. As you can see below, there is none of that this time. The practice quizzes, the inclusion of a problem on each weekly quiz, and perhaps the provided learning goals, seem to do a pretty good job of communicating my expectations to them (and thus minimizing their frustration).
Student comments (that were somewhat on topic)
- I feel like the practice quizzes would be more helpful if I did them more often. I forget that they have been posted so maybe an extra reminder as class ends would help.
- The wording is kind of confusing then I over think things. I think it’s just me though. Defining the terms and the equations that go with each question help but the quizzes are still really confusing…
- Curveball questions are important. Memorize concepts not questions. Changes how students approach studying.
- The group quizzes are awesome for verbalizing processes to others. I like having the opportunity to have “friendly arguments” about question we disagree on
- I love the way you teach your class Joss! The preclass assignments are sometimes annoying, but they do motivate me to come to class prepared
- I enjoy this teaching style. I feel like I am actually learning physics, as opposed to just memorizing how to answer a question (which has been the case in the past).
- I really enjoy the group quiz section. It gets a debate going and makes us really think about the concepts. Therefore making the material stick a lot better.
Last thought: With this kind of student feedback, I like to figure out a couple of things that I can improve or change and bring them back to the class as things I will work on. It looks like I will need to add a weekly feedback question asking them specifically about areas of potential improvement in the course.
The lab course (digital electronics) that I am teaching right now uses a checkpoint system where students call me over to show me that their circuit is working as desired or that they have sorted out the answer to some conceptual or application question. Quite often, the raison d’être of a given checkpoint is to provide an excuse for me to have a specific conversation with each group of students or to provide a time for telling. And sometimes the checkpoints evolve into these things as I realize that there is a key idea that they are having trouble with.
In terms of keeping good notes, I want to track two things: the conversations I want to have with the students at each checkpoint, and the revisions I want to make to the labs. My labs are written using Word 2010 (keep your judgements to yourself, I have my reasons). My solution, which I just sorted out this morning, is to use the (balloon) comments in Word to keep track of the revisions I want to make, and to use my new Word kung-fu to add instructor notes as hidden text (which I can choose to globally show or hide). That way I always have a single document that I can give to students (sans instructor notes) but also use myself (with instructor notes).
To make a student version of my lab, I go back to File >> Options >> Display, uncheck the hidden text settings, and then export the document to PDF.
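If I ever want to script that step instead of clicking through the dialog, something like this should do it (a sketch using the third-party python-docx package rather than my actual Word workflow; the file names are made up):

```python
# Sketch: produce a student copy by deleting runs flagged as hidden text
# (the w:vanish font property). File names are hypothetical.
from docx import Document

doc = Document("lab3_instructor.docx")

for para in doc.paragraphs:
    for run in list(para.runs):
        if run.font.hidden:  # instructor-notes runs are marked hidden
            run._element.getparent().remove(run._element)

doc.save("lab3_student.docx")
```

One caveat with this route: it only walks top-level paragraphs, so notes tucked into tables or text boxes would need extra handling.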
Derek Bruff had a post today talking about digital distractions and wondering briefly at the end about note taking. I wrote a comment on his post, but it is advice that I want to make sure I follow myself, so I am posting it here for my own record.
In the interactive engagement world, I think that note taking is one of a suite of reflective/feedback practices that we can help our students with. After a typical “one correct answer” clicker question, you will have some combination of students that were correct/incorrect for the right/wrong reasons. After some sort of follow-up sequence (revoting, class-wide discussion, instructor explanation), the students have now all heard the correct answer. But we know that some of them still don’t understand the answer well enough to do anything else with it, so it is time to get them to do some reflection or feedback. Options include writing their own understanding down in their notes, answering a follow-up clicker question, collaborating with a group to answer a question on a worksheet, etc. I see note taking as one of many options in this type of cycle, and if we are not getting them to do some other type of feedback/reflection activity then we should at least be giving them a minute or two to reflect in their notes. Of course, I do a terrible job of this AND many students are highly reluctant to take notes, so, in general, I prefer the other types of reflection/feedback activities.
This is part 2 (part 1 here) of my post discussing feedback I got from a couple of my students after the conclusion of my Advanced Lab course. This went long again so it looks like there will have to be a part 3.
The 8-hour work-week and filling out time sheets
The combination of this having been only my second time teaching the course and my policy that student experiments always build on, but never repeat, the projects of previous groups made it very hard for me to figure out projects of appropriate scope. So my solution was to ask that the students put a minimum of 8 hours each week into the course, and then I had to make sure that the projects consisted of sequences of achievable milestones. With that in place, I was happy to accept however far along each group made it with their projects, as long as those 8 hours each week were actually spent working productively on the course.
So I got them to fill out and submit time sheets. I was worried that they would perceive these as being too strict or beneath them or terrible in some other way.
Interview feedback: No complaints. The time sheets were fine and did a good job of encouraging them to dedicate an appropriate amount of time to the course even when they felt like doing something else. Yay!
Future plan: It looks like I will continue to use these time sheets. The thoughtful-assessment part of my conscience doesn’t really like having to use these, but for the most part these students have never had to budget time for a longer project and they really need the scaffolding so that they don’t fall on their faces.
Oral and written dissemination
One of my major guiding principles in this course was (and continues to be) to try to make sure that the communication of their project was directed toward authentic audiences. For the weekly group meetings, they were bringing me up to speed on their project as well as informally presenting their work to people with less project-specific expertise (the rest of their classmates). Since projects are always meant to build on previous projects, their formal reports are going to be part of the literature used by the next group building on that same project. Their formal oral presentations were targeted at peers that lacked project-specific expertise (again, the rest of the class).
The first time I taught the course, I had the students write journal-style articles (each partner wrote one). There were two problems. First, the partner that was not writing the article ended up contributing very little to the analysis and usually didn’t dig deep enough into how everything worked from either the theoretical or the experimental side of things (which is part of why I added oral assessments to the course). Second, the background and theory sections often lacked an authentic audience for multiple reasons: (A) they were often vaguely repeating the work from a source journal article; (B) if they were building on a previous group’s work, writing their own background and theory sections would be mostly redundant; and (C) the topics were often deep enough that it was not reasonable to expect them to develop the project-specific expertise to do a very good job on these sections.
So in this second incarnation of the course I decided to split the journal article up into two pieces, one for each partner: a LaTeX “technote” and a wiki entry for the background and theory. The idea was that future groups could add to the wiki entry, which would eliminate the redundancy of recreating essentially the same theory and background sections for each new group working on that research line. With the theory and background stripped out of the journal article, all I asked of the technote was that all equations and important terminology be clearly defined within it; no other theory was needed. I thought this would have the added benefit of having both partners invested in a writing task for each project. But the whole thing did not work very well. The technotes worked fine, but the wiki entries ended up being so disconnected from the technotes that partners often didn’t even use the same notation between the wiki entry and the technote.
It is worth noting that, between a time crunch and the technote+wiki combination not working as well as I would have liked, I got the students to team up and write something a bit closer to a journal article for their second project.
In addition to their technote and wiki entry, each student gave a 12-15 minute formal oral presentation on one of their projects (each partner presented on the project for which they wrote the technote) instead of writing a final exam.
One of the things I wanted to discuss in the interview was what sort of improvements we could make to this dissemination process. I had some ideas in my head to discuss, such as poster presentations and articles written for a lay audience.
Interview feedback: for each project, one person would write a journal article and the other would prepare and present a poster at a research symposium. The logistics of this still need to be worked out, and we discussed a number of combinations of dissemination methods before coming to a consensus on this specific one. The interviewed students saw communicating science to a lay audience (the research symposium attendees) as an important thing for them to practice.
Future plan: I really like how this combination makes each member of the group responsible for communicating all the important pieces of their project. Targeting the two pieces at different audiences means that the partners will be able to work together while still ultimately having to produce their own non-redundant (relative to each other) work.
There are a lot of logistical issues to work out here. Our university has a student research day a couple of weeks before the end of our winter term, and that would be a perfect place for them to present their posters. The problem is that, with a proper revision cycle for their poster, they would essentially have to complete both projects a month before the end of the term. I’m not certain I can make that work. We could always have our own research symposium, but it seems ideal to get involved with an existing one that already has an audience.
The other piece here is that I will probably ask them to keep the journal articles closer to the technotes than a real journal article (meaning bare-bones theory and background).
A vague notion of a plan dawned on me while proof-reading this post. I could probably get the timing with the student research symposium to work if I reduce the scope of each project by roughly a week and then in the final month of the course I could ask each group to revisit one of their experiments and push it a bit further forward. There are all sorts of problems with this plan, such as how they will disseminate this additional work and the experimental apparatus probably having been torn down, but it is still something to consider.
The timing of peer review for their lab reports
Each technote and journal article was allowed as many drafts as needed to get the paper up to “accepted for publication with minor revisions” standards (a B grade) based on a very picky rubric. After that, they were allowed one final draft if they wished to try to earn an A grade. A typical number of drafts was 3 or 4, but there were exceptions in both directions.
For the first report, I had each person do a peer review of another student’s submission. One of the questions I had on my mind for the feedback interview had to do with the timing of the peer review in the draft cycle. The first draft of the first paper is always an extremely rough thing to slog through, even when written by very strong students. Thus, asking them to do peer review on a first draft is asking them to do something very painful. But having to critically apply a rubric and provide constructive feedback does wonders for getting students to pay much better attention to the specifics of the writing assignment, and the sooner that happens in the course, the sooner I see those improvements in their writing.
Interview feedback: not too sure if it is best to do peer review on a first or second draft. We discussed this for a bit, decided we could see both options as equally valid, and never came to any real conclusion.
Future plan: dunno yet. I could sign my course up for the Journal of the Advanced Undergraduate Physics Laboratory Investigation (JAUPLI) tool. They have peer review calibration tasks and the added benefit of anonymous peer reviewers from other institutions, but since JAUPLI is still small, the timing all has to work out magically well.
This is a quick follow-up to my previous post on my research related to the effect of immediate feedback during exams.
I love it when I’m going through my “to read” pile of papers and realize that there is something in there related to one of my own research questions. There is a paper from last year (Phys. Rev. ST Phys. Educ. Res. 7, 010107 (2011)) by Fakcharoenphol, Potter and Stelzer from the University of Illinois at Urbana-Champaign that looked at how students did on matched pairs of questions as part of preparing for an exam.
There’s a lot of interesting stuff in this paper, but the result most relevant to my own research is the following. They developed a voluntary web-based exam-preparation tool where students would do a question, receive feedback as either just the answer or a full solution, and then do a matched question where they received the other type of feedback. They divided the students into four groups so that every student had equal exposure to answer vs. solution feedback across the questions. For each matched question pair (let’s call them questions A and B), the grouping allowed half of the students to answer question A first and the other half to answer question B first. Within each of those groups, half of the students received answer-only feedback on their first question and the other half received solution feedback on their first question.
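To make sure I have their design straight in my head, here is my reconstruction of the counterbalancing (a sketch, not the authors’ code; the roster size is made up):

```python
# Sketch of the 2x2 counterbalanced design: question order crossed with
# the feedback type received on the first question of each pair.
import itertools
import random

students = [f"s{i:03d}" for i in range(200)]  # hypothetical roster
random.shuffle(students)

conditions = list(itertools.product(["A first", "B first"],
                                    ["answer-only first", "solution first"]))
assignment = {s: conditions[i % 4] for i, s in enumerate(students)}
```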
They called the first question of a pair answered by the students their baseline, and the students scored 58.8% ± 0.2% on those questions. Keeping in mind that they had many pairs of questions, the average performance of the students on the follow-up questions was 63.5% ± 0.3% when only the answer was supplied after answering the first question, and 66.0% ± 0.3% when the full solution was provided. There are statistically significant differences between all of these numbers, but the gains from receiving the feedback are not overly impressive. More on this in a moment.
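As a sanity check on “statistically significant,” here is the back-of-the-envelope version using their quoted means and standard errors (a sketch that treats the two percentages in each comparison as independent, which is a simplification for paired data):

```python
# z-scores for the differences between the quoted percentages above.
import math

def z_score(p1, se1, p2, se2):
    """z for the difference between two percentages, given standard errors."""
    return (p2 - p1) / math.sqrt(se1**2 + se2**2)

print(z_score(58.8, 0.2, 63.5, 0.3))  # baseline vs. answer-only: z of about 13
print(z_score(58.8, 0.2, 66.0, 0.3))  # baseline vs. solution: z of about 20
print(z_score(63.5, 0.3, 66.0, 0.3))  # answer-only vs. solution: z of about 6
```

With standard errors that small, even these modest gains are wildly significant, which is consistent with their claim.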
Back to my own research. During an exam, I used matched pairs of questions and gave the students feedback on their first question (in the form of just the answer) before they answered the second question. I saw a statistically significant improvement from the first question (65.3% ± 6.8%) to the second one (77.5% ± 6.0%), but due to low statistics there was not much to conclude other than that it was worth pursuing this research further. The results from the UIUC folks set the magnitude scale for the effect I expect to see once I am able to improve my statistics (58.8% ± 0.2% to 63.5% ± 0.3% due to answer-only feedback).
I’m really not certain whether I expect to see less, equal, or more improvement for my “feedback during an exam” design compared to their “feedback while preparing for an exam” design. In their design, the level of preparation of the students using the study tool is all over the map (they look at this in more detail in their paper), so it is not known if the learning effect due to the feedback depends on when during their overall study plan they were using the tool (e.g., as a starting point for their studying vs. as a check on their understanding after having done a bunch of studying). And since both designs use multiple-choice questions (but in preparation vs. assessment conditions), I am not certain how guessing plays into everything.
I have to admit that if my future research into the effect of feedback during an exam finds only a 5% gain from this intervention (like UIUC saw for their answer-only feedback), I doubt that I would continue with the practice.