Summary of talk: 6 Things Scientists Can Learn From Science Journalists

“Never Say Diagonal of the Covariance Matrix: 6 Things Scientists Can Learn From Science Journalists” is an excellent one-hour, thirty-five-minute talk given by Maggie Koerth-Baker (science editor for Boing Boing). As I mentioned in my most recent Weekly, I took some notes that I am happy to share because not everybody will make the time to sit down and watch this talk.

Please note that these are the notes I took as I watched (and paused) the talk, and they are meant to summarize her talk as best I could. Things in quotes were copied down verbatim; everything else is some delightful combination of what she said and what I thought she meant. If I missed or misrepresented any of her points, please let me know; I am more than happy to fix my mistakes.

In the main points below, a reference to “you” refers to the scientist being interviewed by a journalist or otherwise trying to communicate with the public about science:

1. “Show, don’t tell.”

  • Turn it into a story.
  • Anecdotes aren’t data, but they do make data memorable.
  • Give the journalist good analogies because your analogies are going to be far more accurate than ones that the journalist would make up.
  • Use “show, don’t tell” with the general public to counteract the pseudoscientists, who are already doing this to connect memory and emotion.

2. “Don’t just talk…ask.”

  • Three questions scientists should always ask journalists:
    • “Can I see the story before you print it?”
    • “Can you send me questions ahead of time?”
    • The third is actually three questions in one, meant to probe how technical you should be and to tip you off to mistakes the journalist might make: (3a) “What got you interested in my work?”, (3b) “What have you read so far?”, and (3c) “Who else have you spoken with?”
  • You should also talk to the general public and ask them questions. Good ways to do this are blogging about science and having more interaction between scientists and the general public at public presentations (instead of scientists on one side of the room and lay people on the other after the talk).

3. Lay people know more (and less) than you think.

  • Scientists will learn that lay people know more than you think and are each an expert in their own thing, which can sometimes end up complementing your research.
  • Scientists will also learn that lay people know less than you think: what you consider “elementary concepts” may never have been covered in typical schooling. She stresses the importance of communicating ideas like what exactly peer review means, or the scientific definition of a theory, every time you communicate with the public about science, not just when discussing publicly controversial contexts such as climate change and vaccination. Otherwise the public will “think that those basic scientific ideas are just about ass-covering” (very well put, in my opinion).

4. Not everything is news.

  • Not every discovery or every paper needs to end up in the newspaper because what is important to you and what is important to the general public are not necessarily the same. People want to know about really important discoveries, but don’t need to know every tiny thing that happens.
  • What can you do to write about your research if you don’t have something that is “news”? You can make it “evergreen”, which means making it timeless and not tied to any specific event, and you can “find an angle”, which means connecting a simple fact to a bigger picture.
  • No matter what your field, there are topics of interest to the general public.
  • Fantastic line: “Science is bigger than single discoveries and if we can make people understand that they are going to trust scientists a lot more and are going to be a lot more interested in science.”

5. Nobody is critical enough of their own work.

  • Hey, this is why peer review exists. She gives an example of being overenthusiastic about your own work in a press release. She also talks about how poorly understood the time-frame is for a discovery to make it from basic science to the public sector (and often discoveries never make it there at all).
  • She suggests attending public talks given by people outside your field and applying the questions you might ask, and the skepticism you might have, to your own research to help filter what you communicate to the public.
  • “Don’t just pontificate, curate” – don’t just talk about your own work, talk about how the cool work of others (including those not at your institution) is related to your work. This will help build your credibility and help people better understand how your work fits into the bigger picture.
  • You can contribute to making science journalism better by being the first one to critique yourself when talking to a journalist: anticipate the response of other scientists and respond to those potential critiques. She reminds us that in the current economic climate many journalists writing about science are not science journalists and have no scientific background at all; they don’t have the background to know that they should be looking for other scientists in the field to question or comment on the paper.

6. Mistakes are lasting, but pedantry kills.

  • It’s OK to dumb it down.
    • “Sacrificing storytelling and understandability for extreme accuracy is often just as bad as sacrificing accuracy for the sake of storytelling.”
    • “If you are not writing about your science to the general public at a 6th grade reading level, you are probably doing something wrong and not enough people are understanding what you are talking about.”
  • It helps to use more understandable analogies. Sacrifice some of the nuance to make it more understandable. If you are being too pedantic, you are going to miss out on opportunities to get people excited and get them to want to find out more.

Her summary: know your audience, know your message, and make sure to match those up so that people understand what you are saying.

Other notes

  • Early on she mentions the book “The Matchbox That Ate a Forty-Ton Truck”, a physics-for-lay-people book by Marcus Chown. I had not previously heard of this book, but am always on the lookout for this type of book.

The Science Learnification Weekly (Feb 27, 2011)

This is a collection of things that tickled my science education fancy in the past week or so.


Pseudoteaching

This idea was the brainchild of Frank Noschese (Action-Reaction Pseudoteaching page) and John Burk (Quantum Progress Pseudoteaching FAQ) and was inspired by Dan Meyer’s pseudocontext posts. Early in the week a whole bunch of posts dropped at the same time discussing the idea of pseudoteaching:

Pseudoteaching is something you realize you’re doing after you’ve attempted a lesson which from the outset looks like it should result in student learning, but upon further reflection, you realize that the very lesson itself was flawed and involved minimal learning.

For me, pseudoteaching (#pseudoed on Twitter) seems to show up most often when an activity, demo, derivation or mini-lesson goes down and afterward I realize that there’s no way the students gained any insight into the topic at hand from what just happened in class. I think a lot of typical physics demos (I can’t really comment on other disciplines) fall into this category, with the monkey-and-the-hunter demo being an example of something I have tried to show a few times, always left wondering afterward what on Earth I was expecting them to learn from it. Pretty much any “conceptual” question I have ever seen related to the concept behind that demo ends up being a pure and simple recall question that in no way tests their understanding of the independence of horizontal and vertical motions. I ramble a bit more about this in the form of comments here.

What can scientists learn from science journalism (and vice versa)?

These were two different pieces of internet consumables brought to my attention on Twitter.

“Never Say Diagonal of the Covariance Matrix: 6 Things Scientists Can Learn From Science Journalists” is a one-hour, thirty-five-minute talk given by Maggie Koerth-Baker (science editor for Boing Boing). I’m about half-way through the talk and it is great so far. Since it is so long, I don’t expect that everybody will fit watching it into their schedule, so I posted my notes as a summary of the talk. The #scichat daily linked to this post discussing the talk, which lists the six main points.

“The scientist-journalist divide: what can we learn from each other?” is a post by Anne Jefferson which contrasts the first lines of features discussing climate change as written by a scientist and by a journalist. Some good food for thought on writing better science blog posts in here.

Flipped/Inverted classrooms in higher ed

I stumbled across a few inverted classroom implementations (ranging from barely inverted to very inverted) this past week. I’m in the middle of writing my own blog post about how I use pre-lecture reading assignments in my introductory Physics courses and how I’ve managed to get to a point where a decent fraction of the students complete these reading assignments. To the links…

“Flipped SBG with voice so far”: Andy Rundquist talks about his experience as a first-time Standards-Based Grading implementer. More to the theme of this little section, he talks about how he uses screencasting to put mini-lectures online for the students to consume before coming to class. He has multiple posts on how he uses screencasting this way.

“How the inverted classroom saves students time”: Robert Talbert’s post title is self-explanatory.

“Learn before Lecture: A Strategy That Improves Learning Outcomes in a Large Introductory Biology Class,” M. Moravec et al., CBE Life Sci Educ 9(4): 473-481, 2010. This journal article was brought to my attention by my old reading group (and seems to be available for free). I will probably write a short post on this paper, but I will sum it up here. They off-loaded four to five slides’ worth of content from three separate lectures over the term for the students to consume before coming to class. This gave them a bit more time in class to get students to grapple with higher-level concepts related to this material, which resulted in an average improvement of 21% on six exam questions compared to six similar questions covering the same material on previous years’ exams. They were already using active-learning exercises, such as clicker questions, small-group problem solving and interactive demos, in all their lectures, so as far as I can tell the only real change here is that they got the students to spend more time on task by having them work through the low-level stuff before coming to class. This additional time on task makes it not at all surprising that they saw improved learning outcomes in those topics. In the end I feel like I am missing the take-home message of this paper.

Video abstracts being published by the Institute of Physics

John Burk starts a conversation about how video abstracts for IOP-published papers could be used in the classroom.

How I use clickers in my courses

Note: This post was originally part of a post about getting as many students as possible involved in coursewide discussions supported by clicker questions. That post quickly grew way too large, so I decided to start with a post on how I use clickers in my courses to set up the later post on coursewide discussions.

I have used clickers in six of my courses since 2009: five were intro-level Physics and the other was a 3rd-year (that’s Canadian for Junior-level) Quantum Mechanics course. I am at a point where I feel like I really get how (I like) to use clickers in my classroom thanks to practice, reflection and helpful resources along the way (Mazur’s PI book, CWSEI resources, Derek Bruff’s blog and book). I have collected a ton of questions (my favorites: Mazur’s PI questions, CU-SEI, OSU PER) and for my most commonly taught courses I now know what sort of response distributions to expect for different questions and can use this to move the class in different directions. I have also developed the salesmanship needed to generate student buy-in (“the research shows that teaching method X will help you learn more”), which makes everything a lot easier.

There are a ton of resources out there discussing the why and how of using clickers, so I won’t go into it here. The resources I listed above are some good starting points.

Modified Peer Instruction:

Most of the time I use a flexible/lazy/modified version of Mazur’s Peer Instruction where I get the students to initially discuss the question at their table (usually 3 students) and then vote. If there are roughly 40-70% correct answers, I get them to find somebody not at their table who voted differently than them and discuss their answer with that person. I usually don’t show them the histogram after the first vote, but sometimes I will if two or more answers have a similar vote count and I think that seeing the distribution will help guide their discussion by focusing it on only a couple of choices. Then I get them to revote. Either way, once I get over roughly 70% correct answers I will tell them that most of them agree and then solicit student explanations for their answers.
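The voting flow above can be sketched as a tiny decision function. This is purely my own illustration of what I describe in the paragraph, not code I actually use in class; the function name and the below-40% branch are my assumptions (the paragraph doesn’t say what happens when fewer than roughly 40% answer correctly):

```python
def next_step(fraction_correct):
    """Decide the next move in this modified Peer Instruction cycle.

    The ~40-70% band and the ~70% cutoff come from the description above;
    everything else (names, the low-vote branch) is my own illustration.
    """
    if fraction_correct > 0.70:
        # Most of the class agrees: acknowledge it and solicit explanations.
        return "tell them most agree, then solicit student explanations"
    elif fraction_correct >= 0.40:
        # Cross-table discussion with someone who voted differently, then revote.
        return "pair across tables with a different voter, then revote"
    else:
        # Not covered in the description above; left as a placeholder.
        return "below the discussion band (not specified)"
```

For example, a first vote with 55% correct lands in the cross-table-discussion-then-revote branch.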

Common types of questions that I use:

Overall, the most common type of question I use is what Mazur calls a ConcepTest: a question that tests the application of one or more concepts and has only one correct answer. Typically the ConcepTests from Mazur’s book are too challenging for my students to answer correctly without some bridging questions. Fortunately I came across the OSU PER group’s clicker question sequences: sequences of 3-5 conceptual questions, each applying the same concept, that start with a relatively easy application and build toward a challenging question. The challenging questions tend to be of similar difficulty to Mazur’s questions and sometimes actually are Mazur’s questions.

Some of the other most common types of questions that I use are discussed briefly below. Like the clicker question types you wouldn’t put on a test, discussed by Derek Bruff, many of these question types wouldn’t make much sense as a multiple-choice question on an exam, but they have their own specific purposes in my classroom:

  • Predict-what-will-happen questions before doing a demo – Based on Sokoloff and Thornton’s Interactive Lecture Demonstrations, I set up and explain the idea behind a demo and then get them to predict what will happen when I run it. It is a well-known issue that students don’t always see what we want them to see when we show a demo, and they will even misremember what they saw in a way that matches their existing conceptual (mis)understanding of the phenomenon being demonstrated. Basically, if they have to flex a bit of mental muscle predicting what will happen in the demo (the clicker question), they will be better primed to interpret the results of the demo correctly and revise their conceptual understanding appropriately.
  • Clicker-based examples – This is my hybrid of working an example on the board and asking them to work through an example-type problem on whiteboards. I give them a reasonably challenging example and ask them to work on it in groups with whiteboards. I develop clicker questions to help them work through each of the critical steps in the problem and then leave them to work out the more trivial details leading up to the next major step.
  • How many of the following apply? – This type of question is usually meant to not have *ONE* correct answer and is meant to provoke discussion. I first came across this type of question in an AJP article by Beatty et al. Their example was to identify the number of forces being exerted on a block being pulled up a rough inclined plane while the block was also attached to a spring. There are multiple correct answers because, among other reasons, you can treat the normal and friction forces as a combined reaction force. Ambiguity rules here!
  • Clicker-assisted derivations – I used these a lot when I taught 3rd-year Quantum Mechanics and they saved me from the drudgery (and their boredom) of my working through long derivations on the board. These are similar to the clicker-based examples in that I use clicker questions to get THEM to work through the critical steps of the derivation. These questions either ask them to determine the next step in the derivation (when the textbook is “kind” enough to leave steps out) or ask them to decide on the reasoning that leads from one step in the derivation to the next. I would typically work through the derivation, but use these clicker questions to get them to really pay attention to the critical steps in the derivation.

I’m also planning to do a post where I will discuss most of these question types in more detail as well as provide some examples that I have used in class.


So far I have only ever given participation marks for clicker questions. Answer some fraction (it has varied from course to course) of the questions in a given lecture and you get a participation point. It doesn’t matter if you get the clicker questions right or wrong; you still get the participation point at the end of the day. I usually let them miss up to 10% (rounded up) of their participation points/lectures and still get full clicker participation marks in their overall grade. My courses max out at approximately 36 students, so it is very easy for me to wander and do my best to make sure everybody is putting in an honest effort to think about the questions and answer them. I have made the clicker participation points count for between 2% and 5% of the final grade. In my most recent course (intro calculus-based E&M) only 8 of the 37 students DIDN’T get full clicker participation marks, and the average clicker participation mark was 95.2%.
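The miss-up-to-10% rule above can be written down as a small formula. This is just one way to formalize it; the function name is made up for illustration, and I’m reading “10% (rounded up)” as a ceiling on the number of missable lectures:

```python
import math

def clicker_participation_mark(lectures_with_point, total_lectures):
    """Participation mark under the scheme described above (my formalization).

    A student may miss up to 10% of lectures (rounded up) and still get
    full marks, so the denominator is the number of required lectures.
    """
    allowed_misses = math.ceil(0.10 * total_lectures)
    required = total_lectures - allowed_misses
    return min(1.0, lectures_with_point / required)
```

For a 36-lecture course this allows 4 missed lectures (the ceiling of 3.6): earning a point in 32 or more lectures gives full marks.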

Last thoughts:

I mention in this post at least two other posts that I hope to write to follow this one. But there is another: the mention of Interactive Lecture Demonstrations always reminds me that I would like to do a post about the Physics Education Research community slowly moving away from the Elicit-Confront-Resolve type of questions that are central to Interactive Lecture Demonstrations. In my experience, students get tired of being asked questions where they know that their intuition or current understanding is going to give them the wrong answer. It seems that the work being done on measuring and improving student learning attitudes toward Physics (measured by CLASS and MPEX) is leading us away from Elicit-Confront-Resolve pedagogies.

I have also used coloured cards instead of clickers, and I prefer clickers because, among other reasons, they let me keep track of how the voting went (useful in many ways) and they preserve student anonymity. If you’re interested, Derek Bruff has a post where he discusses Mazur and Lasry’s paper comparing flashcards to clickers in terms of student learning.