How to Write and Evaluate Effective Questions: Best Practices in Peer Instruction


One of the most frequently asked questions among Peer Instruction Network members (PINms) is “How do I write good questions?”

This ubiquitous question is posed across the spectrum of Peer Instruction implementations – from expert to novice users, from faculty to instructional designers, among different disciplines, and within varying institutional types.

PINm David Vakil, an expert Peer Instruction user who teaches Astronomy at El Camino College, asks, “How do you write good questions?” Matthew Kaplan, who helps faculty implement Peer Instruction at the University of Michigan, reports that a common question among faculty trying to use PI is: “How do you develop good questions?” Jill Ronstadt, who teaches AP Biology at Lutheran High School of Orange County, wants “ideas and tips on how to write good questions for the in-class time.”

When I tried Peer Instruction for the first time in my graduate seminar on educational theory, I had no existing questions to work from. So, I simply looked at some examples from other disciplines, identified key concepts for that class period, wrote questions and then tried them out in class. I also used students’ prior work and their responses to Just-in-Time Teaching questions to help give me ideas.

Most of these questions were immediately effective: comparing the pre- and post-discussion votes, I observed some learning gains right away with almost every question.

One of the best tips I got from Eric for developing new questions was to give open-ended prompts in class and then use students’ responses as the answer choices: pose an open-ended question, collect possible answers from the students, write those choices on the board, and then use clickers for voting and Peer Instruction.

In his first semester using PI, with questions he used for the very first time, Eric observed a doubling of the gain in students’ conceptual understanding on the Force Concept Inventory (FCI). Eventually, by improving the questions over time, he tripled the gain.

Simply put, writing effective questions is easier than it might seem. More often than not, you will observe gains from the very act of engaging your students in the mental tasks of metacognition and retrieval practice, followed by peer discussion. The questions will, of course, improve once you get feedback from students and make tweaks.

Still, tips are helpful. Before we dig into this, note that the Peer Instruction Network has some resident experts: Derek Bruff, whose book Teaching with Classroom Response Systems has extensive tips as well as case examples on creating clicker questions, and Stephanie Chasteen, who recently blogged on this very question at sciencegeekgirl.

This guide from CU-Boulder also has some great tips.

For our take, check out the video or read the transcript posted below for tips and tricks for writing effective questions in Peer Instruction. In particular, we include easy strategies for using student data to evaluate which questions were the most and least effective at promoting student understanding.


How To Write Effective Questions: Video Transcript

Professor and PINm Judith Herzfeld at Brandeis University: The trick in designing concept tests is to think of them like designing your learning goals for a lesson, or for the chapter that the students are reading. Decide what ideas you really want them to get out of that material. Then go through and ask questions that will reveal the kinds of uncertainties they might have about that material, making sure that at least two of the choices in each question are ones that students at their level might plausibly think are correct.

[Annotation from TTYN]: This is perhaps the most important and deceptively simple step in creating an effective question — identifying the core concept or idea you want students to learn, and developing the question to test if they have reached an appropriate level of understanding.

Rebecca Younkin: Sometimes it can be hard to pick an effective concept test, and even harder to write one, because it’s important that the answer not be so obvious to everyone in the class that it’s not going to motivate any discussion. You have to pick something that is, on the one hand, a little bit subtle, but you don’t want to be just tricky. If you just ask the students a tricky question, then they’re going to say, “Well, I understand the material, but I don’t like these stupid tricky questions.”

It’s important to pick something where the difficulty in the question actually corresponds to some underlying physical concept that’s difficult.

Narrator: To help you decide which ConcepTest questions work well, you can compare your students’ performance on the ConcepTests before and after peer discussion. As an example, let’s compare the pre- and post-discussion performance on a set of ConcepTests from an introductory physics class. If none of the students were to change their minds during the discussion, then the post-discussion percentage would be the same as the pre-discussion percentage, and the data points would lie on the diagonal. We want our data points to lie well above this line.

Now let’s look at a single successful ConcepTest. Before discussion, 39 percent of the class voted for the correct answer on this question. After discussion, 70 percent of the class answered it correctly. This means that 31 percent of the students changed their minds to the right answer after they discussed it with their peers, putting this data point well above the line.

Effective ConcepTests provoke peer discussions that help students arrive at the correct answer. The higher the data point is above the line, the more effective the ConcepTest.
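[Annotation from TTYN]: To make this plot concrete, here is a minimal sketch in Python with matplotlib. The vote percentages are hypothetical, chosen only to illustrate the idea (just the 39% to 70% point comes from the narration); this is our own illustration, not the analysis code used in the video.

```python
import matplotlib.pyplot as plt

# Hypothetical pre/post-discussion percentages of correct answers,
# one pair per ConcepTest. The (39, 70) point is from the narration.
pre  = [39, 55, 25, 80, 45, 62, 15]
post = [70, 75, 30, 85, 72, 78, 18]

fig, ax = plt.subplots()
ax.scatter(pre, post, label="ConcepTests")

# Diagonal: no one changes their mind during discussion (post == pre).
ax.plot([0, 100], [0, 100], linestyle="--", color="gray", label="no gain")

ax.set_xlabel("Correct before discussion (%)")
ax.set_ylabel("Correct after discussion (%)")
ax.set_xlim(0, 100)
ax.set_ylim(0, 100)
ax.legend()
plt.show()
```

Points well above the dashed diagonal are the questions where peer discussion moved students toward the correct answer.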

Now, let’s plot the remainder of a semester’s worth of ConcepTest data to identify the optimal range for effective questions. First, data points that lie below the diagonal indicate that fewer students voted for the correct answer after discussion. The data show that these questions are not working well and should be reviewed, modified, or eliminated.

Data points that are close to the line indicate a minimal gain between pre and post discussion answers. These concept tests should be reviewed to make them more effective.

Questions that result in a high percentage of correct answers before a peer discussion do not challenge the students enough, and leave little room for conceptual gains.

At the opposite end of the distribution, very few students arrived at the correct answer the first time. The gain on these questions is low because not enough students understood the concept well enough to explain it to their peers. Questions like these are too difficult to be effective.

[Figure: Evaluating the effectiveness of clicker questions]

The middle of the distribution, between 30 and 70 percent, is the optimal range for ConcepTests. The data show that questions in this range yield the largest gain in the percentage of correct answers. ConcepTests that generate data points outside this range aren’t working well and should be modified or eliminated.
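[Annotation from TTYN]: These rules of thumb are easy to turn into a quick screening script for your own clicker data. The sketch below is our own illustration; in particular, the 10-percentage-point cutoff for a “minimal” gain is an assumption, since the video does not specify one.

```python
def evaluate_conceptest(pre_pct: float, post_pct: float) -> str:
    """Classify a ConcepTest using the video's heuristics:
    30-70% correct before discussion is the optimal range."""
    if post_pct < pre_pct:
        return "negative gain: review, modify, or eliminate"
    if pre_pct > 70:
        return "too easy: little room for conceptual gains"
    if pre_pct < 30:
        return "too hard: too few students can explain it to peers"
    if post_pct - pre_pct < 10:  # assumed threshold for a 'minimal' gain
        return "minimal gain: review to make more effective"
    return "effective: keep"

# The example from the narration: 39% correct before, 70% after.
print(evaluate_conceptest(39, 70))  # -> effective: keep
```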

Professor Eric Mazur: We tend to forget the type of questions we had when we were learning the material; that’s a direct consequence of learning. Therefore, in general it’s very difficult for an instructor to come up with the type of questions that the beginning learner has. For that reason, I will often use the students as a source of material for ConcepTests. I’ll go over all the exams, tabulate the types of mistakes that they’ve made, and then turn those mistakes and misconceptions into ConcepTests that I use in class. Or I will use the feedback I get on the reading in their Just‑in‑Time Teaching assignments as material for a ConcepTest.

Summarizing, I think that there are two key points. One is to have the right type of questions to elicit the students’ misconceptions and to stimulate discussion in class. The second, as evidenced by the data, is to make the questions neither too easy nor too hard, so that the students can actually help one another in class.

Narrator: To summarize, for a ConcepTest to be effective, it must address your students’ misconceptions. It should also challenge them appropriately by being neither too easy nor too hard; that is, somewhere between 30 and 70 percent of the students should answer it correctly before discussion.

Transcription by CastingWords

3 Comments

  1. Madeleine Schultz

    I have just been through the process of writing “clicker” questions for the first time in my first semester chemistry subject (around 300 students attend the lectures). I used a few from Judith Herzfeld’s site (http://people.brandeis.edu/~herzfeld/alphabetical.html) and some from colleagues at other universities here in Australia (e.g. Chris Fellows at University of New England). I never spoke for more than 30 minutes without a break (usually less), and then usually had more than one question on the question slide. Note that clickers were not available so I used GoSoapBox.com.
    I started each topic with very simple conceptual questions and avoided anything requiring a calculator or numerical answer. Some questions that I asked were slightly too easy (70% of students answered correctly before discussion) and could be re-worded next time. Interestingly, although the students were (supposedly) working in silence, often the incorrect students were a group sitting together – so I had to make sure the discussion mixed the groups.
    I have never seen a class of students sitting so quietly, intently looking at the screen and writing on paper. I walked around as they were working and although I expected to see a few on Facebook or otherwise occupied, I was amazed that almost all were attempting my questions. I also had fewer students chatting than in previous semesters while I was talking – probably because they knew that I was going to let them talk very soon in any case.
    I am aware that many of my students struggle with maths, so a lot of my questions indirectly probed their maths skills (e.g., rearranging equations, use of exponentials, unit conversions).
    Some of my questions were shown to be slightly ambiguous and student questions during class probed this and allowed us to discuss the possible interpretations and which answer would be correct in which circumstance.

  2. When writing questions I start with a well-structured Google and Google Scholar search to find papers, articles, and/or blogs that report the common misconceptions in the topic I am teaching.
