Thursday, May 14, 2009

Question wording


The second paid workshop I went to this week at the public opinion research conference overlapped in content with one of the short courses I took three years ago in Ann Arbor. The topic for both: drafting good survey questions. A University of Wisconsin professor I’d heard of (Nina Schaeffer) led this one. She whipped through what sometimes seemed like a semester of material quickly, but this topic comes a little more naturally to me and I kept up. The course was geared more toward person-to-person interview surveys (on the phone or in person) than “self-administered” surveys (paper-and-pencil or Web-based surveys – like what we do in Research Services), but many of the points apply to both, and several of us in the workshop asked questions about self-administered surveys.

In general, Schaeffer suggested spelling things out – writing more detailed questions and more questions. Put information important to answering the question – definitions, time periods – BEFORE the actual question, she said (and don’t capitalize or underline key words – boldface them). Once people hear what they think is the question, they start formulating a response and don’t necessarily listen to the rest. This may also apply to self-administered questionnaires: people may simply not read the extra text.

Partly with that in mind, Schaeffer recommended against doing what we occasionally do. Instead of including a conventional stand-alone screening question – have you ever visited another country? – before asking which countries, we sometimes ask which foreign country or countries have you visited and include a parenthetical phrase – before the response options – giving people a chance to check a box if they have never left the United States. We do this because it takes up less space and helps respondents avoid skip fatigue (getting tired of being asked screening questions and being skipped around the survey through sometimes complex skip instructions).

(In Web surveys, we can embed the skips into the actual program, so respondents don’t have to think much about it. If they say No – that they haven’t been to another country – on a separate screening question, we can program the Web survey so they never see the Which country? question.)
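Just to illustrate the idea – this isn’t our actual survey software, only a rough sketch in Python of the kind of skip logic a Web survey program applies behind the scenes:

    # Rough illustration only – real Web survey platforms provide built-in skip logic.
    def ask(question):
        return input(question + " ").strip().lower()

    visited_abroad = ask("Have you ever visited a country outside the United States? (yes/no)")
    if visited_abroad == "yes":
        # Only respondents who answered Yes ever see the follow-up question.
        countries = ask("Which foreign country or countries have you visited?")
    else:
        countries = None  # the respondent never sees the follow-up at all

A respondent who answers No simply never sees the second screen.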

Don’t do the parenthetical-phrase check box, Schaeffer said, because many respondents won’t look at the phrase, and if people haven’t checked the box we don’t know whether they have in fact been to other countries or whether they just didn’t see the phrase. (Respondents are more likely to read stand-alone questions, Schaeffer implied.)

In some cases, however, Schaeffer advocated leaving out screening questions altogether and folding them into the follow-up question. For example: instead of asking have you ever visited a country outside of the United States? and then how many countries?, simply ask How many countries outside the United States have you visited, ever? and make sure respondents know they should write in zero (0) if they’ve never been abroad. Schaeffer suggested ordering these kinds of response options with None or Never first, followed by a fill-in-the-blank number. She suggested in general avoiding frequency and rate questions and, when possible, avoiding fixed response options with grouped numbers – try to get people to fill in the exact numbers. Instead of how often in a regular week, ask how many times in the past seven days, starting on Sunday, May 2, and ending on Saturday, May 8, did you do the following?

If you have to, she said, you can group numbers (say, if you think there’s no way people will remember exact numbers) or even use words describing frequency.

Schaeffer said research suggests it’s important to order “Yes” and “No” options in that order. She said that in general, going in reverse order for other types of questions – especially where social desirability is a concern – helps counteract people’s tendency to answer positively and to check the first response or two. So, for example, in a question about activity – if one didn’t switch to a fill-in-the-blank numbers question – she suggested going with Not at all active, a little active, somewhat active, very active, and extremely active, in that order. Researchers have tested all of these options to make sure they don’t overlap too much. “Very” just doesn’t seem intense enough to people to top off a response list with.

Generally, Schaeffer said, never have “True” as one of a long list of response options. She also said to avoid questions about agreement unless the item is clearly a “should” question – your opinion about a policy issue, for example.

Also, mainly write “unipolar” questions – not at all satisfied, a little satisfied, somewhat satisfied, very satisfied, extremely satisfied – instead of “bipolar” questions – very dissatisfied, somewhat dissatisfied, in between, somewhat satisfied, very satisfied (although she didn’t entirely ban the latter). In general, label the response options for all such questions with words. Don’t use numeric scales – like “1” through “7” – partly because the numbers can mean different things to different people. Use five to seven word-labeled options with unipolar questions, and five or seven response options – always with an option in the middle – with bipolar questions. Usually, with bipolar questions, make the options “symmetrical” – “strongly” or “somewhat” agree, an option in the middle, and then “somewhat” or “strongly” disagree.

Schaeffer debated with herself about whether to put “in the middle” options literally in the middle or to leave them as the last option. She argued that in everyday speech, people sometimes ask each other: do you like this, or dislike it, or have mixed feelings? She also mentioned different wordings for such options. She apparently preferred “Mixed Feelings” or “In the Middle” or “Not Applicable” to “No Opinion” or “Don’t Know” (the ones we use most). She argued that our favorites were too general or vague.

Schaeffer also agreed with pollsters, such as those from Gallup, who push people to express opinions. In live interview surveys, she suggested that interviewers not present No opinion–type options but record them if people volunteer No opinion responses. After hedging some, she said this wasn’t really possible with self-administered surveys: Web survey participants can’t write things in, and it’s messy when respondents to paper-and-pencil surveys do so.

She and I talked about this more after the session. Our office has long held a different view from the one Gallup and apparently Schaeffer hold. We believe that respondents don’t already have opinions on all issues, and that if they don’t really have opinions, trying to force them to state one would produce faulty data.

I gave her the example of the proposal to set up a “cap and trade” system in which the government would limit how much carbon emissions companies could release (a question in the Presbyterian Panel environmental issues survey that’s about to go to print). The government would also help set up a market for permits to release a certain amount of carbon emissions: if a company really wanted to release a lot of carbon emissions, it could buy permits from companies that weren’t going to emit much carbon. In the actual question I helped write about this, we don’t even use the cap and trade lingo, and I tried to explain the proposal (or piggy-backed off of language from a staff person in the Presbyterian Washington office).

The argument Schaeffer made is this. In the real world, people all the time get – and sometimes take – opportunities to express opinions about things like the cap and trade proposal, even if they know very little or nothing about it and can only guess what it’s about or vaguely connect it to something they know a little about and/or care about. Like maybe it has something to do with trading baseball caps. In this case, pushing people to express an opinion without giving them any more information is in fact more realistic, she said. Extreme examples of this are when survey researchers have asked people about fictitious proposals or about measures so obscure – the Trade Adjustment Act was one she cited – that hardly any respondents really know what they are. What some survey researchers then do – after asking whether people support or oppose cap and trade or Trade Adjustment – is spend the rest of the survey filling in bits of detail, arguments, and counterarguments and seeing how respondents react to those different elements. Schaeffer said what I was really pushing for was for people not to report any uninformed (though possibly strong) opinions. Most people have at least some kind of vague opinion about most things. By including a No opinion or Don’t know category, we are encouraging people to assess how strong and how well informed their opinion is and – if it doesn’t seem that strong – to report it as “No opinion.”

Schaeffer and I probably still disagree about this. I hope to try many of the things she suggested, but I’m not sure I want to give up No opinion options or to start with the negative option first on scale questions.

Two other points: Schaeffer gave us a thick handout with the slides for her PowerPoint presentation, including many slides she didn’t have time to talk about. Several weeks ago she also e-mailed us a bunch of references. She said that in the semester-long version of the course she spends a lot of time talking about the research findings that support her recommendations. The 2½-hour version of the course is practical – it focuses on suggestions/guidelines (by the way, Schaeffer also says to get rid of abbreviations and symbols – including e.g. and slashes – in survey questions) without supplying all of the supporting research findings. When asked, she sometimes referred to specific studies.

A session I went to this afternoon underlined some of my unease about eliminating the No opinion response option. A study of presidential election polling during the couple of months before the November election – and of polling shortly before the New Hampshire primaries – showed the problems with not allowing respondents to respond No opinion, with not including it as a stated response category, or with not reporting No opinion responses to the public or press. Although by immediately before the general election most of the polls converged and relatively accurately predicted Senator Obama’s margin of victory over Senator McCain, in September and October the polls disagreed a lot. And in the months before the primaries, pollsters reported – as it turns out, quite incorrectly – that Mayor Giuliani and Senator Clinton had consolidated leads and would probably win the New Hampshire primaries (Clinton did win; Giuliani fared very poorly).

This second example really throws the No opinion issue into clear relief. Standard pre-election polls ask people who they would vote for if the election were held today. But the election was months off. If you looked at the data closely, something like 75 percent of the Republican electorate was actually undecided. So when the pollsters reported Giuliani as the frontrunner, they weren’t explaining that in fact only a small minority of New Hampshire Republicans supported him. Much the same held during fall 2008. The polls varied so much until the last few weeks before the election, this researcher argued, because many voters were in fact undecided but the survey interviewers pushed them to say who they would vote for. In this situation, it shouldn’t be a surprise – when survey respondents were replying somewhat randomly because they didn’t really yet have an opinion – that people would tell different pollsters different things at different times. Aggressively discouraging people from responding No opinion – and not reporting No opinion responses – can produce very misleading results that don’t accurately represent actual opinion.

-- Perry

1 comment:

Perry said...

Later evidence in favor of including "No opinion" response options. A survey researcher who has studied PA politics says that the poll that really drove would-be donors and GOP leaders to steer clear of supporting Senator Specter in the last couple of months was faulty. It showed former Club for Growth head honcho Pat Toomey polling twice as high as Specter. But the survey did not include a No opinion option, which – even though Specter is a very long-term incumbent and many people are familiar with him – seems very unwise some 12 months before the primary election. Polls this researcher did – with the No opinion option – showed bad news for Specter, but nothing near that bad (he polled at 33 percent). But the other poll became a kind of self-fulfilling prophecy to the extent that the media picked it up, scaring would-be supporters off and therefore making Specter's situation even more desperate. This researcher said Specter has his own internal polls, but the release of results showing Toomey at 50-plus percent and Specter at half of that heavily influenced the dynamics of the race. The Independence Party candidate for governor in MN – when we were there – actually sued the Twin Cities papers in 2003 after the papers published poll results during the fall 2002 race that he thought artificially reduced his numbers and caused supporters of his third-party candidacy to flee to the two major-party candidates for fear of wasting their votes.

Another nifty thing about Schaeffer's presentation is that she was able to play a couple of clips – digital recordings of actual phone survey interviews – so you could listen to real-life respondents verbalizing problems they were having with real-life questions.

-- Perry