Saturday, August 16, 2008

Workshops


For the third or fourth year, I spent part of my time at the American Sociological Association annual meeting at paid training workshops. One of the workshops was led by a woman whose market research training workshops I had participated in at previous meetings. It’s interesting to see and hear how business researchers talk and think – it overlaps with how academic, non-profit, and religious researchers talk, but is still somewhat different (and overlaps with how my friend and former New School classmate Chris, a business researcher, talks). Leora Lawton talked explicitly about market segmentation and about ways of thinking and analyzing that may apply to some of the quasi-market research I do in my job: evaluations of PC(USA) publications and curriculum (and even the study of Presbyterian identity). Corporations have been at the forefront of segmenting – but the Republican Party and even Senator Hillary Clinton’s strategists were big on dividing the electorate into demographic groups and trying to send different messages to different groups. This is also how some new church development specialists at the January Daytona Beach NCD coaches training thought – don’t try to start a church for everyone: start out pitching to a demographic niche, then build out from there (this was a little controversial even in Daytona – isn’t the Gospel, and the church, supposed to speak to everyone, irrespective of class, race, age, etc.?).

(Differentiating attenders, we’ll see later, is also what the megachurch Willow Creek and its researchers will try to do – but on different dimensions.)

When I was TA-ing a Quantitative Methods class for Diane Davis at the New School, Diane asked me to teach factor analysis. I ended up doing a sample cluster analysis and factor analysis on state-by-state data from the 1860 census and results from the 1860 presidential election (an interesting election in which four different presidential candidates won electoral votes) (see two graphics below). Using factor analysis (which Mom also used in her dissertation), which clumps variables into a handful of factors that one then has to describe, I identified three or four key factors. I remember that one of the factors I labeled economic modernization. (Factor analysis is difficult to describe to laypeople because each factor is only tied loosely to a group of variables – it describes only tendencies.) Once I’d done factor analysis, I employed cluster analysis, which groups cases (in this case, states) based (in this case) on their factor scores. I remember that, when I clustered the states into two groups, the two groups that emerged were not the Northern states and the Southern states, but Louisiana vs. everyone else. Once I unpacked the factors, I realized that this was because Louisiana was an outlier, a very unusual state. It was a Southern state (with a large African American slave population), but – connected with its French, Spanish, and Cajun roots – it had a large Catholic population (very unusual for a Southern state) (the Census Bureau used to – believe it or not – ask about religious affiliation) and a large population of free African Americans (very unusual for any state). This unusual combination of characteristics made Louisiana stand out, even grouping the North and the South (minus Louisiana) together when I ran a cluster analysis with just two factors.
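The factor-then-cluster workflow described above can be sketched in a few lines of modern code. This is a minimal illustration using scikit-learn on made-up numbers – not the actual 1860 census data, and the variable list in the comment is hypothetical:

```python
# Sketch of the two-step workflow: factor analysis on the variables,
# then cluster analysis on the cases' factor scores.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Invented state-level data standing in for 1860 census variables
# (e.g., % urban, % enslaved, % Catholic, % free Black population, etc.).
X = rng.normal(size=(33, 5))  # 33 "states," 5 variables

# Step 1: factor analysis clumps the variables into a few latent factors
# that the analyst then has to interpret and label.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(StandardScaler().fit_transform(X))

# Step 2: cluster analysis groups the cases (states) by factor scores.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(scores)
print(labels)  # a 0/1 group assignment for each "state"
```

With real 1860 data, an outlier case like Louisiana would show up as one cluster that contains a single state.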


After showing us cluster and factor analysis, Dr. Lawton also showed us how to use an (SPSS Version 13.0 and above?) function called CHAID (I forget what it stands for), which allows for even simpler market segmentation according to what researchers call categorical variables. For example, one could divide all of the survey respondents into those who said they were very likely or somewhat likely to buy CDs that year in one particular genre (for example, rap) and those who were not. (In order to do multivariate analysis like the cluster and factor analysis first, Lawton had actually used 7-point scales. For example: Rate how likely you are to buy a CD in the next year with this kind of music, from “1” to “7,” with “1” being not at all likely and “7” being definitely. In this case, she might have collapsed “1” through “4” into not very likely and “5” through “7” into likely.) Each respondent would then have a value of “1” or “0” (or “1” or “2”) for a whole series of questions about different genres. We already had information on the respondents’ scores on other categorical variables (men vs. women, Republican vs. Democratic vs. independent, Anglo vs. African American vs. Latino vs. Asian American), and we could turn interval-level variables (like age) into categorical variables (15 to 40 vs. 41 to 65 vs. 66 or older). What CHAID does is a version of what we do when we run cross-tabulations by more than one level (except that I believe CHAID uses statistical analysis to decide for itself – instead of you deciding – which variables to run first). For example, we could figure out what percentages of people in different age groups said they were likely to buy rap CDs. We would likely find that a larger percentage of people between 15 and 40 than of older people would be likely to buy rap CDs.
Doing multiple levels of cross-tabulations – say, running rap likelihood by age by race – we might find, for example, that it was Anglo and African American young adults in particular – more than Latinos and Asians – who were likely in greater proportions to buy rap records.
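The recoding and cross-tabbing steps above are easy to show concretely. Here is a toy sketch in pandas, on invented data – the variable names and cutoffs just mirror the examples in the text, not Lawton’s actual dataset:

```python
# Collapse a 7-point likelihood scale into two categories, bin an
# interval-level variable (age) into groups, then cross-tabulate.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "rap_likelihood": rng.integers(1, 8, size=200),  # 1-7 scale
    "age": rng.integers(15, 90, size=200),
})

# "1" through "4" become not likely; "5" through "7" become likely.
df["rap_buyer"] = np.where(df["rap_likelihood"] >= 5, "likely", "not likely")

# Turn interval-level age into the categorical groups mentioned above.
df["age_group"] = pd.cut(df["age"], bins=[14, 40, 65, 120],
                         labels=["15-40", "41-65", "66+"])

# What share of each age group says it is likely to buy rap CDs?
tab = pd.crosstab(df["age_group"], df["rap_buyer"], normalize="index")
print(tab)
```

Adding a third variable (say, race) to the `pd.crosstab` call gives the multi-level cross-tab the text describes – and also reproduces the readability problem: the table quickly becomes hard to scan.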



There are two problems with running these cross-tabulations in SPSS. It’s very difficult to read and interpret these complicated multiple-level cross-tabs. Plus, the measures of association SPSS generates for these cross-tabs (unlike in the business research-oriented WinCross package) only tell you how significantly different values are for the breakdowns as a whole, rather than telling you how significant the differences between each element are. So, for example, all we could find out is that differences in values for race by age group by rap CD buying likelihood are statistically significant on the whole – rather than finding out that it was the difference between Anglo and African American young people vs. everyone else in particular that was significant.

CHAID overcomes most of these challenges. If I understand it right, first, it looks at the values for all of the variables and then starts breaking them down, starting with the ones with the most significant differences. It keeps going – and displays it in graphic fashion – until the differences in values aren’t significant. And it stops sooner with some categorizations than others, depending on how significant the differences are.

For example, it might do the following segmentation (loading all of the musical taste variables) (though CHAID would arrange these vertically):


Older Anglo whites who like country and bluegrass

Middle-aged African Americans and Anglo Democratic whites who like jazz and blues

Young African Americans and Anglo independent whites who like rap and heavy metal

Middle aged Republican white men who like classic rock

- or something like that.
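CHAID’s tree-building idea can be approximated in code. Scikit-learn doesn’t ship CHAID (which picks splits using chi-squared tests), so this sketch substitutes a CART decision tree, which similarly decides for itself which variable to split on first; the data are invented, with young respondents deliberately made more likely to be rap buyers:

```python
# A CHAID-like segmentation sketch using a CART decision tree as a
# stand-in: the algorithm, not the analyst, picks the first split.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 500
age_group = rng.integers(0, 3, size=n)  # 0=young, 1=middle, 2=older
race = rng.integers(0, 4, size=n)       # hypothetical coded categories

# Build in a real relationship: young respondents buy rap more often.
rap_buyer = (rng.random(n) < np.where(age_group == 0, 0.7, 0.2)).astype(int)

X = np.column_stack([age_group, race])
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=50,
                              random_state=0).fit(X, rap_buyer)

# The printed tree shows which variable the algorithm split on first;
# here it should be age, since age drives the outcome in this toy data.
print(export_text(tree, feature_names=["age_group", "race"]))
```

Like CHAID, the tree stops splitting sooner down some branches than others, depending on how much each further split improves the fit.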

(Using drop-down menus – which Ida and I don’t usually use with SPSS – Lawton actually performed the CHAID analysis right in front of us, as we watched. First, she catalogued people’s responses to questions about their musical tastes/buying habits, and then combined these into four or five factors, to which she gave easy-to-remember variable names such as HeavyRap (in this case, for a combination of Rap and Heavy Metal). Then she included some demographic information and ran CHAID, which produced segments like those above – a young independent who likes rap and heavy metal but doesn’t like country; etc., etc.)

Dr. Lawton actually showed us some ads that high-end TV producers had initially used that had not worked.
Her market segmentation research helped identify several different types of people who were likely high-end TV buying customers. The most gung-ho were younger men who were sports fans, were outgoing, and were “early adopters” – they liked the idea of being the first on the block to have some up-and-coming technology (and many were in the process of setting up households whose interiors they could design to make room for that new technology). She and her colleagues then even gave putative sample people in each cluster names, market research style. “Jack,” for example, might be a prototypical person within the CHAID-generated group outlined above.

Dr. Lawton then showed us the much more successful TV ads that resulted. The ads showed people watching a close golf game. There were a bunch of young people – including lots of men – watching sports (and exchanging glances and remarking to each other about the game) on a big, high-end TV that completely dominated the room of a house that it looked like folks might have just moved into. There it all was: socializing, young men, new homes with newly designed interiors, sports, and high-end TVs. Subsequent ad campaigns pitched to market segments that were likely to buy the TVs later.

(Lawton suggested how we might use market research-type data in our own research to do this kind of analysis – asking people how likely they are to do something (for example, to order a Presbyterian Christian education resource) and mixing their responses to those market-research questions with information about their other attitudes and actions (usually self-reported) and demographic information about them.)

Four days later, I heard about a slightly different use of market segmentation and factor and cluster analysis.

One of the best-known Protestant evangelical mega-churches in the United States is the suburban Chicago Willow Creek Community Church, with senior pastor Bill Hybels. Recently, Willow Creek staff have helped develop the REVEAL survey and congregational transformation tool, which is something of a competitor to the U.S. Congregational Life Survey, which I work on (though the majority of congregations that self-select and pay to take our survey are mainline Protestant congregations, which Willow Creek probably wouldn’t go to first).



A draw for the CCSP meeting was a presentation Tuesday morning by the Willow Creek staff/researchers who have developed REVEAL. They apparently have a random sample only of Willow Creek-affiliated congregations, they don’t make their data public, and their surveys are Web-based – instead of paper-and-pencil in worship (like ours) – and so their response rates are much lower than what we get. Still, learning a little more about the process and findings was interesting – and connected directly with Dr. Lawton’s market segmentation talk. The centerpiece of the REVEAL survey is not so much a congregation’s strengths as the individual’s spiritual development. With questions connected with an evangelical Protestant worldview, the Willow Creek folks surveyed thousands of Willow Creek and Willow Creek-affiliated-congregation folks and used factor analysis, cluster analysis, and other market segmentation strategies to divide individuals into four spiritual development categories: exploring Christianity, growing in Christ, close to Christ, and Christ-centered. Between these four categories they identified three types of transitions. Instead of clustering people by musical tastes and demographics, they clustered them by their personal theology, reported practices (from worship attendance to Bible reading to social service activity), and satisfaction with congregational ministries. Apparently, Hybels et al. (who sat on the results for more than a year) have NOT taken the results as a reason to market segment Willow Creek (it could just be a “seeker” church catering to people in the first two stages of spiritual development, and it could let more spiritually mature individuals go on to other congregations – for example, apparently a bunch of Willow Creek veterans have moved on to a smaller rival church, Harvest; they know this partly because they have done other research, including exit interviews).
The implication is that congregations must adopt different strategies with people at different levels of spiritual maturity, especially if they want to attract and retain people in all stages of their spiritual journey. (They found, for example, that participating in small groups helps people make the jump from stage 1 to stage 2 but is irrelevant for later transitions.) (This is all based on what researchers call cross-sectional data: information about different people supposedly at different stages on this putative spiritual development path. Tracking the same individuals over time – a group of such individuals is called a panel – would be preferable.) This seemed to link up with a conference talk I’d heard over the weekend, which explained how Willow Creek had moved from an affinity group model – in which Willow Creek folks (in addition to weekly arena-sized worship and perhaps other church activities) met in small groups of people with similar interests – to a parish model, in which people who lived in the same geographical area met together (like regular medium-sized congregations?) – and then to a hybrid model. Apparently, Willow Creek leaders worried that the affinity groups could become too insular, too cliquish, and too self-satisfied, and they figured that the parishes could be responsible for evangelism and outreach to people within their geographic area who did not go to church. But people apparently missed the affinity groups, and so pressure built (perhaps along with a finding that there may be some logic to putting together people at similar places along the spiritual growth path, a la the REVEAL findings??) to go back to the affinity group model. Anyway, it was interesting to see a different kind of religious research application of market segmentation principles (even if the REVEAL folks didn’t give the putative example individuals at each spiritual development stage fake names).

I belatedly added a second paid workshop on focus group research, which came near the end of my conference participation Sunday afternoon. I’ve led focus groups – mainly on the telephone – for several years (see, for example, “Tech/off-site problems”). Before that I went to a multi-day training in Bloomington, MN, with Richard Krueger, the focus group guru whom my colleagues had brought to Louisville earlier to train them in focus group leadership (most of my colleagues have led these at one time or another, but I have increasingly become the department specialist in this). Later I got to work with Krueger and listen in on several phone groups he led as part of the U.S. Congregational Life Survey, with leaders of congregations that had already participated in the survey. At a Toastmasters meeting several months ago I had also led a focus group how-to training – complete with a sample focus group in which the training participants took part, which trainers always seem to do – and leading focus group trainings is something we actually tell presbyteries we can do.

These two training workshops (market segmentation and focus group research) actually overlapped, in that the two women had actually had some similar clients. The focus group woman (Janet Billson) told us more about research she’s done for non-profits – for example, focus groups with villagers in a town in which the World Bank had installed a well, to find out who was using and not using the well and why – but she had also done product research for private companies, much as Dr. Lawton had done with the record store company. The focus group woman also talked about de facto informal market segmentation issues, as she tried to figure out how she might target villagers for incorporation into the focus groups and how she might divide them up (separating men from women and perhaps users from non-users). (She also talked about time and money cost constraints on doing zillions of focus groups.)

This woman had actually studied small group processes at Brandeis in the 1960s and 1970s, and so she was an expert in small-group dynamics. I was surprised that one of the few things I disagreed with her about was this: A fellow trainee asked her essentially about the bandwagon effect. Wouldn’t some people go along with what the group said, and not say what they’d think on their own? This woman leads not phone focus groups but in-person focus groups, which typically can get larger, because you can monitor and manage them partly through body language and you sometimes have more than the one hour that phone focus groups are usually limited to. I still don’t like big focus groups, because it’s much harder for everyone to get to speak. I typically limit my phone groups to five or six people. In-person groups I’ve led have averaged around eight people. She talked about leading in-person groups with 8-12 people. An advantage of larger groups is that you hear from more people and you’re less likely to go through long gaps in which no one has anything to say. She had a whole sample diagram that showed a line for every comment from one person to the next. She showed a diagram for a focus group gone awry in which most comments are made through the moderator (instead of to each other), a couple of individuals have dominated the discussion, and two people have withdrawn (in one case – literally – having pulled his or her chair back away from the table). On the one hand, the focus group trainer lobbied against doing a round robin where you go person by person asking everyone to speak (except with introductions – she said it’s important that everyone introduce themselves – if they don’t start talking, they’re likely to keep staying silent).
I, on the other hand, do a round robin for longer – through the first few questions of a phone focus group – in part because body language is not a tool I can use (in person, a focus group leader can look at someone s/he wants to speak next and look away from someone s/he wants to stop speaking). After a couple of questions, I’m more likely to throw a question open and let anyone talk who wants to. That means that – unfortunately – focus group participants talk through me longer, until later questions. But it also means that I have to spend more energy through other means trying to get everyone to talk. Having gone through this training, I will now do something she does during my opening spiel – state out loud the importance of everyone getting to talk. Her answer to the person who worried about the bandwagon effect was that a good, strong moderator will make sure that all participants get a chance to speak their minds and don’t get rolled over by others. To my mind, the problem with that answer – and it surprises me that a small group specialist would give it – is this: When it comes up occasionally, I tell clients: We want these focus groups to be generative, creative, dynamic. We want people – not to say things they don’t really mean – but, pondering interesting questions and going back and forth with other people, to think of things they haven’t thought of before and tell us about them, even to challenge themselves and learn and grow. Sure, participating in a conventional survey can be a learning experience. But the back and forth of even a good focus group may mean that people end up expressing different opinions than they would in a survey. This may mean that certain individuals are dominating the group and that others feel browbeaten to go along or simply want to please the dominators. But, ideally, new ideas come out of the discussion as everyone ends up with slightly different views than they started out with.
One of my New School profs always said that one of the few decisions one makes by oneself in the real world (which is what conventional surveys try to replicate) is voting in a voting booth. Most other decisions – from what kind of music to buy to what TV show to watch to whether to help out a neighbor – one makes with and around other people. So a research design that forces people to make choices without regard to other people (like someone answering a paper-and-pencil survey) is actually not very realistic.

One other very interesting thing this woman does: after every focus group, she debriefs with any other folks who weren’t regular participants – an assistant moderator, even the client. (She wasn’t big on letting clients in – in an in-person focus group, when you’re not in a fancy focus group lab with one-way mirrors, it’s very obvious that a client is there. If a client is “sitting in” on a phone focus group, I always let the participants know, but – usually – since they can’t see the person and the person rarely speaks, they tend to forget about the person being there after a while. With these NJ focus groups, one of them was harsh enough that it was probably good the client wasn’t there – I’m not sure the participants would have said all of that with the client there.) (Sheila and I have been talking a little after most of them – longer Thursday night (see “Tech problems”).) She then leaves the recorder on the whole time, so the people transcribing the focus group, to whom she mails the tape that day, transcribe not only the actual focus group discussion but also the debriefing.

Although I had to leave early to make it to the Roxbury Film Festival (see “Steam”), I got to hang around for a good part of the sample focus group that most focus group trainers end with (plus the ensuing debriefing). This focus group was different from the one Krueger had run at our training (which I was in) in two respects. First, the topic (the presidential election) was even more interesting to me than the Krueger topic (airline customer service). Second, the focus group trainer asked people to talk as if they believed something EVEN IF THEY DIDN’T. I liked the topic; I had mixed feelings about asking people to act a role dishonestly – not so much because of the principle, but because it introduces an air of artificiality to the whole thing and it isn’t actually easy to carry off that well. (In the training I led, people who worked at the Presbyterian Center brainstormed about the future of the Presbyterian Church (U.S.A.).)

What the trainer asked almost a dozen mostly middle-aged women trainees to do was – pretending, if necessary – to be disappointed supporters of Senator Hillary Clinton for president, still deciding what to do next. It seemed like, in many cases, these women really were disappointed Clinton supporters. But the other artificial feature of this was that enough time has gone by that many Clinton supporters (like us) have already figured out what they’re going to do. The moments when participants couldn’t stick with “the script” (that is, couldn’t stay in character) were perhaps even more interesting than the moments when they could. For example, the two African American women in the group, it seemed pretty obvious to me, had been supporters of Senator Obama’s bid all along. They occasionally antagonized the others with their pro-Obama statements. On the other side of the fence was a younger white woman from Georgia who said that Senator Obama and his allies had been condescending to Clinton, and she was annoyed enough that in Georgia she was going to vote for former Republican Congressperson Bob Barr, who’s running as the Libertarian presidential candidate and figures to do well – and take votes from Senator McCain – in Georgia because he’s from Georgia and used to represent Georgia (as one of Speaker Gingrich and Representative Tom DeLay’s allies in the House). She said this would indirectly help the Democrats, but she couldn’t bring herself to vote for Obama. This irritated some others. More in the middle – but still not really staying in character – was the woman who said she was very disappointed about Clinton’s loss and was skeptical about Obama but was now coming around.

Again, I had to leave early. But this was interesting in that it was a focus group I could imagine Democratic Party researchers (or – really – Republican Party researchers too) actually running earlier this summer in order to help them figure out how to attract Clinton supporters. And I think a majority of the women in the focus group were actually in the category of disappointed Clinton supporters, so it wasn’t entirely artificial. The focus group trainer did just about everything she said she was going to do: she only did a round robin with introductions, but she tried hard to involve all dozen of them in the conversation, using a mix of calling on people and eye contact/body language. For the most part, she stuck to her guns. She did not have people run all of their comments through her; she got them to talk with each other. One thing I sometimes do in phone focus groups that unfortunately re-centers the comments around me is ask follow-up, clarifying questions. Often I do this when people drop hints about something without being clear, when they use jargon I’m not sure what they mean by, or when I can’t understand what they’re saying at all. But sometimes I do this because I’m afraid that people are giving too many one-word or one-sentence responses, and it seems clear to me that I’m going to have to draw them out, to get them to expand on their responses – asking clarifying questions is a good strategy here. For the most part, the participants in the sample focus group quickly had lots to say, and there was no need to ask clarifying questions for this reason.

One thing I’ve come to do differently in phone focus groups than she does: provide people with a list of questions. And she did something in this training that I would never do in a training: she said she was making up the questions as she went along. I would always have a script, which I would have memorized. To give her some credit, she was originally going to run two different sample focus groups, in which case she wouldn’t have had time to ask many questions at all (and she may have simply been fibbing – maybe she really did have a script in her head).

I had to rush off, but not before I got to hear and see an example of another kind of market research, to see how someone other than Dick Krueger and I leads focus groups, and to hear a fascinating discussion among (mainly) real Clinton supporters debating how to approach the general election.

-- Perry
