
Saturday, May 16, 2009

Keynote talks


Thursday night's American Association for Public Opinion Research keynote speakers were both interesting. Nielsen research head honcho Paul Donato saw survey researchers going back to the future and then heading in a new direction. Donato described how in-person surveys gave way to phone surveys, which – because of costs and then missed coverage of cell-phone-only households – gave way to Internet surveys, which now also seem imperiled because of rather extreme duplication (the same people on multiple "Internet panels" – especially apparent "professional panelists") and missed coverage (the "digital divide"). Donato said we may be going back to in-person surveys (he may have mentioned regular mail surveys in here somewhere).

But Donato also highlighted new digital research strategies (which he called "listening," as opposed to "asking," which is what he said surveys do). Nielsen (the company that historically monitors household TV viewing) has apparently started tracking and "coding" (categorizing and counting) text in Internet chat rooms and on blogs and is assessing volume and content. He also linked viewing of CNBC business reports with consumer spending two weeks later: if a lot of people are watching CNBC, they're worried about the economy – and that means they'll spend less two weeks later. Another fascinating example Donato gave was what he called "electronic ethnography" – sending subjects cell phones, texting them every hour during the day to ask what they're doing, and asking them to take pictures of what's in their refrigerators and cupboards every day. (Yet another example was linking social networking Web sites with Geographic Information Systems – so friends could monitor their friends' whereabouts, as on Yahoo mapping – while researchers monitored both what friends were saying and doing and where they were.)



Ken Prewitt, director of the Census Bureau during the Clinton Administration, not so implicitly criticized Donato for focusing on commercial issues and for not talking about areas in which universal coverage is important. For democracy and fair policy-making to work, the United States needs something like the census that covers everyone. Prewitt battled the Republican Congress back in the late 1990s about whether Census 2000 would employ sampling – and lost. (About this, Prewitt quipped: When you look out the window to see if it's raining, do you insist on looking out all of the windows before deciding to go out with an umbrella?) Right off the top, Prewitt suggested that, because of concerns about the census and immigration control, and without sampling, the census is likely to miss up to half of Latinos. And when so much of government resources (from Congressional seats to Community Development Block Grant funds) is allocated on the basis of census information, this is fundamentally unfair. Ultimately – since sampling seems unlikely to win out now, either – the government may have to abandon the household as the unit of census enumeration (partly because cell phones and e-mail addresses aren't intrinsically linked with geographically based households, and partly because household surveys – in-person, phone, or regular mail – are so expensive) and may need to turn more to administrative records (gathered for some other purpose, like Social Security records, which at this point aren't very accurate except for the information that is key to the program they're gathered for) and perhaps to the kind of digital information that Donato described.

-- Perry


Thursday, May 14, 2009

Question wording


The second paid workshop I went to this week at the public opinion research conference overlapped in content with one of the short courses I took three years ago in Ann Arbor. The topic for both: drafting good survey questions. A University of Wisconsin professor whom I'd heard of (Nina Schaeffer) led this one. She whipped through what sometimes seemed like a semester of material quickly, but this topic comes a little more naturally to me and I kept up. The course was geared more toward person-to-person interview surveys (on the phone or in person) than "self-administered" surveys (paper-and-pencil or Web-based – like what we do in Research Services), but many things apply across both, and others of us in the workshop sometimes asked questions about self-administered surveys.

In general, Schaeffer suggested spelling things out – writing more detailed questions and more questions. Put information important to answering a question – definitions, time periods – BEFORE the actual question, she said (and don't capitalize or underline key words – boldface them). After people hear what they think is the question, they start formulating a response and don't necessarily listen to the rest. This may also apply to self-administered questionnaires: people may just not read the extra text.

Partly with that in mind, Schaeffer recommended against something we occasionally do. Instead of including a conventional stand-alone screening question – Have you ever visited another country? – before asking which countries, we sometimes ask Which foreign country or countries have you visited? and include a parenthetical phrase – before the response options – giving people a chance to check a box if they haven't ever left the United States. We do this because it takes up less space and helps respondents avoid skip fatigue (getting tired of being asked screening questions and being skipped around the survey through sometimes complex skip instructions).

(In Web surveys, we can embed the skips into the actual program so respondents don't have to think much about it. If they say No – they haven't been to another country – on a separate screening question, we can program the Web survey so they never see the Which country? question at all – something like the sketch below.)
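To make that concrete, here's a minimal sketch – in Python – of the kind of conditional skip logic a Web survey program embeds. The question wording, function name, and return structure are hypothetical illustrations, not the actual setup of any survey we run.

```python
# Hypothetical sketch of Web-survey skip logic: respondents who answer "No"
# to the screening question never see the follow-up at all.

def administer_travel_questions(ask):
    """`ask` is assumed to be a function that shows a question and returns the answer."""
    visited_abroad = ask("Have you ever visited a country outside the United States? (Yes/No) ")
    if visited_abroad.strip().lower() == "yes":
        # Only respondents who screen in are shown the follow-up question.
        countries = ask("Which countries have you visited? ")
        return {"visited_abroad": True, "countries": countries}
    # "No" respondents skip straight past the follow-up.
    return {"visited_abroad": False, "countries": None}

if __name__ == "__main__":
    print(administer_travel_questions(input))
```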

Don't do the parenthetical phrase check box, Schaeffer said, because many respondents won't look at the phrase – and if people haven't checked it, we don't know whether they have in fact been to other countries or whether they just didn't look at the phrase. (Respondents are more likely to look at stand-alone questions, Schaeffer implied.)

In some cases, however, Schaeffer advocated leaving out screening questions and building them into a second question. For example: instead of asking Have you ever visited a country outside of the United States? and then How many countries?, simply ask How many countries outside the United States have you visited, ever? and make sure respondents know they should write in zero (0) if they've never been abroad. Schaeffer suggested ordering these kinds of response options with None or Never first, then the fill-in-the-blank numbers. She suggested in general avoiding frequency and rate questions and, when possible, avoiding fixed response options with numbers grouped – try to get people to fill in the exact numbers. Instead of asking how often in a regular week, ask How many times in the past seven days, starting on Sunday, May 2, and ending on Saturday, May 8, did you do the following?

If you have to, she said, you can group numbers (if, for example, you think there's no way people will remember exact numbers) or even use words describing frequency.

Schaeffer said research suggests that it's important to order "Yes" and "No" options in that order. She said that for other types of questions, going in reverse order – especially against social desirability – helps counteract the tendency of people to answer positively and to check the first response or two. So, for example, in a question about activity – if one didn't switch to a fill-in-the-blank number question – she suggested offering Not at all active, A little active, Somewhat active, Very active, and Extremely active, in that order. Researchers have tested all of these options to make sure they don't overlap too much. Very just doesn't seem intense enough to people to top off a response list with.

Generally, Schaeffer said, never have "True" as part of a question with lots of responses. She also said: avoid questions about agreement unless it's clearly a should question – your opinion about a policy issue, for example.

Also, mainly write "unipolar" questions – Not at all satisfied, A little satisfied, Somewhat satisfied, Very satisfied, Extremely satisfied – instead of "bipolar" questions – Very dissatisfied, Somewhat dissatisfied, In between, Somewhat satisfied, Very satisfied (although she didn't entirely ban these). In general, label the response options for all such questions with words. Don't use numeric scales – like "1" through "7" – partly because the numbers can mean different things to different people. Use five to seven word-labeled options with unipolar questions, and five or seven response options – always with an option in the middle – with bipolar questions. Usually, with bipolar questions, make the options "symmetrical" – "strongly" or "somewhat" agree, an option in the middle, and then "somewhat" or "strongly" disagree.

Schaeffer debated with herself about whether to put "in the middle" options literally in the middle or to leave them as the last option. She argued that in everyday speech people sometimes ask each other: do you like this, or dislike it, or have mixed feelings? She also discussed different wordings for such options. She apparently preferred "Mixed feelings," "In the middle," or "Not applicable" to "No opinion" or "Don't know" (the ones we use most). She argued that our favorites were too general or vague.

Schaeffer also agreed with pollsters, such as those from Gallup, who push people to express opinions. In live interview surveys, she suggested that interviewers not present No opinion-type options but record them if people volunteer No opinion responses. After hedging some, she said this isn't really possible with self-administered surveys: Web survey participants can't write things in, and it's messy when respondents to paper-and-pencil surveys do so.

She and I talked about this more after the session. Our office has long held a different view from the one Gallup and apparently Schaeffer hold. We believe that respondents don't already have opinions on all issues, and if they don't really have opinions, trying to force them to state one would produce faulty data. I gave her the example of the proposal to set up a "cap and trade" system (from the Presbyterian Panel environmental issues survey that's about to go to print), in which the government would limit how much carbon emissions companies could release but would also help set up a market for permits to release a certain amount of carbon emissions. If a company really wanted to release a lot of carbon emissions, it could buy permits from companies that weren't going to emit much carbon. In an actual question I helped write about this, we don't even use the cap and trade lingo; I tried to explain the proposal (or piggy-backed off of language from a staff person in the Presbyterian Washington office).

But in the real world, said Schaeffer, people all the time get – and sometimes take – opportunities to express opinions about things like the cap and trade proposal, even if they know very little or nothing about it and can only guess what it's about or vaguely connect it to something they know a little about and/or care about. Like maybe it has something to do with trading baseball caps. In this case, pushing people to express an opinion without giving them any more information is in fact more realistic, she said. Extreme examples of this are when survey researchers have asked people about fictitious proposals or about measures so obscure – the Trade Adjustment Act is one she cited – that hardly any respondents really know what they are. What some survey researchers then do – after asking whether people support or oppose cap and trade or Trade Adjustment – is spend a whole survey filling in bits of detail and arguments and counterarguments and seeing how respondents reply to those different elements. Schaeffer said what I was really pushing for was for people not to report uninformed (though possibly strong) opinions. Most people have at least some kind of vague opinion about most things. By including a No opinion or Don't know category, we are encouraging people to assess how strong and how well informed their opinion is and, if it doesn't seem that strong, to report it as "No opinion."

Schaeffer and I probably still disagree about this. I hope to try many of the things she suggested, but am not sure I want to give up on No opinion options and on starting with the negative first on scale questions.

Two other points: Schaeffer gave us a thick handout with the slides for her PowerPoint presentation, including many slides she didn't have time to talk about. Several weeks ago she also e-mailed us a bunch of references. She said that in the semester-long version of the course she spends a lot of time talking about the research findings that support her recommendations. The 2½-hour version of the course is practical – it focuses on suggestions and guidelines (by the way, Schaeffer also says to get rid of abbreviations and symbols – including e.g. and slashes – in survey questions) without supplying all of the supporting research findings. When asked, she sometimes referred to specific studies.

A session I went to this afternoon underlined some of my unease about eliminating the No opinion response option. A study of presidential election polling during the couple of months before the November general election – and in the period before the New Hampshire primaries – showed the problems with not allowing respondents to say No opinion, not including it as a stated response category, or not reporting No opinion responses to the public or press. Although by immediately before the general election most of the polls converged and relatively accurately predicted Senator Obama's margin of victory over Senator McCain, in September and October the polls disagreed a lot. And in the months before the primaries, pollsters reported – as it turns out, quite incorrectly – that Mayor Giuliani and Senator Clinton had consolidated leads and would probably win the New Hampshire primaries (Clinton did win; Giuliani fared very poorly). This second example throws the No opinion issue into clear relief. Standard pre-election polls ask people who they would vote for if the election were held today – but the election was months off. If you looked at the data closely, something like 75 percent of the Republican electorate was actually undecided. So when the pollsters reported Giuliani as the frontrunner, they weren't explaining that in fact only a small minority of New Hampshire Republicans supported him. Somewhat ditto during fall 2008: the polls varied so much until the last few weeks before the election, this researcher argued, because many voters were in fact undecided but the survey interviewers pushed them to say who they would vote for. In this situation, it shouldn't be a surprise that – when survey respondents were replying somewhat randomly because they didn't really yet have an opinion – people would tell different pollsters different things at different times. Aggressively discouraging people from responding No opinion – and not reporting No opinion responses – can produce very misleading results that don't accurately represent actual opinion.

-- Perry

Wednesday, May 13, 2009

Nonresponse


People who respond to surveys and people who don't are sometimes very different, including sometimes on the key kinds of information researchers are studying. Let's say you're studying people's opinions about gun control. People for whom this issue is very important – say, National Rifle Association members (let's say they're strongly against gun control) and relatives of gun violence victims (let's say they're strongly for gun control) – may be much more likely to respond to questions about gun control. Because of this, surveys ostensibly of samples of the whole U.S. population may artificially suggest the public is polarized on this issue. There may, in fact, be lots of people who don't have strong feelings for or against gun control. It may be hard to notice this if the response rates among NRA members and gun violence victims' relatives are much higher than response rates among other people.

But how to figure this out, and what to do about it? The first paid workshop I went to at the American Association for Public Opinion Research conference on Hollywood Beach (FL) this week dealt with this.

One of the key would-be presenters skipped out because the Obama Administration had just appointed him to direct the U.S. Census Bureau. A University of Nebraska professor substituted for him, joining a researcher from a private research firm.

A key point Michael Brick and Kristen Olson made was that many survey researchers focus on response rates (the percentage of people asked to respond to a survey who actually did so) as a proxy for error due to nonresponse – for nonresponse bias. But they cited a 2008 study showing that studies with all different kinds of response rates can suffer from large nonresponse bias. Even surveys with response rates of 60, 70, 80, or 90 percent may have respondents and nonrespondents who are very different, even on the key variables the researchers are interested in. Conversely, surveys with response rates of 10, 20, or 30 percent may involve respondents and nonrespondents who are not very different from each other on the relevant key variables. In general, this study showed that response rates (or, inversely, nonresponse rates) are a very imperfect indicator of nonresponse bias.


A strategy for initially assessing nonresponse bias that one of my colleagues (Jack) often suggests is comparing results for a key variable on one survey with results from another, maybe even better, survey. The workshop leaders suggested comparing results on – for example – age from a general survey of the U.S. population one might be working on with, let's say, the age distribution that comes out of the Census Bureau's American Community Survey. In my office we can compare results for surveys of Presbyterian congregations (as answered by their leaders) with Office of the General Assembly data (which actually comes from another survey of congregations, the Session Annual Statistical Report).

I was recently looking at responses to yet another survey of Presbyterian congregations (this one from last year) and comparing responses on some similar questions on another survey, this one from 2000. I was interested in change, but I was a little suspicious of some of the changes that showed up (partly because the implications seemed so bleak: more financial problems now, fewer volunteers, fewer staff, weaker vision for the future, etc.). I wondered if too many of the responses to the current survey came from small, struggling Presbyterian congregations. I went to OGA/SASR data and found that the median average worship attendance for the congregations surveyed was pretty similar to that for all PC(USA) congregations (around 70 worshipers on Sunday). That surprised me. What may have exaggerated 2000-2008 changes, however, was that the median worship attendance for congregations whose leaders responded to the 2000 survey was significantly larger than median attendance for Presbyterian congregations in general at that time. Median worship attendance has decreased a little, but not as much as you’d think from just looking at these two surveys.

There are at least two possibilities (and it could be both). The possibility that the initial sample for the 2000 survey (around 700 congregations) was less representative (in terms of attendance size) than the smaller sample for the 2008 survey (200 congregations) seems intuitively implausible because of the size difference (though it could be a factor). More plausible is differential response by congregation size leading to response bias. With the 2000 survey in particular, leaders of a smaller percentage of small congregations responded to the survey. If smaller congregations are struggling more, this gave an exaggerated picture of how financially secure and how loaded with paid staff and volunteers Presbyterian congregations were in 2000. In turn, it exaggerated the change in financial security and people resources among Presbyterian congregations between 2000 and 2008. For whatever reason, this response bias may not have occurred with the smaller 2008 sample, yielding a more accurate picture now (which still isn't that pretty).

If further analysis confirmed that response bias was at work, we might adjust the 2000 survey results by weighting – counting disproportionately heavily the responses of leaders of smaller congregations that DID respond and counting less heavily the responses of leaders of larger congregations. We could also recalculate the response rate – in other words, weight the response rate by size – to account for how much response rates varied among congregations of different sizes, rather than reporting a general, unweighted response rate.
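As a rough illustration, here is what that kind of size-class weighting could look like. The size classes, population shares, and respondent counts below are invented for the sketch; the real figures would come from the OGA/SASR data.

```python
# Invented figures illustrating weighting by congregation size class.

# Share of all PC(USA) congregations in each size class (hypothetical).
population_share = {"small": 0.50, "medium": 0.35, "large": 0.15}

# Responding congregations in each size class (also hypothetical).
respondents = {"small": 180, "medium": 220, "large": 120}
total_respondents = sum(respondents.values())

# A class's weight is its population share divided by its share of respondents:
# underrepresented small congregations get weights above 1, overrepresented
# large congregations get weights below 1.
weights = {
    size: population_share[size] / (respondents[size] / total_respondents)
    for size in respondents
}

for size, weight in weights.items():
    print(f"{size}: weight = {weight:.2f}")
```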

The workshop leaders also suggested finding out more about the sampled cases to try to learn more about response bias. One strategy is to try to match survey respondents with other information we have about them. We actually already did this with the 2000 and 2008 survey data. Instead of using the Sunday worship attendance figures the respondents reported in the survey, my colleague Ida and I matched the survey data to the OGA/SASR data and used the average Sunday worship attendance for those sampled congregations from the SASR survey. Using these data, we could then compare survey response rates among sampled congregations of different sizes – to assess the theory I laid out above that, among sampled 2000 congregations, leaders of small congregations responded less often than leaders of large ones.
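In code, that matching-and-comparing step might look something like the sketch below. The file names, column names, and attendance cutoffs are hypothetical stand-ins for however the actual sample and SASR files are laid out.

```python
# Sketch: attach SASR worship attendance to every sampled congregation and
# compare response rates across size classes. File and column names are hypothetical.
import pandas as pd

sampled = pd.read_csv("sampled_congregations_2000.csv")  # congregation_id, responded (0/1), ...
sasr = pd.read_csv("sasr_attendance.csv")                # congregation_id, avg_attendance

merged = sampled.merge(
    sasr[["congregation_id", "avg_attendance"]], on="congregation_id", how="left"
)

# Rough size classes based on SASR average worship attendance.
merged["size_class"] = pd.cut(
    merged["avg_attendance"],
    bins=[0, 70, 200, float("inf")],
    labels=["small", "medium", "large"],
)

# Did leaders of small congregations respond at lower rates?
print(merged.groupby("size_class", observed=True)["responded"].mean())
```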

But there are also other sources of data about congregations. We could link to census data to see, for example, whether leaders of congregations from different regions of the country responded at different rates. If we were e-mailing invitations to people in a group to participate in a Web-based survey, we could assess whether people who apparently had personal e-mail accounts (judging, say, by e-mail addresses with domains suggesting subscriptions to popular Internet service providers like Google mail, America On-Line, Earthlink, Insight Communications, and so on) responded at different rates from people who appeared to have organizational e-mail addresses. If we were surveying people on the Presbyterian Panel or as part of our hymnal study, we would likely have information from previous surveys (including the Panel background survey) or short screening surveys (the hymnal study). In that case we could use information from those earlier surveys to assess, for example, whether women and men responded at different rates, or whether leaders of congregations that use the existing "Presbyterian Hymnal" and those that don't respond at different rates.

Again, there’s always the chance to re-weight the results and the response rates to try to counteract apparent biases in the results due to differential nonresponse to surveys.

The workshop leaders also talked about extraordinary efforts to try to persuade nonrespondents to participate in a survey. My colleague Jack helped get us a grant several years ago to try this with the Panel – by calling Presbyterians who weren't participating in the Panel. (A cheaper way to do this is to call a sample of nonrespondents – but then weighting is even more complex.) Using incentives – like money, sometimes sent with a survey (and offered to would-be respondents as a sign of trust and an indicator of a survey's importance) – is another strategy. (I once got a $5 bill in an envelope with a blank survey from the magazine "Entertainment Weekly.") Another, similar strategy we've talked about using is sending reply envelopes with real stamps instead of business reply envelopes, for which the U.S. Postal Service charges us only if people send the survey back. As with the cash incentives, placing a real stamp on the return envelope is a sign of trust that the would-be respondent will indeed reply. It also makes the process look more official and professional. And our would-be respondents might feel guilty if they've cost the church another stamp and haven't completed and returned the survey.

But in general even the reminder e-mail messages, post cards, and letters with duplicate surveys that we routinely send out are a form of "extraordinary effort." So we might compare the responses on key questions of respondents who replied before reminders went out with those of respondents who replied only after receiving reminders – or the responses of those who replied after final reminders with responses by those who replied earlier. We might then extrapolate that responses by outright nonrespondents would be similar to those of late respondents and adjust our reported results accordingly. In practice, this would amount to weighting the responses of late respondents more heavily (maybe a lot more heavily if the overall response rate was low).
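Here is a sketch of that early-versus-late comparison and the crude adjustment it suggests. The file name, column names, key item, and response rate below are all made up for illustration.

```python
# Compare a key result for early vs. late respondents, then treat nonrespondents
# as if they would have answered like late respondents. All figures are invented.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # hypothetical: includes "wave" ("early"/"late")

key_item = "supports_proposal"                   # hypothetical 1/0 item
by_wave = responses.groupby("wave")[key_item].mean()
print(by_wave)

response_rate = 0.40                             # made-up overall response rate
observed_mean = responses[key_item].mean()
late_mean = by_wave["late"]

# Blend: respondents keep their observed mean; nonrespondents are assumed to
# look like late respondents.
adjusted = response_rate * observed_mean + (1 - response_rate) * late_mean
print(f"observed = {observed_mean:.3f}, adjusted = {adjusted:.3f}")
```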

In general, the workshop leaders said that nonresponse bias occurs with statistics, not with surveys as a whole. Although they did not talk much about item nonresponse (people who respond to surveys but skip some questions, as many people do), they said that even if people in different groups in your survey – let's say women and men – respond at very different rates, this alone does not produce response bias if people in these groups don't disagree or have different experiences on the key topics of the survey. Let's say 60 percent of women elders – but only 30 percent of men elders – in congregations of a presbytery responded to a survey. But let's say that, in the actual congregations, women and men didn't disagree at all about whether the presbytery should make a particular personnel or policy change. If the centerpiece of the survey is to assess attitudes about that proposed change, and women and men elders in congregations of that presbytery agree on it, the differential response rates by gender will not produce response bias that is substantively significant.

Although the workshop leaders urged survey researchers to keep response bias in mind – studying it and trying to counteract it – when designing surveys, they spent only a little time at the end talking about research studies that include experimental designs for assessing response bias. Let's say you propose a study that employs different survey "modes" – phone surveys, regular mailed printed surveys, Web surveys with e-mail invitations, door-to-door canvassing for in-person surveys. If it's already apparent that door-to-door surveys usually work best, for example, most clients aren't going to say: let's do this four different ways, even though I know we'd get the best results if we did them all in person; I'm willing to have you consign three-quarters of our would-be respondents to survey methods that will persuade fewer of them to participate, for the sake of generating more research about nonresponse and improving future surveys. Most clients won't go for that. (Keep in mind that there is some evidence that people are less truthful about sensitive topics – urinary incontinence was a topic of one study the workshop leaders covered – in person than they are in more anonymous modes, like printed or Web surveys.) Nevertheless, a few researchers have been able to do these kinds of experimental studies.



One key formula from today: base weights should be the inverse of probabilities of selection. So, for example, if you stratify a sample and for whatever reason you sample 20 percent of men and 50 percent of women, the base weight for responses by men should be 5 (1 divided by 1/5) and for women should be 2 (1 divided by 1/2).
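The arithmetic from that example, written out (the 20 and 50 percent sampling fractions are the ones from the workshop example):

```python
# Base weight = 1 / probability of selection.
selection_probability = {"men": 0.20, "women": 0.50}

base_weights = {group: 1 / p for group, p in selection_probability.items()}
print(base_weights)  # {'men': 5.0, 'women': 2.0}
```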

Another formula: an estimate of nonresponse error is equal to the nonresponse rate multiplied by the difference between the mean response of the respondents and the mean response of the nonrespondents. The trick is to gauge that latter mean. How much nonresponse error is too much? Too much, the workshop leaders said, is when the nonresponse error exceeds 10 percent of the sampling error. (Sampling error – another source of error – depends on the sample size, the distribution of responses to a key question, and the confidence with which you want to say that responses by a sample represent responses by all in a population.)
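And the same formula with some invented numbers plugged in, just to show the scale of the calculation (the means and rates here are hypothetical):

```python
# Nonresponse error ≈ nonresponse rate × (respondent mean − nonrespondent mean).
response_rate = 0.60
nonresponse_rate = 1 - response_rate

mean_respondents = 3.8      # hypothetical mean on a key item
mean_nonrespondents = 3.2   # hypothetical – in practice this is the hard part to gauge

nonresponse_error = nonresponse_rate * (mean_respondents - mean_nonrespondents)
print(f"estimated nonresponse error = {nonresponse_error:.2f}")  # 0.4 * 0.6 = 0.24
```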

On to the topic of questionnaire construction Thursday morning.

-- Perry

Thursday, April 30, 2009

On his way out


News reports suggest that U.S. Supreme Court Justice David Souter, whom President Bush ("Poppy") promised would be a conservative stalwart but who turned out to be the court's second most liberal justice on many issues, will step down in June. A New Hampshire native connected with then White House chief of staff John Sununu, Souter, a bachelor, faced questions – though rarely explicit ones – about his sexual identity. (In hindsight, Souter's nomination won Senate confirmation in part because as a lower-court judge, state court judge, and New Hampshire attorney general he had issued almost no controversial opinions.) Souter, only 69 (young by current Supreme Court standards), apparently never liked the Washington social scene. Souter cast one of the deciding votes in the landmark 1992 moderately pro-abortion rights "Planned Parenthood vs. Casey" decision – which he helped read from the bench and which, in hindsight, probably stopped some anti-abortion advances for good – and which helped frame my dissertation research.

(Although Justice Souter has been no friend of the rights of the accused, he has continued to vote liberally in other matters, siding with the losers in the election imbroglio case that made the son of the man who appointed him (George W. Bush) the president. Souter was apparently so disgusted with the clear political partisanship and lack of intellectual integrity of the "Bush vs. Gore" decision that he considered resigning. Souter is apparently not close personally with his more conservative Republican colleagues on the court, and he has not been particularly impressed with their intellectual rigor - and this may have influenced him in deciding when to retire.)

Many Supreme Court justices, who have lifetime appointments, time their resignations to maintain their legacy. Justice Souter's resignation is like that of Justice Harry Blackmun, author of the 1973 pro-abortion rights "Roe vs. Wade" decision, in that Blackmun (like Souter, a Republican) chose ideology over party and chose to resign soon after the election of a moderately liberal Democratic president (Bill Clinton) whose ideological leanings were closer to his own. (Justice Byron White, a diehard but sometimes conservative Democrat, opted for party over ideology, also waiting until President Clinton's election.) Souter apparently went for ideology, and – unless he faces problems from moderate Democrats in the Senate – President Obama will replace him with another moderate liberal (but more likely a Democrat, although picking a moderate Republican would be an interesting move for a president who tried – and failed – to appoint a third Republican/independent to his Cabinet).

-- Perry

Wednesday, January 28, 2009

Two meetings and a test


In October I gave a presentation at the annual Religious Research Association meeting – which this year happened to be in Louisville – participated in a monthly meeting of our church's board – called the session – with two guests from our presbytery (Mid-Kentucky), and took Vincent to what will no doubt turn out to be his final high-school-era standardized testing. Above – at the RRA meeting, at Louisville's historic Seelbach Hotel – is the presenter who preceded me, and Marty, a peer from the Lutheran research office. Below are people listening to this presentation.


I talked about research that my colleague Ida and I had worked on showing that the Presbyterian congregations most likely to have female pastors were those that had had female pastors five years earlier, those that had turned over pastors in the previous years, and those with smaller memberships. Other theories about which congregations might be more likely to have female pastors – rural congregations, liberal congregations, etc. – did not pan out. Pictured below is a presentation that followed ours.



Our session usually meets in the wonderful Fireside Room of our church's Fellowship Hall building. Below are – left to right – our pastor, Jane; Evelyn, whom I've worked with, who is on the Mid-KY Presbytery Committee on Ministry and prays for our church every day; and Jeff, who has chaired the Outreach Council – which I'm also part of – for the past two years.


Pictured below are session members Laura, Anita, Rachel, Ben, Elaine, and Ted (plus student pastor Carlos, to Ted's left).



Unlike FL, KY is an ACT state. Vincent did well on the mandatory ACT testing that KY paid for in the spring of his junior year (with a 27). You might recall that he took the SAT on his fourth day in Denmark and – jet-lagged and confused about the instructions – did not do so great. His teachers, and de facto we, pushed him to review, and his math score ended up going up by just 1 point – but two other section scores declined and he wound up with a 26. (He had correctly predicted that his score would not improve.) Vincent – who had trouble getting up after that South End party (see "Friday night out") – actually complained afterward that I had not signed him up for the ACT plus writing. His 27 would have been good enough – with decent grades – to get him into the Honors College of the college he supposedly wants to go to, Western KY University. But he seems unlikely to graduate from high school at this point – so that score won't help him much unless he graduates later. With a score like that, and whatever grades, no matter how bad, he could have gotten into a school like Western if he had just graduated. Vincent took the test at the nearby Catholic boys' school, Trinity, whose $5 million football stadium ends just two blocks from our house. He lingered long enough outside that I was afraid he wasn't going to go in. Friends from church – Rachel and Luke (both from the Guatemala trip) – arrived to take the ACT with writing.



Frisco and I walk past Trinity on one of our walking circuits, and we frequently cut through the Trinity campus – right by the pictured area, in fact – as a way to get from Shelbyville Road to Westport Road when the lights earlier on don't go our way.



Finally figuring Vincent was headed in to take the test (this was back when I was still trying to keep an eye on him), Frisco and I walked some more, then headed back home. And then it was on to the Guatemala Heine Brothers meeting (see "Ready, set, go!").
-- Perry