Tuesday, October 21, 2014

SEC East vs SEC West

Everyone knows the SEC West kicks football ass this year and, until the Georgia-Arkansas game, it owned a clean slate over the SEC East. (Go Dawgs). Yeah yeah, so much for football. But what about SAT scores?

I used the data from this site to do a comparison. I dumped it into an Excel file and sorted the schools into East and West. These are the lower-end numbers: the 25th percentile of SAT scores among admitted students. As you can see from the site, just eyeballing the data, the East schools seem to do better. If you sort the teams by division and average the scores, you get:

West: 500 Reading and 519 Math
East: 557 Reading and 569 Math

An advantage, for you math non-majors out there, of 57 points for the East in Reading, and 50 points in math. The differences are a bit less stark if we look at the 75th percentile, but they still favor the East by 43 points in Reading and 40 points in Math.

"But wait," you might say. "That's not fair. Vandy is in the East. They can actually read and write there."

Good point. So I excluded Vanderbilt and there's still an East advantage. Without Vandy, the East outscores the West on the 25th percentile Reading by 35 points and the 25th percentile Math by 26 points. At the 75th percentile level, the advantage to the East is 25 points for Reading, 21 points for Math.

So maybe the West is kicking ass in football this year, but it only takes a friggin 470 in Reading to be in the 25th percentile at Mississippi State, those other Bulldogs. The Reading numbers, 25th and 75th percentiles, are in the table below.



Team                  Reading 25th   Reading 75th
Alabama                    500            620
Arkansas                   500            610
Auburn                     530            630
LSU                        500            610
Ole Miss                   480            600
Texas A&M                  520            640
Mississippi State          470            610
Average West               500            617

Florida                    580            670
Georgia                    560            650
Kentucky                   490            610
Missouri                   510            640
South Carolina             540            640
Tennessee                  530            640
Vanderbilt                 690            770
Average East               557            660

East vs. West Diff          57             43

East Minus Vandy           535            642
Minus Vandy Diff            35             25
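
For the arithmetically curious, here's a minimal sketch, in Python, of how those division averages and differences fall out of the Reading numbers in the table above; it just averages each column by division and subtracts, once with Vanderbilt and once without.

```python
# Average the 25th and 75th percentile SAT Reading scores by division,
# then take East minus West. Numbers are the ones in the table above.

west = {
    "Alabama": (500, 620), "Arkansas": (500, 610), "Auburn": (530, 630),
    "LSU": (500, 610), "Ole Miss": (480, 600), "Texas A&M": (520, 640),
    "Mississippi State": (470, 610),
}
east = {
    "Florida": (580, 670), "Georgia": (560, 650), "Kentucky": (490, 610),
    "Missouri": (510, 640), "South Carolina": (540, 640),
    "Tennessee": (530, 640), "Vanderbilt": (690, 770),
}

def avg(scores, col):
    """Average one column (0 = 25th percentile, 1 = 75th percentile)."""
    return sum(s[col] for s in scores.values()) / len(scores)

for col, label in [(0, "25th pct"), (1, "75th pct")]:
    w, e = avg(west, col), avg(east, col)
    print(f"{label}: West {w:.0f}, East {e:.0f}, diff {e - w:.0f}")

# Same comparison with Vanderbilt excluded from the East.
east_no_vandy = {k: v for k, v in east.items() if k != "Vanderbilt"}
for col, label in [(0, "25th pct"), (1, "75th pct")]:
    w, e = avg(west, col), avg(east_no_vandy, col)
    print(f"{label} (no Vandy): West {w:.0f}, East {e:.0f}, diff {e - w:.0f}")
```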


Monday, October 20, 2014

Methodology Matters

It's all in how you measure stuff. Yes, friends, methodology matters.

Take this list for example. In it, I'm happy to report, UGA's graduate journalism program is ranked #5 in the country. That's very cool. No doubt we'll plaster it on the web site, toss it out on Twitter, and buttonhole random strangers in the parking lot to tell 'em the news.

Okay, but what about our PR graduate program? Our PR program (don't tell them I said this) is very likely the best, or among the three best, in the country. By any measure. So how'd it do? Check out the PR list here, or just allow me to tell you it's not ranked. At all. It has:
  1. Georgetown
  2. Rowan College
  3. Mississippi College
  4. Florida A&M
  5. Miami (Fla.)
And so on through the top 15, a list that, if you know anything about the best PR programs, leaves you no choice but to say, WTF?

So we return to the question, how did they measure this? What's their methodology? Let's look at the fine print.
Graduateprograms.com reaches current and recent graduate students through scholarship entries as well as social media platforms. These program rankings cover a period from September 1, 2012 to September 30, 2014. Graduateprograms.com assigns 15 ranking categories to each graduate program at each graduate school. Rankings cover a variety of student topics, such as academic competitiveness, career support, financial aid, and quality of network.
Okay, we're clearly not talking a random, or even anywhere near random, sample. Scholarship entries? Social media?

What's interesting about the journalism list is there are no real surprises in it. It looks okay. Sure, it's missing Mizzou and Berkeley, but no odd programs pop up.

So what's happening here? My hunch is that a lot of students conflate "journalism" and "public relations," which may help explain why UGA's journalism program ranked higher than I might have expected, at least at the graduate level. My other hunch is that the reliance on scholarship entries may bias the sample toward smaller, hungrier programs. It's impossible to say, but as always take these rankings for what they're worth -- fun, interesting, and good if you happen to come out on top.



Friday, October 17, 2014

I Take Credit

I take credit for apparently having killed the bad polling practices of our student newscast. The j-students haven't posted one of their god-awful pseudo-polls on the Grady Newsource site since Oct. 8. They seemed to do one of these things every week. Until now.

I wrote at length about their reporting of bad, self-selected polls. You'll find my rather colorful language here, and then here. There are others, but you get the idea, and from them you can work your way back to a more technical explanation of why such SLOPs (self-selected listener opinion polls) suck and represent bad, misleading journalism. What's odd and a bit troubling, though, is that not a single j-prof who oversees the newscast, nor a single student who actually puts it out, came to talk to me. I did get a weird phone call, mentioned in one of the posts linked to earlier in this graph, but that's it. I am possibly the most up-to-date faculty member in the building when it comes to polling. Hell, I teach our graduate-level public opinion class. Plus I've taught classes in public opinion reporting.

If nothing else, perhaps I've killed this practice. I try to watch the newscast every day, plus I always check the site and, especially, follow Newsource's excellent Twitter feed. We'll see. Yes, Newsource, I've got my eye on you.

Thursday, October 16, 2014

What People Know ... about Ebola

Kaiser has a new poll out that includes asking what people know about Ebola. The graphic sums it up, and the results? Not comforting.


Also see the report's Table 1, which looks at a set of questions and the education level of respondents. As you'd expect, the greater the education, the more accurate the responses to health questions about Ebola.

Wanna Be a Department Head?

The University of Florida is seeking a chair of its journalism department. I'd usually not bother writing about this, but UF is my alma mater (masters and PhD, finishing in 1991), and thus it's a program I always watch. They ran this search a few years ago and Wayne Wanta, who has worked nearly everywhere, got the gig. He's stepping down, so a new search is under way. Here's an interesting bit of the job description:
A master’s degree is required for this 12-month position. 
Not a doctorate, just a master's, and "the successful applicant will (1) hold the rank of professor or meet the University of Florida’s criteria for full professor upon hire and (2) be eligible for tenure upon hire." In other words, either be a full professor or have a significant enough background to justify that rank. A bit unusual for a Research 1 university like UF, but not so unusual in a journalism department.

I could not for the life of me find the job description from a few years ago, so I can't say for certain whether it required a doctorate for the job back then. If it previously did require a doctorate (and all of the finalists for the job back then held one), then I wonder whether this job description is written with someone in mind, perhaps someone already in the department. I have no idea, and it's the dean* who ultimately decides these things. Coincidentally, she has a bachelor's degree -- from UF -- so it's hardly a surprise that she'd open up the search for a department head to include someone without a doctorate. The professional and academic fields are changing.



* in full disclosure, the present UF j-school dean,
Diane McFarlin, was my boss while I was a reporter
at a Florida newspaper.

Wednesday, October 15, 2014

Movement in Georgia U.S. Senate Race?

A post today at the AJC, based on a new poll, suggests there may be some movement toward Democrat Michelle Nunn against Republican David Perdue.

As they write (bold face by me):
Some caveats: The poll is a mixture of auto-dialing and online responses that showed Jack Kingston with a huge lead in the U.S. Senate primary runoff, but what matters here is the movement. A week ago the same poll had Perdue ahead by one percentage point.

I beg to, slightly, differ. It's dangerous to over-analyze "change" between two polls using the same error-prone methodology. Essentially, you have two robo-type polls with "change" in both instances lying within the margin of error. In other words, I'd argue there's not much change at all. They remain in a statistical tie.
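
To see why, here's a rough back-of-the-envelope sketch. The sample size is an assumption on my part for illustration (plug in whatever n the poll actually reports), and it treats the poll as a simple random sample, which a robo/online poll is not, so the real uncertainty is larger still.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95 percent margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1200            # assumed sample size, for illustration only
p = 0.47            # a candidate polling at 47 percent
moe = margin_of_error(p, n)
print(f"MoE: +/- {moe * 100:.1f} points")            # roughly +/- 2.8 points

# The margin of error on the week-to-week *change* between two independent
# polls is larger still, so a one-point "shift" is well inside the noise.
moe_diff = math.sqrt(2) * moe
print(f"MoE of week-to-week change: +/- {moe_diff * 100:.1f} points")
```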

But -- this is important -- while there isn't much change, that's no fun to write about. That doesn't sell papers, or get clicks, or draw viewers. So of course this gets more attention than it probably, mathematically, statistically, deserves.

You can see more of the poll, with crosstabs, here. You can see the earlier poll here. A few interesting differences emerge in the makeup of the two samples. For example, the previous poll had 30 percent black. The more recent poll has 27 percent black "likely voters" in its sample.

These polls robo-call landlines or serve a questionnaire to smartphones. No live interviews, no humans talking to humans (the gold standard of polling). That said, at least they're trying to reach people beyond landlines, though it's preferable to call cell phones with live interviewers, since it's illegal to robo-call cells.

Are there fundamentals that favor a Nunn upset? Not really. Still, it's possible that, for very technical reasons, many of the polling models are underestimating black turnout by a percentage point or three. The reasons are nerdy and PhDweeby, and I don't want to spend pixels explaining, but it has to do with Census weighting and the use of 2012 data to estimate 2014 turnout. Those teeny tiny percentage points, however, can make all the difference in the world in a close race. Would I bet on Nunn? Nope, not straight up, but if you give me 7 points, I'll take some of that action.
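
Here's a toy illustration of that point. The vote splits below are assumptions I'm making up purely for the arithmetic, not numbers from either poll; the only figures taken from the polls are the 27 and 30 percent black shares mentioned above. Shift the composition by three points and the topline moves by nearly two, which in a close race is everything.

```python
# Back-of-the-envelope: how the racial composition of a likely-voter sample
# moves the topline. Vote splits are assumptions for illustration only.

def topline(dem_share_by_group, group_weights):
    """Weighted Democratic share of the vote across demographic groups."""
    return sum(dem_share_by_group[g] * w for g, w in group_weights.items())

dem_share = {"black": 0.90, "other": 0.30}   # assumed splits, not poll data

for black_pct in (0.27, 0.30):
    weights = {"black": black_pct, "other": 1 - black_pct}
    print(f"black share {black_pct:.0%}: Dem topline {topline(dem_share, weights):.1%}")
```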





Tuesday, October 14, 2014

Who Will Win the Election?

I'm working on a longish paper, possibly to be presented in a few months, on my new favorite topic -- surprised losers in elections. These are folks who expected their candidate to win an election but whose candidate actually lost, and I'm interested in the consequences of being surprised by the outcome. In preparation, I'm looking at presidential elections from 1952 to 2012 and who people predicted would win each election.

Below, a sneak peek. The blue bars represent respondents who favored the Democratic candidate and predicted that candidate would win. The red bars do the same for respondents who preferred the Republican candidate. As you'd expect, in runaway elections the gap is wide (look at 1972, for example). In closer elections, both sides expect their preferred candidate to be victorious (2000 being a good case study). I've got a lot more to do with this rather large data set, but this gives you a hint of where I'm going. Plus I have to go back and validate the data some more to ensure nothing weird is going on, but just eyeballing it, all looks okay.
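
For the curious, the bar heights boil down to a simple grouped proportion. Here's a hypothetical sketch of that calculation; the column names and toy rows are made up for illustration and are not the actual survey data, which has its own variable codes.

```python
import pandas as pd

# Hypothetical data frame standing in for the real survey file.
df = pd.DataFrame({
    "year":        [1972, 1972, 1972, 2000, 2000, 2000],
    "vote_pref":   ["Dem", "Rep", "Dem", "Dem", "Rep", "Rep"],
    "pred_winner": ["Rep", "Rep", "Dem", "Dem", "Rep", "Rep"],
})

# Share of each party's supporters, by year, who predicted their own
# candidate would win -- the blue and red bars in the chart.
df["own_win"] = df["vote_pref"] == df["pred_winner"]
bars = df.groupby(["year", "vote_pref"])["own_win"].mean().unstack()
print(bars)
```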