Friday, February 27, 2015

The Surprised Loser

In May I'm scheduled to participate in an international panel of scholars who will discuss electoral expectations. My own part of the panel is on surprised losers. In fact, as I look at it, I'm up first. Oh great. Anyway, I'm cranking some initial data for the presentation and thought I'd share a little of it here for my tens of readers worldwide.

My presentation will:
  1. Briefly discuss how previous studies establish that electoral losers are more negative about government and elections than are electoral winners. Democracy rests on the consent of the losers.
  2. Argue that losers can be divided into two types -- those who expected to lose, and those surprised by the loss.
  3. Further argue that surprised losers may explain the winner-loser differences seen in previous studies. In other words, surprised losers are more pissed by the election results, and thus more negative.
  4. Analyze data from 1952 to 2012 to examine this point.
  5. Provide a deeper analysis of the 2004 and 2012 elections in which incumbents ran for re-election, one from each party.
  6. Look briefly at the news media's role in all this.
  7. Leave and go to the beach (conference is in San Juan, Puerto Rico).
 Okay, so a few data points for your enjoyment. If we pool all the data from 1952 to 2012, we find that:
  • Of those before an election who said they would vote Democratic, 74.5 percent expected a Democrat to win.
  • Of those before an election who said they would vote Republican, 78.4 percent expected a Republican to win.
So it's safe to say people expect their own candidate to win, some years more so than others.  The graph below shows you, over time, the percentage of surprised losers, expected losers, and winners. In close election years, like 1960 or 2000, there are a lot more surprised losers in proportion to expected losers. In runaway election years, the result is more obvious and, therefore, fewer are surprised. Also, note the trend in the last few elections appears to be for more surprised folks. That's interesting, and a good hook.
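The winner/expected-loser/surprised-loser split above boils down to a simple classification rule. Here's a minimal sketch, assuming each respondent record carries a pre-election vote intention, the party the respondent expected to win, and the actual winner (the field layout is my own illustration, not the actual survey codebook):

```python
def classify(vote: str, expected: str, actual: str) -> str:
    """Sort a respondent into winner, expected loser, or surprised loser."""
    if vote == actual:
        return "winner"
    if expected == actual:
        return "expected loser"   # voted for the loser, saw the loss coming
    return "surprised loser"      # voted for the loser, expected a win

# A few toy respondents from a year a Republican won:
respondents = [
    ("D", "D", "R"),  # Democratic voter who expected a Democratic win
    ("D", "R", "R"),  # Democratic voter who expected to lose
    ("R", "R", "R"),  # Republican voter on the winning side
]
counts = {}
for vote, expected, actual in respondents:
    group = classify(vote, expected, actual)
    counts[group] = counts.get(group, 0) + 1
print(counts)  # {'surprised loser': 1, 'expected loser': 1, 'winner': 1}
```

Run that over every respondent in a given election year and you get the three proportions plotted in the graph.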

On another day I'll continue this, breaking down whether surprised losers indeed differ from expected losers in terms of trust in government, the fairness of the election, and trust in democracy. Stay tuned.

Polls and the Factually Challenged

The Republican presidential race is the interesting one. I got to wondering how well a candidate was doing in the polls compared to how factually challenged that candidate happens to be. So I took data from PolitiFact's "personality" section and counted up the total number of evaluations and calculated two percentage scores for the key GOP candidates. These scores were:
  • Percentage False -- essentially, the percent of all evaluations that were judged "Mostly False," "False," or the ever-popular "Pants on Fire."
  • Pants on Fire -- the percent of all evaluations that were judged as the worst, the "Pants on Fire."
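Computing those two scores is straightforward. A quick sketch, using made-up rating counts rather than any candidate's real PolitiFact tallies:

```python
# Which PolitiFact ratings count as "false" for the first score.
FALSE_SET = {"Mostly False", "False", "Pants on Fire"}

def scores(counts: dict) -> tuple:
    """Return (percent false, percent pants-on-fire) from rating counts."""
    total = sum(counts.values())
    pct_false = 100 * sum(v for k, v in counts.items() if k in FALSE_SET) / total
    pct_pants = 100 * counts.get("Pants on Fire", 0) / total
    return round(pct_false, 1), round(pct_pants, 1)

# Illustrative counts for a hypothetical candidate (15 evaluations total):
example = {"True": 2, "Mostly True": 3, "Half True": 4,
           "Mostly False": 3, "False": 2, "Pants on Fire": 1}
print(scores(example))  # (40.0, 6.7)
```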
I compared these to recent polls conducted in Iowa, New Hampshire, and South Carolina (links to the polls here). After all, it's useful to know what states prefer the more factually challenged and, as a consequence, which should be voted off democracy island.

Before we get to the results, a coupla caveats. Two possible candidates, Ben Carson and Donald Trump, were excluded. Carson I excluded because he's never had any statements evaluated by PolitiFact, making it hard to score him. Trump I excluded because he's a clown, but also because his name wasn't used in any polls. But mostly because he's a clown. And as a final caveat, these statements were judged in large part because they were so out there, so it's not really a measure of a candidate's honesty but more a measure of his or her likelihood to say stupid things.

Basic Results

Rick Perry had the most statements evaluated (158, as of 8 a.m. February 27, 2015), followed by Scott Walker (126). Lindsey Graham had the fewest (9). The table below shows the candidates, the percent of false statements, and the percent of "pants on fire" statements. As you can see, in terms of total falsity Ted Cruz holds a reasonably comfortable lead over Rick Santorum, and Scott Walker and Mike Huckabee are tied in their ability to say false things. Also Huckabee leads the GOP pack in terms of having his pants on fire, followed by Rick Perry. Everyone after that is in single digits.

Name            % False   % Pants on Fire
Ted Cruz          64.3         9.5
Rick Santorum     53.8         9.6
Scott Walker      48.4         7.9
Mike Huckabee     48.4        12.9
Rick Perry        46.8        11.4
Marco Rubio       38.6         2.4
Rand Paul         33.3         6.1
Chris Christie    31.5         7.6
Jeb Bush          27.3         4.5
Lindsey Graham    11.1         0.0

(As an aside, 69.2 percent of all Trump statements were some form of false, and he led by far in terms of percent of statements earning a "Pants on Fire" judgment: 30.8 percent ... wow.)

Polls and Falsity

So how does this compare to the individual state poll numbers? Not well. First, for the statistically inclined, some correlations -- basically a measure of how strong a relationship is, ranging from -1.0 (perfectly negative) to 1.0 (perfectly positive). For example, in Iowa, the correlation between the poll rankings and percent of false statements is a paltry .04, which is damned close to zero. Luckily the percent of "pants on fire" judgments comes to the rescue, with a correlation of .30. What's that mean? The more likely a candidate was to make really, really factually incorrect statements, the better he did in the Iowa poll. Huckabee drives this relationship, as he leads in "pants on fire" and polls in first place in Iowa.
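For the curious, a correlation like these takes only a few lines to compute. A sketch with invented numbers (not the actual poll or PolitiFact figures):

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: percent "pants on fire" vs. poll support for five candidates.
pants = [12.9, 11.4, 9.6, 7.9, 2.4]
polls = [20, 15, 10, 12, 8]
print(round(pearson_r(pants, polls), 2))  # 0.85
```

A positive value means the two lists rise and fall together; a negative value, like the South Carolina results below, means one goes up as the other goes down.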

We don't see much in New Hampshire in terms of relationships, so let's skip to South Carolina. In South Carolina, there is a -.73 relationship between percent of false statements and how well a candidate is doing in the polls, and a -.49 correlation on the "pants on fire" measure. In other words, the better a candidate did in the polls, the lower his percentage of factually wrong statements.
So, are South Carolinians less forgiving of the factually challenged? Perhaps. More likely it's the favorite son status of Lindsey Graham skewing the data. To test this, I excluded him from the analysis. The relationships remained negative, but not as strong (-.55 on all false, -.18 on "pants on fire").

The graphic below gives you a visual display.

Percentage False (x-axis) by Support in South Carolina (y-axis)

So what can we take from all this, other than I need a better hobby? The PolitiFact data seems to be a lousy (so far) predictor of popularity among early GOP voters. I frankly expected more. I figured the candidates more likely to toss red meat out to the early voters would be more popular, and that would be reflected in the "pants on fire" or false evaluations. Of course the PolitiFact data relies on which statements the PolitiFact staff choose to examine, and it's still very early in the primary season, so this is probably an analysis better done later in the year.

Thursday, February 26, 2015

Priming, Anyone?

So there's this meaningless story on a TV station site, which is really a story about how an expert says you shouldn't pay your brat for chores. And then they ask people: so, do you do the thing we just said sucks?

Are you surprised, then, that 71.43 percent said "no" and only 28.57 percent said "yes"?

And what the hell is up with statistics reported to the hundredth of a percentage point? Talk about false precision, given this is one of those BS, non-scientific polls that are really more about audience engagement than measuring opinion. My rant here is really about priming. You have a "financial expert" saying it's a bad thing, then you ask whether people are doing that bad thing. Duh. Of course more are gonna say "no" than "yes." You've primed them to do so. Cooked the data. Skewed the results.

Finally -- and this is the best part -- that 71.43 percent? Based on five of seven respondents. Lemme say that again ... based on seven total respondents.
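If you want to see exactly where those suspiciously precise numbers come from, here's the arithmetic behind them:

```python
# Five "no" votes and two "yes" votes out of seven total respondents,
# reported to two decimal places -- the very picture of false precision.
print(round(5 / 7 * 100, 2))  # 71.43
print(round(2 / 7 * 100, 2))  # 28.57
```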

Okay, now eight total respondents. I voted twice. Vote early and often, they say, usually in Chicago, or on "news" sites.

Tuesday, February 24, 2015

How To Report a SLOP

I hate SLOPs, as regular readers know -- those self-selected opinion polls. But sometimes I come across someone who, while using such a device, actually does it right. Here's an example, and below I include the opening grafs.
Business Journal readers have given new Oregon Gov. Kate Brown a little bit of support as she completes her first week on the job.

However, more respondents to a non-scientific poll asking whether Brown "will be good for Oregon" either don't think she'll do right by the state or simply don't know much about her.

See how right off in the lede "...readers have given..."? That's good. Makes it clear that we're not talking about a random sample generalizable to the public at large. No, it's readers. And in the second graf, even more important, is "...a non-scientific poll...". Again, good. Makes it clear this is not a real poll.

So, while I hate SLOPs, points here for making it clear to the reader that this isn't a traditional, scientific poll. Now if only other journalists would get the message.  

Monday, February 23, 2015

Fox vs Deer

The real political battle in Georgia, the one that really matters, is whether the gray fox or the whitetail deer should be named the state's official mammal.

As this story explains, some kids at a school realized Georgia was one of the few states without an official mammal. Now you'd think maybe humans could be the official state mammal, but that's not the direction the kids took. They went fox, but the state leaned on them to go with deer instead.

In an "only in Georgia" moment, we have this:

First off, only in Georgia would an elected official think being the state (fill in the blank, such as official state reptile, or legislator) provides some legal protection. Sheesh.

So, isn't the fox superior? It certainly is if you check out how often "fox" characters appear in film and literature (scroll down here to see "fox" in popular culture). Other than Bambi, what cool deer characters come immediately to mind?  And here's another vote against deer -- they're apparently popular in hipster culture? Big. Vote. Heck, even Disney recognizes the superiority of foxes. One got to be Robin Hood (below).

But let's explore the "nuisance" argument. Deer certainly win. I've hit one deer in Georgia. They're like really large rodents that can destroy cars. By making them the official mammal, we're declaring war on all those late-night drivers who end up with Bambi splattered across their grill.

The only mammal more annoying than deer? People. I would put legislators, but there's some debate about whether they're mammals.

Thursday, February 19, 2015

UGA Student

It's an interesting question whether a UGA student died of bacterial meningitis, as officials (including the University president) and the local hospital have said. I'm guessing not. But I'm a bit baffled now.

Tuesday, February 17, 2015


It's an old complaint of mine -- SLOPs.

That stands for self-selected opinion polls. They're entertaining, but they're bogus. And the AJC has one up and running for your journalistic enjoyment. I saw it via Twitter.
To be fair, this is a blog, not a news story, so I'm not gonna complain too much. The choices are:
  • Shorter wait times
  • Routes closer to my home
  • Extended hours of service
  • More transit cops on board
  • Better connections to trains
You can only vote once, which is good. I voted for routes closer to my home because, well, I live in Athens. Fairly sure MARTA does not have a route close to my home.

Oh, and on Twitter there were a few funny responses to the question of what MARTA can do. My favorite? "Not smell like pee."