How Many Puppy Votes: Breaking Down the Hugo Math

The dust is just beginning to settle on the 2015 Hugo nominations. Here’s the official Hugo announcement and list of finalists. If you’re completely in the dark, we had two interacting slates—one called Sad Puppies led by Brad Torgersen, another called Rabid Puppies led by Vox Day—that largely swept the 2015 Hugo nominations.

The internet has blown up with commentary on this issue. I’m not going to get into the politics behind the slates here; instead, I want to look at their impact. Remember, Chaos Horizon is an analytics website, not an editorial one. If you’re looking for more editorial content, this mega-post on File 770 contains plenty of opinions from both sides of this issue.

What I want to do here on Chaos Horizon today is look at the nominating stats. Using those, can we estimate how many Sad Puppy voters there were? How many Rabid Puppy voters?

For those who want to skip the analysis: my conclusion is that the total Puppy-influenced vote doubled from 2014 to 2015 (from 182 to somewhere in the 360 range), resulting in a maximum Puppy vote of around 360 and a minimum effective Puppy block of around 150 votes. We don’t yet have data that makes it possible to split out the Rabid/Sad effect.

Let’s start with some basic stats: there were 2,122 nominating ballots, up from 1,923 nominating ballots last year, making for a difference of (2,122-1,923) = 199 ballots. Given that Spokane isn’t as attractive a destination as London for WorldCon goers, what is the cause of that rise? Are those the new Puppy voters, Sad and Rabid combined?

If you take last year’s Sad Puppy total, you’d wind up with 184 for the max Puppy vote (that’s the number of voters who nominated Correia’s Warbound, the top Sad Puppy 2 vote-getter). If we add 199 to that, we’d get a temporary estimate of 383 for the max 2015 Puppy vote. We’ll find that this rough estimate is within spitting distance of my final conclusion.

Here’s a screenshot that’s been floating around on Twitter, showing the number of nominating votes per category. Normally, this wouldn’t help us much, because we couldn’t correlate the min and max votes to any specific items on the ballot. However, since the Puppies swept several categories, we can use these ranges to put minimum and maximum bounds on the total Puppy vote in the categories they swept. With me so far?

[Image: Hugo Nominating Stats 2015, from the Sasquan announcement]

The Puppies swept Best Novella, Best Novelette, Best Short Story, Best Related Work, Best Editor Long Form, and Best Editor Short Form. This means all the votes shown in these categories are Puppy votes. Let me add another wrinkle before we continue: at times, the Sad and Rabid voters were in competition, nominating different texts for their respective slates. I’ll get to that in a second.

So, if we were to look at the max vote in those six categories, we’d get a good idea of the “maximum Puppy impact” for 2015:
Novella: 338 high votes
Novelette: 267 high votes
Short Story: 230 high votes
Related Work: 273 high votes
Editor Short Form: 279 high votes
Editor Long Form: 368 high votes

Presumably, those 6 “high” vote-getters were works that appeared on both the Sad and Rabid slates. You see quite a bit of variation there; that’s consistent with how Sad Puppies worked last year. The most popular Puppy authors got more votes than the less popular authors. See my post here for data on that issue. Certain categories (novel, for instance) are also much more popular than other categories.

At the top end, though, the Editor Long Form grabbed 368 votes, which was within shouting distance of the Novella high vote of 338, and even very close to the Novel high vote of 387. I think we can safely conclude that’s the top end of the Puppy vote: 360 votes. I’m knocking a few off because not every vote for every text had to come from a Puppy influence. I’m going to label that the max Puppy vote, which represents the maximum possible reach of the combined Rabid and Sad Puppy vote.
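To make that arithmetic concrete, here’s a minimal Python sketch of the max-vote estimate. The high votes are copied from the list above and the ballot counts from earlier in the post; the small discount for incidental non-Puppy votes is my own rough assumption rather than anything the data pins down.

```python
# Rough upper bound on the combined Sad + Rabid Puppy vote, taken from the
# highest vote totals in the six Puppy-swept categories.
high_votes = {
    "Novella": 338,
    "Novelette": 267,
    "Short Story": 230,
    "Related Work": 273,
    "Editor Short Form": 279,
    "Editor Long Form": 368,
}

raw_max = max(high_votes.values())        # 368 (Editor Long Form)
discount = 8                              # assumed handful of non-Puppy votes
max_puppy_estimate = raw_max - discount   # ~360

# Cross-check against the back-of-the-envelope "new nominators" estimate:
warbound_2014 = 184                       # top Sad Puppy 2 vote-getter last year
new_ballots = 2122 - 1923                 # growth in nominating ballots = 199
rough_estimate = warbound_2014 + new_ballots

print(max_puppy_estimate, rough_estimate)  # 360 383
```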

Why was there such a drop between the 368 votes for Editor Long Form and the mere 230 votes for Short Story when both of these were Puppy-swept categories? Because not every Puppy voter was a straight slate voter: some used the slate as a guide, and only marked the texts they liked/found worthy/had read. Some Puppy voters appear to have skipped the Short Story category entirely. That’s exactly what we saw last year: a rapid falling off in the Puppy vote based on author and category popularity. This wasn’t as visible this year because the max vote was so much higher: even 50% of that 360 number was still enough to sweep categories.

Now, on to the Puppy “minimum.” This would represent the effective “block” nature of the Puppy vote: what were the lowest values they put forward when they swept a category? Remember, we know that the 5th-place work had to be a Puppy nominee because the category was swept.

Novella: 145 low vote
Novelette: 165 low vote
Short Story: 151 low vote
Related Work: 206 low vote
Editor Short Form: 162 low vote
Editor Long Form: 166 low vote

Aside from Related Work, that’s enormously consistent. There’s your effective block vote. I call this “effective” because the data we have can’t tell us whether this is 150 people voting in lock-step or 200 Puppies each agreeing with 75% of the slate. Either way, it doesn’t matter: the effect of the 2015 Puppy campaign was to produce a block vote of around 150 voters.

If that’s my conclusion, why was the Best Related Work minimum 206 votes? That’s the only category where the Rabid and Sad Puppies agreed 100% on their slates. Everywhere else, they split their vote. As such, that 206 reflects the combined block voting power of the Rabid and Sad Puppies, something that didn’t show up in the other 5 swept categories.
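Here’s the same kind of sketch for the minimum side. Dropping Related Work before averaging is the judgment call described above (it’s the one category where the two slates agreed completely), not something the released numbers force on us.

```python
# Lower bound ("effective block") from the lowest vote totals
# in the six Puppy-swept categories.
low_votes = {
    "Novella": 145,
    "Novelette": 165,
    "Short Story": 151,
    "Related Work": 206,   # outlier: the Sad and Rabid slates agreed completely here
    "Editor Short Form": 162,
    "Editor Long Form": 166,
}

# Exclude the Related Work outlier before estimating the typical block.
split_slate_lows = [v for k, v in low_votes.items() if k != "Related Work"]
effective_block = sum(split_slate_lows) / len(split_slate_lows)

print(round(effective_block))   # ~158, which the post rounds down to a block of ~150
```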

So, given the above data, here’s my conclusion: the Puppy campaigns of 2015 resulted in a maximum of 360 votes, and an effective block minimum of 150 votes. That min/max ratio of 150/360 (about 42%) is almost the same as last year’s (69 for Vox at the lowest against 182 for Correia at the highest, or 37.9%). That’s remarkable consistency. It doesn’t look like the Puppies stuck together any more tightly than last year, just that there were far more of them. Of course, we won’t know the full statistics until the full voting data is released in August.

I think a lot of casual observers are going to be surprised at that 360 number. That’s a big number, representing some 17% of the total Hugo nominators (360/2,122). Those 17% selected around 75% of the final ballot. That’s the imbalance in the process so many observers are currently discussing.
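For anyone who wants to check the percentages in the last two paragraphs, the arithmetic is just:

```python
# Min/max ratio this year vs. last year, plus the Puppy share of all nominators.
ratio_2015 = 150 / 360    # effective block / max Puppy vote
ratio_2014 = 69 / 182     # Vox's low vs. Correia's high in 2014
puppy_share = 360 / 2122  # max Puppy vote / total nominating ballots

print(f"{ratio_2015:.1%} {ratio_2014:.1%} {puppy_share:.1%}")  # 41.7% 37.9% 17.0%
```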

What do you think? Does that data analysis make sense? Are you seeing something I’m not seeing in the chart? Tomorrow I’ll do an analysis of how much the non-Puppy works missed the final ballot by.


28 responses to “How Many Puppy Votes: Breaking Down the Hugo Math”

  1. Paul Weimer says :

    I’ve seen it argued by the Sad Puppies that there is no block, just hundreds of real fans taking back SF…by mostly voting their slate, straight up.

    So, yeah.

    The category I was nominated in last year, Podcast, had a doubling of the minimum-nomination threshold (from the thirties up to 68). And so, alas, my podcast didn’t get enough nominations to make it on.

    • chaoshorizon says :

      You’ve got to be a little careful. The math can’t tell us if this is a straight block (150 people voting the exact same slate straight) or a “block result” (such as 300 people voting for 50% of the slate). While the end results of those two scenarios are the same, they arrive at the result differently. We don’t have enough data to differentiate between those possibilities at this point.
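      As a toy illustration of why the totals alone can’t separate these scenarios, here are a few hypothetical voter-count/compliance combinations (the specific numbers are made up) that all produce the same expected vote count for any given slate work:

```python
# (number of voters, average fraction of the slate each one marks)
scenarios = [
    (150, 1.00),   # 150 voters marking the whole slate
    (200, 0.75),   # 200 voters each marking about 75% of it
    (300, 0.50),   # 300 voters each marking about half of it
]

for voters, compliance in scenarios:
    expected_votes_per_slate_work = voters * compliance
    print(voters, compliance, expected_votes_per_slate_work)   # 150.0 every time
```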

  2. kastandlee says :

    As far as the number of voters participating, remember that the eligible electorate is everyone who was a member of the 2014 Worldcon as well as all of the members of the 2015 and 2016 Worldcons as of the end of January 2015. Inasmuch as Loncon 3 had something around 10,000 members, many of whom joined after the January 31 cutoff last year, I think we can safely say that the total eligible nominating electorate for 2015 is larger than it was for 2014.

    We saw something like this as well in 1993-1994. The 1993 Worldcon in San Francisco was one of the five largest ever held, followed by the 1994 Worldcon in Winnipeg, which was the smallest North American Worldcon in something like twenty years. However, the 1994 Worldcon had more Hugo nominations than 1993, thanks to all of the 1993 members being eligible to vote and participating. No internet to speak of back then, either; the 1993 Worldcon paid to send a mailing to all of its members in early 1993 with the 1994 Hugo Nominating ballot and a reminder that they were eligible to nominate even if they weren’t members of the 1994 Worldcon.

    Meanwhile, I forgot that I had a copy of that slide presentation as well, and have updated the 2015 Hugo Awards page with the entries (number of individual works/people nominated) and range (lowest and highest number of nominations between finalists).

    • chaoshorizon says :

      That’s a very fair point: the bump this year in nominators may be residual LonCon voters joining the process. If that is the case, it is an interesting coincidence that the number of new nominators is so close to the seeming increase in Sad Puppy voters.

  3. Tudor says :

    I think your analysis is correct. I’m not surprised about the 360 number, but I’m surprised that there were only around 1750 non-SP voters, considering that there were 3137 votes for Best Novel at Loncon. Do you think it’s possible that the difference comes mainly from the Wheel of Time voters who were absent this year?

    I’m looking forward to hearing your take on The Goblin Emperor semi-surprise, on the new proposed rule to nominate 4 and have 6 nominees on the final list, and on the battle between SP nominees, non-SP nominees, and No Award (83% non-SP voters is still a very big percentage considering the way the voting works in the final round).

    • kastandlee says :

      Tudor: 3,137 people voted for Best Novel in 2014, but only 1,595 nominated. There are always more final ballot voters than nominators. Nominating is much harder work than picking choices off a list; like essay questions versus multiple-choice tests.

    • chaoshorizon says :

      Tudor: The possible absence of Wheel of Time voters is an intriguing argument. That series had 160 nominations last year, and it may be those nominators sat out the 2015 process. I thought they might have stuck around to vote for Sanderson this year; it’ll be interesting to see where Words of Radiance is in the final tally.

      It seems like we should have seen two bumps in overall nominators this year: one being the influx of new Puppy voters, and one the influx of LonCon voters Kevin was mentioning. I’m not seeing a big enough bump to account for both, so maybe the missing WOT voters make up for that difference. A lot of this is just speculation: the raw stats don’t allow us to be that precise in determining results.

  4. Mark says :

    Makes sense to me. For what it’s worth, my one and only overlap with Sad Puppies (not counting movies in BDPLF) was in the Best Related Work category, which is anecdotal, but it also demonstrates that it’s possible that some people unknowingly nominated something from a puppy slate without actually being part of the movement. I’d be willing to bet that Jim Butcher’s Skin Game got more votes than most puppy nominees, for instance (not that puppies had no influence, just that Butcher probably has folks who nominate him for every Dresden book regardless, etc…)

  5. MadProfessah says :

    So, to be clear. I can pay $40 and vote on the current slate and also be eligible to nominate next year’s Hugo Awards?

  6. Andrew M says :

    I think the top figure in BE Long Form is almost certainly for Toni Weisskopf, who, as publisher of Baen, has a substantial following outside the puppy movement. I don’t know how to account for the figures in Best Novella, but I do wonder if there is an independent factor at work there too, as apart from those two the puppy votes never go above 300 (even in Best Related, where the two slates are in perfect agreement).

    • chaoshorizon says :

      I’d agree with your assumption on the BE Long Form being Weisskopf, and we certainly don’t want to attribute all her votes as being Puppy votes. We can’t know for sure, but I’m accounting for the drop-off as certain categories being less popular than others. Not every Puppy-influenced voter marked every category, and not every Puppy-influenced voter followed the recommended slate 100%. For instance, 1,827 people nominated in Best Novel, but only 1,174 in Short Story. That’s only 65% of the whole. 65% of 350 is 227; the actual Puppy high vote in Short Story was 230, so if the Puppies fell off by category in the same way everyone else did, there’s your number. 350 seems to work the best for the math, but an equally honest assessment would be to use the 300-400 range.
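      That drop-off argument boils down to a simple proportional check; here it is spelled out (the 350 is just my working figure for the max Puppy vote, as above):

```python
# If Puppy-influenced voters skipped Short Story at the same rate as everyone
# else, how many Short Story nominations would we expect them to cast?
best_novel_ballots = 1827
short_story_ballots = 1174
participation = short_story_ballots / best_novel_ballots   # ~0.64

max_puppy_estimate = 350
expected_short_story_votes = participation * max_puppy_estimate

print(round(expected_short_story_votes))   # ~225, against the observed high of 230
```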

      • Craig says :

        Also note that some puppy candidates missed the ballot, including in categories where 71 and 94 votes were the cutoff. Not sure if some of those ran into eligibility issues.

        My personal suspicion is that part of the lack of spread in short fiction is that some people may have joined based on interest in a few categories (notably novel), and in the other categories voted for recommended stuff they had read and liked, but had only read a relatively limited selection of the year as a whole. I know a bunch of people who read more than 20 new novels a year. I know no one who reads more than 10 novellas a year in the year of their release.

        Effect looks similar; intent is very different.

  7. keranih says :

    I am impressed with the work you’re doing here. Data is good, and while Twain’s observation on statistics holds, I really appreciate you running the numbers. I have a question, a critique, and a couple of suggestions for further investigation.

    If I’m understanding you correctly, *with the data available now*, it seems that “slate” voters did not hold to a hard line and vote for all the works suggested. (ie, there was considerable variation in the number of votes each nominated work received, rather than all slate voters voting the same.) Is this an accurate summation of your findings? If not, could you expand on this?

    Regarding your calculations on “how many to make the cut off vs last year” (for the ‘spread’) – should not the ballot counts be normalized to account for the 10% difference in ballot totals?

    Suggestions:

    One of the recurring concerns with the numbers is that the “slate” wins represent an outsized effect on the final nominations. I would like to see if that ‘30% controlled the vote’ could be mapped onto previous years, particularly 4-5 years ago, when the number of nominating ballots was half what it was this year.

    Additionally – I did some back-of-the-envelope scratching myself, comparing the number of ballots and number of entries for each category (for this year), and found the ballot/entry ratios all over the place, and not well correlated with how well the slates did. Which seemed to me to indicate that we can’t see that the number of entries was depressed (as one would expect from ‘straight’ slate ballots). Comparing to previous years would be helpful, I think.

    Again, thanks for your efforts in the cause of data!

    • chaoshorizon says :

      Thanks for the questions and critiques. Always happy to answer:

      I don’t have a great response for the first question. Based on my analysis of the data, I think 300-400 voters were involved in some way with the Puppy slates. I think some of those stuck to the Rabid or Sad slate very tightly, and I think others used it much more loosely. I couldn’t estimate the percentages in any reliable way. Maybe 100 people voted the slate down the line, 100 people used 50%, and then another 100 people used it more sparingly? Or 50 straight, 200 at 50%, 100 sparingly? The math can work out with several different assumptions. Once we see the full data and get a sense of how far some of the Puppy choices missed the final ballot, we’ll have a better idea.

      I didn’t normalize because I’m thinking the 10% increase is mostly Puppy voters. Some of the other commenters, like Kevin above, have contested that assumption. As such, I think it’d be perfectly credible and logical to normalize. I don’t think a normalization would change the estimates that significantly. All of this gets tricky because last year’s numbers are also inflated. If you go back three or four years, there’s a huge drop-off in nominations. Attempts to model what this year would have looked like without the Puppies are very difficult.

      I agree that it’d be interesting to see what a 30% slate vote would do to previous years’ numbers. That would answer a question I have, as to whether or not things are becoming less centralized. I have a feeling there were fewer books and stories to choose from 20 years ago, and thus a slate wouldn’t have had the same impact.

      I hadn’t thought to do a number of ballots / number of entries ratio. That’d be fascinating, as it would be a good measure of how spread out each category is. If you have the data, drop it into the comments!

      • keranih says :

        Okay, I ran some numbers.

        I could only do entries vs ballots for 2013-2015 based off the PDFs at the Hugo site (evidently the data they collect depends on what the rules are).

        Year Average Standard deviation

        2013 0.327 0.153
        2014 0.340 0.125
        2015 0.420 0.167

        So this is the number of unique entries per category, divided by the number of valid ballots for that category. A higher number shows more unique entries per ballot; a lower number means more repeats.

        I averaged the ratios for the categories for each year and also got the standard deviations.

        So there is a difference from two years ago, but much less of a difference from last year. And the standard deviations are problematic, in that they indicate a lot of overlap and not much real difference.

        In order for me to be able to tell if there was a ‘real’ difference between the ratios/averages of different years, I would need to do a paired t-test. Which is not high school math.

        When I looked at individual categories and compared them to different years, they were all over the place. Some were up, some down. I couldn’t even get the two ‘full Puppy’ categories (novella and related) and the two ‘non Puppy’ categories (fan artist and graphic) to shift together.

        Also three years is not long enough to establish a pattern.

        For the other three ratios I could use 4 years of data, and the trends were even harder to pin down. And I think the numbers need to be run again, because I took the high/low ranges as if any declined works didn’t exist, which affected the ranges and the low vote-to-ballot ratio.

        Also, I want a csv file. :) Data entry makes a gal cranky.

        Here are the results of the calculations (I have no idea what hitting submit is going to do to the formatting).

        CAT 2015 2014 2013 Average SD
        novel 0.3213 0.4063 0.4268 0.3848 0.05593
        novella 0.1856 0.2432 0.2300 0.2196 0.03018
        novelette 0.3046 0.3984 0.4091 0.3707 0.0575
        short 0.6201 0.6682 0.8580 0.7154 0.1258
        related 0.3009 0.3471 0.3904 0.3461 0.0448
        graphic 0.4140 0.3116 0.5457 0.4238 0.1173
        bdpl 0.1471 0.1759 0.1766 0.1665 0.0168
        bpds 0.5011 0.3934 0.5963 0.4969 0.1015
        ed l 0.1742 0.2317 0.2738 0.2265 0.0500
        ed s 0.2149 0.1725 0.2917 0.2264 0.0604
        pro art 0.3985 0.4359 0.4875 0.4406 0.0447
        semi 0.1515 0.1874 0.2079 0.1823 0.0285
        fanzine 0.2813 0.3118 0.3838 0.3256 0.0527
        fancast 0.2665 0.3485 0.3815 0.3322 0.0592
        fwrite 0.3411 0.4031 0.5031 0.4157 0.0818
        fart 0.6689 0.4652 0.5495 0.5612 0.1024
        jwc 0.2585 0.2791 0.4286 0.3220 0.0928

        ave 0.326459434 0.339935363 0.420007904
        sd 0.153444519 0.125515661 0.167524779
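        For anyone who wants to reproduce the ave/sd rows at the bottom, here’s a quick Python sketch of the calculation (per-category ratios copied from the table above, column order following its header; the sd row matches the sample standard deviation):

```python
# Per-year average and sample standard deviation of the entries-to-ballots
# ratios across the 17 categories in the table above.
from statistics import mean, stdev

# ratios[category] = (2015, 2014, 2013), per the table header
ratios = {
    "novel":     (0.3213, 0.4063, 0.4268),
    "novella":   (0.1856, 0.2432, 0.2300),
    "novelette": (0.3046, 0.3984, 0.4091),
    "short":     (0.6201, 0.6682, 0.8580),
    "related":   (0.3009, 0.3471, 0.3904),
    "graphic":   (0.4140, 0.3116, 0.5457),
    "bdpl":      (0.1471, 0.1759, 0.1766),
    "bpds":      (0.5011, 0.3934, 0.5963),
    "ed l":      (0.1742, 0.2317, 0.2738),
    "ed s":      (0.2149, 0.1725, 0.2917),
    "pro art":   (0.3985, 0.4359, 0.4875),
    "semi":      (0.1515, 0.1874, 0.2079),
    "fanzine":   (0.2813, 0.3118, 0.3838),
    "fancast":   (0.2665, 0.3485, 0.3815),
    "fwrite":    (0.3411, 0.4031, 0.5031),
    "fart":      (0.6689, 0.4652, 0.5495),
    "jwc":       (0.2585, 0.2791, 0.4286),
}

for label, column in zip(("2015", "2014", "2013"), zip(*ratios.values())):
    print(label, round(mean(column), 3), round(stdev(column), 3))
```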

      • chaoshorizon says :

        Fascinating. Those really are all over the place, aren’t they? I expected the Novella to be low, but the Short Story ratio is even bigger than I would have guessed. Who knew there was such little agreement in the Best Fan Artist category?

        I suspect last year was something of an outlier in terms of variety because it was in London, which brought in more British voters, who in turn voted for more British texts. I haven’t double-checked, but I wonder how many British authors/artists/editors made the list this year.

        And don’t get me started about data entry! Anything I want to use on Chaos Horizon I have to put into files myself. At least I find the data entry relaxing . . .

  8. ULTRAGOTHA says :

    I agree with Kevin that some of the increase in nominating ballots comes from the influx of LonCon3 members. Somewhere around 1000 people bought memberships last year right after Tor announced they would be including the entire Wheel of Time in the packet. And even more than that bought memberships between January 31 and the con.

    You say: “The Puppies swept Best Novella, Best Novelette, Best Short Story, Best Related Work, Best Editor Long Form, and Best Editor Short Form. This means all the votes shown in these categories are Puppy votes.”

    If I’m understanding you correctly, this cannot be correct. I nominated in all of those categories and none of my picks made it to the final ballot. So that’s at least one vote in each of those categories that was not a Puppy vote.

    What it does mean is that *most* of the votes for the *top five finalists* were for entries on the Puppy slates. Some of those entries would have picked up votes anyway from people who absolutely are not Puppies.

    If I’m misunderstanding you, I apologize. We will have a lot more data after Sasquan releases the detailed numbers.

    • chaoshorizon says :

      Sorry for being unclear. Since we only know the vote totals for spots 1-5, that’s what I meant by “shown” votes. Of course other voters voted for things that weren’t in the top 5. I consider those “unshown” votes, and we won’t see what they are until they release the detailed numbers. Since every candidate who made the final ballot in the swept categories was a Puppy candidate, though, we can infer that the majority of their votes came from a Puppy influence.

      I think you’re right to note that some of those votes for the top 1-5 spots might not have been Puppy votes, but if we compare the number of votes those authors received in pre-Puppy years (Toni Weisskopf garnered only 18 votes for Best Editor in 2012, for instance) to this year, we can infer (and it’s only an inference) that most (85%? 90%? 95%?) are Puppy-influenced votes. That’s why I talk about an “effective” block. We’ll never know, of course, whether someone who voted for Weisskopf was doing so because they were voting the straight Puppy slate, were influenced by the Puppy slate, or never saw the Puppy slate at all. The same applies to an author like Jim Butcher: no matter how popular the Dresden Files books were, they’d never been seriously considered for the Hugo. If you believe that Butcher managed to jump from being outside of the Top 15 in previous years to being in the Top 5 by some means other than Puppy influence, I’d love to hear your theory.

  9. Loyd Jenkins says :

    I know this is an old thread now, but reading it made me wonder: How do you know that the ‘bloc’ vote was a high percentage Puppy vote? What if there were 450 Puppy voters who only agreed 33% of the time?

    • chaoshorizon says :

      It was a long time ago—we don’t know the percentage, of course. I get into that more in the next few posts on this issue, but mathematically you can’t distinguish between 300 people voting en masse 50% of the time and 450 people voting for 33% of the slate, etc. In future math breakdown posts, I take to calling it the “effective” block vote.

