A “Policy Disconnect” on Centralized Election Administration?

I haven’t been reading the papers in a few months, so am short on material.

I still read Rick Hasen’s Election Law Blog, however, which is always chock full of interesting policy and causal questions. In the wake of the 2012 election, and the president’s apparently off-handed comment about long lines at polling places, there are some very faint rumblings about nationalizing election administration in the U.S. For example, Hasen himself argued in the NYT’s “Room for Debate” forum that elections should be nationalized. In response, Doug Chapin gave some reasons why not, citing both the (apparently normative) virtues of federalism and the perceived gridlock and incompetence of the feds right now.

Slightly more interesting is why reforming election administration is or isn’t “a thing,” in Chapin’s language. That is, why is it so remote from the policy agenda? It’s easy to say that it’s a non-starter and that state and local governments really don’t want to centralize, but that seems like begging the question. Indeed, it’s even more mysterious when (to crib another item from Hasen’s blog) a poll finds 88% of Americans supporting a uniform system!

Perhaps it’s not hard to dig up other items where the public seems to speak with such a loud voice, in contrast to what policy is or what elites think it should be. But usually we laud correlations between mass opinion and public policy as a good thing for democracy, and decry low or negative correlations as a bad thing. So it would seem inconsistent to just write this off as an anomaly.

Effect of Medicaid on Health (part ii)

We compared three states that substantially expanded adult Medicaid eligibility since 2000 (New York, Maine, and Arizona) with neighboring states without expansions. The sample consisted of adults between the ages of 20 and 64 years who were observed 5 years before and after the expansions, from 1997 through 2007. The primary outcome was all-cause county-level mortality among 68,012 year- and county-specific observations in the Compressed Mortality File of the Centers for Disease Control and Prevention. Secondary outcomes were rates of insurance coverage, delayed care because of costs, and self-reported health among 169,124 persons in the Current Population Survey and 192,148 persons in the Behavioral Risk Factor Surveillance System.

This comes from the methods description of a recent article in the New England Journal of Medicine. The article was mentioned by the New York Times on July 26. I like the plots in Figure 1.
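
The comparison described here, expansion states versus their non-expanding neighbors before and after the policy change, amounts to a difference-in-differences design. Here is a minimal sketch of that logic in Python, with made-up county-year mortality numbers rather than the study’s data or its actual estimation:

    # A minimal difference-in-differences sketch with invented numbers
    # (hypothetical county-year mortality rates, NOT the study's data).
    import numpy as np

    rng = np.random.default_rng(0)

    def county_year_rates(n, mean):
        """Simulate n county-year mortality rates (deaths per 100,000)."""
        return rng.normal(loc=mean, scale=20, size=n)

    # Expansion states, five years before and after expanded eligibility
    exp_before = county_year_rates(500, mean=320)
    exp_after  = county_year_rates(500, mean=305)   # hypothetical post-expansion drop

    # Neighboring control states over the same years (secular trend only)
    ctl_before = county_year_rates(500, mean=330)
    ctl_after  = county_year_rates(500, mean=325)

    # Change in expansion states minus change in controls
    did = (exp_after.mean() - exp_before.mean()) - (ctl_after.mean() - ctl_before.mean())
    print(f"difference-in-differences estimate: {did:.1f} deaths per 100,000")

The subtraction of the control states’ change is what (hopefully) nets out whatever would have happened to mortality anyway.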

The Times provides some context as to why it is seen as “controversial” whether Medicaid expansions positively impact health outcomes:

Medicaid expansions are controversial, not just because they cost states money, but also because some critics, primarily conservatives, contend the program does not improve the health of recipients and may even be associated with worse health. Attempts to research that issue have encountered the vexing problem of how to compare people who sign up for Medicaid with those who are eligible but remain uninsured. People who choose to enroll may be sicker, or they may be healthier and simply be more motivated to see doctors.

See also earlier post.

Disclosure as incumbency protection?

Via Election Law Blog, this New York Times op-ed by two ex-senators gives an unconventional justification for disclosure: to help incumbents ward off anonymous attacks:

Without the transparency offered by the Disclose Act of 2012, we fear long-term consequences that will hurt our democracy profoundly. We’re already seeing too many of our former colleagues leaving public office because the partisanship has become stifling and toxic. If campaigning for office continues to be so heavily affected by anonymous out-of-district influences running negative advertising, we fear even more incumbents will decline to run and many of our most capable potential leaders will shy away from elective office.

I suppose it makes sense to argue this if you are speaking to senators’ self-interest, but it might not look great to voters.

Which way out of the recession?

This New York Times article from today explains how states can respond to the recession either by raising taxes or by cutting spending. It picks Maryland and Kansas as exemplars of the two strategies and asserts that there is great confusion about how well each works. Kansas Governor Sam Brownback apparently did some of his own data analysis:

Gov. Sam Brownback of Kansas, who sought the Republican nomination for president four years ago, said he was persuaded that his state needed to cut its income taxes and taxes on small businesses significantly when he studied data from the Internal Revenue Service that showed that Kansas was losing residents to states with lower taxes.

Another interesting quote:

The effects of state taxes are hotly debated. This spring, when the George W. Bush Institute held a conference in New York on how to promote economic growth, panelist after panelist asserted that cutting state taxes would jolt the economy; Governor Brownback told the conference that his small-business tax cuts would be “like shooting adrenaline into the heart of growing the economy.”

But the Institute on Taxation and Economic Policy, a nonprofit research organization in Washington associated with Citizens for Tax Justice, which advocates a more progressive tax code, issued a report this year that found that the states with high income tax rates had outperformed those with no income tax over the past decade when it came to economic growth per capita and median family income.

The choices made by Kansas and Maryland could provide something of a real-time test of the prevailing political theories of taxing and spending — though it could be years before the results are in.

Fighting Fire with Experiments

Interesting New York Times article. Great metaphor for science.

Prediction or Explanation?

A couple weeks ago political scientist Jacqueline Stevens attacked her own discipline on the opinion pages of the New York Times. Her critique, as portrayed in the piece’s title, was that political scientists are “lousy forecasters.”

Many political scientists have responded to Stevens’ piece, from different angles and levels of rage, but one theme I’ve noticed is people disputing the assumption that political science’s goal is to make forecasts. For example, this recent letter to the editor of the Times by a professor emeritus of political science at the University of Iowa:

Forecasting is a very specialized field in political science, limited to predicting election outcomes, and the record in that field is impressive. But that is not what most political scientists do.

Political science, like most social sciences, seeks explanation rather than prediction. Its aim is to explain puzzling phenomena by relating them to phenomena that are well understood. Much of science consists of trying to resolve puzzles in that way.

I actually think prediction is what the majority of political scientists do. The distinction is that Stevens focuses on a couple of extraordinary historical cases, whereas most political science predictions involve more specific data sets. For example, theory might say that when we apply experimental treatment X, Y will increase or decrease. Or, someone might claim that Republicans give more to charity than Democrats, which is a prediction about what a data set of party identification and giving behavior would reveal.

It’s hard for me, in contrast, to think of a political science example of explanation that is divorced from prediction. I suppose the idea is to find a historical case–let’s take Stevens’ case of the end of the Cold War–and try to predict it retrospectively. But then why wouldn’t the prediction there apply to any future situations–why wouldn’t it also be a forecast? If the idea is that every historical case has to be considered on its own and that as a result their explanations tell us nothing about the future, I don’t know if that is science at all.
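
To make the “prediction about a data set” point concrete, here is a toy check in Python. Everything below is invented for illustration; the giving figures and sample sizes are hypothetical, not real survey data:

    # A toy version of a "prediction about a data set": the claim that one
    # party's identifiers give more to charity than the other's.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    giving_rep = rng.gamma(shape=2.0, scale=600.0, size=400)  # hypothetical annual giving ($)
    giving_dem = rng.gamma(shape=2.0, scale=500.0, size=400)

    # The prediction is directional: mean giving is higher among Republicans.
    t_stat, p_val = stats.ttest_ind(giving_rep, giving_dem, equal_var=False)
    print(f"mean (R) = {giving_rep.mean():.0f}, mean (D) = {giving_dem.mean():.0f}")
    print(f"Welch t = {t_stat:.2f}, two-sided p = {p_val:.4f}")

The point is only that the claim cashes out as a testable statement about what a comparison of the two groups in some data set would show.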

Physics Envy

With a caveat that this is all filtered through an NPR show, I found this interview on the Higgs boson “quasi-discovery” relevant to current discussions of what’s supposedly wrong with social science.

I transcribed parts of the interview, which took place between WAMU host Celeste Headlee and Scientific American associate editor John Matson.

Headlee: Scientists needed to find out, whether or not the standard theory held water: why does matter hold mass? And they may have gotten their proof. I say may because we don’t know exactly what they’ve found yet, other than that it’s a new subatomic particle. Remember last year scientists at CERN claimed they had discovered particles that were faster than the speed of light. You remember that? Then they had to retract that claim. So the burden of proof when you discover something new…the standard is pretty high. The folks at today’s announcement needed to be certain if they had something…

Tell me about their certainty here. What standard did they use?

Matson: Sure. So the standard, uh sort of physics measure is sigma, or standard deviations. So they say if they have 3 sigma evidence that’s good for evidence of a new factor a new particle. 5 sigma is a discovery. And 5 sigma relates to a 1 in 3 and a 1/2 million chance that it’s just a statistical fluke, you know you’re just seeing some noise that looks like something real. And in this case they made it to that, they made it to 5 sigma, so this is certainly a very strong effect that they’re seeing.

Headlee: [...] So it’s like a 1 in 3.5 million chance it’s _not_ a new particle, right?

Matson: Well assuming that they’ve gotten everything correct, and that is where the faster than light particle finding comes in. That was a very high sigma effect, but wrong in other ways. So there’s always a chance there could be something funny going on, but in this case with this particle that looks like it could be the Higgs, that’s probably not going to happen, some mundane explanation, because there are two different experiments that are seeing what looks like the same thing, it’s similar to what’s been predicted for decades, so everything sort of rings true here, whereas with the faster than light neutrino thing last year, that sort of came out of nowhere, there was really no other experiment that supported that, and it went against decades and decades of scientific findings that said this shouldn’t be possible.

Here’s what caught my attention in particular.

  • Arbitrary standards of significance. In social science we have p-values, and apparently in physics they have sigmas, but they both contain the same information: how likely would a result at least this extreme be if it were just random chance? (See the sketch after this list.) Matson states that 5 sigma is the standard for a “discovery” in physics, and he says that the researchers just barely made it to 5. But in fact, a researcher is quoted in the piece as saying (apparently at a press conference), “We conclude by saying that we have observed a new boson with a mass of 125.3, plus or minus .6 GeV, at 4.9 standard deviations,” which is not quite going the whole way.
  • Significance is only meaningful when combined with assumptions. A precisely estimated point estimate is meaningless if it is biased. As Matson is quick to point out, the significance comes from a null hypothesis that is based on model assumptions: it’s a 1 in 3.5 million chance they are wrong, “assuming they’ve gotten everything correct.”
  • Fragility of results. I like how the story keeps circling back to the “faster than light neutrino” finding and retraction from last year. It’s instructive how they use it as a baseline: we had a huge sigma there, but it was later retracted, so how do we avoid being fooled again? The two pieces of evidence they give are (1) replication–this finding comes from not one but two experiments, and (2) theory–the previous finding contradicted decades of other findings, but this one is consistent with decades of theory. While it makes sense to believe things that are consistent with other pieces of evidence–whether another experiment, other findings, or other theory–you can’t help but worry that this type of thinking causes us to reject useful information as well.
  • Theory testing. The researchers are motivated by the desire to test (implications of) theories. Some say that social science shouldn’t bother with this, but instead focus on “thinking deeply about what prompts human beings to behave the way they do.” Substitute “particles” for “human beings”, and this sort of advice would have prevented such an announcement. I also see an affinity with the CERN director quoted in the story as saying “To know that our maths, our equations, all our Greek symbols tell us some deep truth about everything and what everything is made of, and to have that verified with the discovery of the Higgs – that is one of the great, great moments in science.”
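
For reference, the sigma-to-“1 in N” translation Matson is using is just the one-tailed tail probability of a standard normal. A quick check (my sketch, not anything from the interview):

    # Translating sigmas into the "1 in N chance it's a fluke" language
    # used in the interview: one-tailed standard-normal tail probability.
    from scipy.stats import norm

    for sigma in (3.0, 4.9, 5.0):
        p = norm.sf(sigma)  # P(Z > sigma) under the null of "just noise"
        print(f"{sigma} sigma -> p = {p:.2e}  (about 1 in {1 / p:,.0f})")

Five sigma works out to roughly 1 in 3.5 million, matching Matson’s figure; the announced 4.9 sigma is closer to 1 in 2 million.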

Student loan debt “hype”?

Is student loan debt just hype? Two economists say yes (via Ideas Market):

As the Fed study showed, 43% of student borrowers have less than $10,000 in debt, and 72% have less than $25,000. And the College Board shows that, in 2009-10, 56% of those graduating with a B.A. from a public college took out a loan. The average debt of these borrowers, after adjusting for inflation, was $22,000.

Is $22,000 too much debt? Paid off over ten years, monthly payments would be $217 at an interest rate of 3.4% (the current subsidized rate) or $253 (if the rate goes up to 6.8% in July, as scheduled).

Under the graduated payment plan, the initial payment would be $140. Another policy option would be to allow students to pay this debt over 20 years instead of 10, thereby cutting the monthly bill to just $126 ($168 if rates go up).

By way of comparison, Fed data show that the average new car loan is $27,000. This corresponds to a minimum monthly payment of $500 (assuming an excellent credit score, which few students have).
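
The monthly-payment figures in the quote follow from the standard fixed-rate amortization formula. A quick check, assuming the quoted $22,000 principal and a level 10- or 20-year schedule (the $140 graduated-plan figure follows a different schedule and isn’t reproduced here):

    # Checking the quoted monthly payments with the standard fixed-rate
    # amortization formula: payment = P * r / (1 - (1 + r)^-n).
    def monthly_payment(principal, annual_rate, years):
        r = annual_rate / 12      # monthly interest rate
        n = years * 12            # number of monthly payments
        return principal * r / (1 - (1 + r) ** -n)

    for rate, years in [(0.034, 10), (0.068, 10), (0.034, 20), (0.068, 20)]:
        pay = monthly_payment(22_000, rate, years)
        print(f"${22_000:,} at {rate:.1%} over {years} years: ${pay:.0f}/month")

This reproduces the quoted $217, $253, $126, and $168 figures.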

Meanwhile the NYT has a post about student loan horror stories–but from those who borrowed from private organizations.

Randomly assigning health insurance in Oregon

Count me among those who didn’t realize there was a debate about this, but apparently people have been arguing about whether Medicaid does its recipients any good. As described in a recent New York Times article, the state of Oregon held a lottery in which it randomly assigned Medicaid coverage to 10,000 of the 90,000 people who applied. See also this post at the Monkey Cage.

I like this description of the design from the New York Times:

By assigning coverage randomly, Oregon gave researchers more confidence that they had teased out the true effects of insurance, and had not been fooled by other differences between the insured and the uninsured.

The Times apparently decided the study was not good enough to stand on its own, and decided to interview “17 insured and uninsured participants.” At least we know the treatment was random there, though!
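
To see what the random assignment buys, here is a toy simulation in Python. All of the numbers (the health index, the enrollment rule, the size of the effect) are invented; only the 10,000-of-90,000 lottery is taken from the article:

    # A toy simulation (all numbers invented) of why the lottery matters: if
    # sicker people are more likely to sign up on their own, a naive comparison
    # of insured vs. uninsured is biased, while the lottery recovers the effect.
    import numpy as np

    rng = np.random.default_rng(2)
    n_applicants = 90_000
    true_effect = 2.0                                  # hypothetical gain in a health index

    health = rng.normal(50.0, 10.0, n_applicants)      # latent health, higher is healthier

    # Self-selection: the less healthy are more likely to enroll
    p_enroll = np.clip(0.6 - 0.01 * (health - 50.0), 0.0, 1.0)
    enrolled = rng.random(n_applicants) < p_enroll
    outcome = health + true_effect * enrolled
    print("naive insured-vs-uninsured gap:",
          round(outcome[enrolled].mean() - outcome[~enrolled].mean(), 2))

    # Lottery: 10,000 of the 90,000 applicants chosen at random
    winners = np.zeros(n_applicants, dtype=bool)
    winners[rng.choice(n_applicants, 10_000, replace=False)] = True
    outcome_l = health + true_effect * winners
    print("lottery winner-vs-loser gap:",
          round(outcome_l[winners].mean() - outcome_l[~winners].mean(), 2))

In this setup the naive insured-versus-uninsured comparison actually makes coverage look harmful, which is exactly the “fooled by other differences” worry in the quote above, while the lottery comparison recovers the true effect.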

Misc