November 15, 2005

Revisionist History

Who would have thought it, but Glenn Reynolds actually makes a good point here: Americans wouldn't have been quite so ready to lock, load, and march to Baghdad right after September 11th if the Clinton administration hadn't been busy hyping the Saddam boogeyman all through the '90s. Perhaps, although that hardly gets the Bush administration off the hook for anything. Meanwhile, Justin Logan points out that most of the United States' problems in the Middle East started with the misguided war against Iraq in 1991. Back then, as Justin notes, many Democrats opposed a "needless foreign intervention," something they apparently couldn't bring themselves to do in 2002. Nor can they bring themselves to say it now.

The debate over whether the Bush administration distorted the intelligence on Iraq's WMDs, which it certainly did, mostly misses the larger point, I think. Even if Saddam Hussein had had "reconstituted nuclear weapons," as Dick Cheney claimed, it was not worth going to war. A nuclear-armed Soviet Union couldn't control the oil supply in the Middle East, or do much of anything else, at the height of the Cold War, and a nuclear-armed Iraq couldn't have either. Saddam Hussein may have had connections with terrorists before 2002, but nothing worth invading over; Saddam was never so stupid as to risk annihilation by launching unprovoked attacks against the United States.

At best, smart sanctions by the UN would have kept Saddam boxed in; at worst, deterrence and coercive diplomacy could have been deployed if absolutely necessary. It's hard work, all of that, but that's why we pay our world leaders the big bucks. The current criticism of Bush by Democrats—that he lied about the WMDs—seems to imply that if only he hadn't been lying, the war would have been fine. Wrong, wrong, wrong.

In one sense, sure, deposing Saddam's regime and democratizing Iraq, done properly, would have made the Middle East a "nicer place." But what of it? The question of why that was ever our responsibility deserves a closer look. It certainly goes back to the 1991 Gulf War, which, as Justin Logan says, "set a precedent that the United States would be the guarantor of global security, [so] that other regions didn't need to concern themselves with mundane and archaic problems like the balance of power." How did we get to that point? And why, apparently, are both political parties in Washington still stuck on that notion?

The thinking that led to the first Gulf War, it appears, was partly the result of contorted psychology. After the Soviet Union fell, minor threats that once seemed perfectly manageable suddenly seemed intolerable. Goals that once appeared ridiculous and out of reach—like bringing democracy at gunpoint, or using military force to secure an entire region halfway around the world—now appeared entirely doable. Both thoughts, however understandable, are still just illusions. Threats that were manageable once upon a time are still manageable. Goals that were ridiculous once upon a time are still ridiculous.

One could also cite prospect theory—the idea that people will take greater risks to avoid losses than they will to make gains—to explain U.S. behavior. After 1990, the United States was at the peak of its global power, and in that situation, the theory suggests that American presidents will, more often than not, take great risks to avoid losing that status—intervening in the Middle East, say—whereas they wouldn't have taken such risks to gain that status. I think this would go for both Republican and Democratic presidents. The September 11 attacks, I think, heightened this sense. Even al-Qaeda, an organization that posed a relatively small risk to American power, was suddenly worth taking very large geopolitical risks to stop. But needless to say, this is a bit irrational.
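Just to make the asymmetry concrete, here's a minimal sketch of the standard Kahneman-Tversky value function, with their usual illustrative parameters. It has nothing to do with foreign policy per se; it just shows why a loss looms larger than an equal gain:

```python
# A minimal sketch of the prospect-theory asymmetry, using the standard
# Kahneman-Tversky value function with their illustrative parameters
# (alpha = beta = 0.88, loss aversion lambda = 2.25). Toy numbers, not
# a model of geopolitics.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x (x < 0 means a loss)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

print(value(100))   # ~57.5: the pull of gaining 100
print(value(-100))  # ~-129.5: losing 100 stings more than twice as much
```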

Ultimately, there are serious structural forces preventing the United States from ever reverting to a more modest foreign policy. The 1991 Gulf War was probably a mistake, but something like it was probably inevitable. Historically, states' "interests" tend to grow and expand as their power does, as Robert Jervis once pointed out. The historian John S. Galbraith once noted that the British kept getting drawn deeper into Asia and Africa by the "turbulent frontier": once you occupy a certain region, suddenly the unpacified borders of that area become your concern, and so on. Perhaps this is an unavoidable state of affairs for a world hegemon like the United States. Already we have strategists like Thomas Barnett arguing that the proper course for the Pentagon is to go all the way, to pacify all of those borders—with the siren song of globalization—until there is no "turbulent frontier" left. It's hard to say whether doing so is truly in the national interest or just seems like it, for all of the reasons listed above.
-- Brad Plumer 2:38 PM || ||
Not Godless Heathens After All

It's hardly the main point of his piece, but Paul Bloom argues that Europe isn't really more un-religious than the United States, it just looks that way:
[T]he religious divide between Americans and Europeans may be smaller than we think. The sociologists Rodney Stark, of Baylor University, and Roger Finke, of Pennsylvania State University, write that the big difference has to do with church attendance, which really is much lower in Europe. (Building on the work of the Chicago-based sociologist and priest Andrew Greeley, they argue that this is because the United States has a rigorously free religious market, in which churches actively vie for parishioners and constantly improve their product, whereas European churches are often under state control and, like many government monopolies, have become inefficient.)

Most polls from European countries show that a majority of their people are believers. Consider Iceland. To judge by rates of churchgoing, Iceland is the most secular country on earth, with a pathetic two percent weekly attendance. But four out of five Icelanders say that they pray, and the same proportion believe in life after death.
The point about a "free religious market" in the United States that enables churches here to "improve their product" makes sense. A while back, the New York Times Magazine had a great piece on how evangelical "megachurches" have managed to expand their congregations at a time when church attendance in the United States has either plateaued or declined. It's all in the marketing and innovation:
It's hard to imagine a more effective method of religious outreach, which is, after all, the goal of evangelical churches like Radiant. As McFarland told me: ''I'm just trying to get people in the door.'' To that end, Radiant has designed its new 55,000-square-foot church to look more like an overgrown ski lodge than a place of worship. ''For people who haven't been to church, or went once and got burned, the anxiety level is really high,'' McFarland says. '' 'Is it going to be freaky? Is it going to be like what I see on Christian TV?' So we've tried to bring down those visual cues that scare people off.''

In fact, everything about Radiant has been designed to lure people away from other potential weekend destinations. The foyer includes five 50-inch plasma-screen televisions, a bookstore and a cafe with a Starbucks-trained staff making espresso drinks. (For those who are in a rush, there's a drive-through latte stand outside the main building.) Krispy Kreme doughnuts are served at every service. (Radiant's annual Krispy Kreme budget is $16,000). For kids there are Xboxes (10 for fifth and sixth graders alone). ''That's what they're into,'' McFarland says. ''You can either fight it or say they're a tool for God.'' The dress code is lax: most worshipers wear jeans, sweats or shorts, depending on the season. (''At my old church, we thought we were casual because we wore mock turtlenecks under our blazers,'' Radiant's youth pastor told me.) Even the baptism pool is seductive: Radiant keeps the water at 101 degrees. ''We've had people say, 'No, leave me under,' '' McFarland says. ''It's like taking a dip in a spa.'' …

The spiritual sell is also a soft one. There are no crosses, no images of Jesus or any other form of religious iconography. Bibles are optional (all biblical quotations are flashed on huge video screens above the stage). Almost half of each service is given over to live Christian rock with simple, repetitive lyrics in which Jesus is treated like a high-school crush: ''Jesus, you are my best friend, and you will always be. Nothing will ever change that.'' Committing your life to Christ is as easy as checking a box on the communication cards that can be found on the back of every chair. (Last year, 1,055 people did so.)
That's good thinking, and not the sort of thing you're likely to see in a state-backed church, which makes one wonder why so many religious folks want to kick down the wall between church and state in the first place. (In fact, back in the day, when the Supreme Court first banned prayer in public schools, many religious leaders supported the court's decision for just this reason—to avoid state stultification of religion.) This also recalls the secular case for religiously-based social services, which seem to do more to break the grasp of theology than anything else. Although, as Paul Bloom notes, none of this probably makes people any less religious personally—belief itself seems to be an accidental evolutionary byproduct of our cognitive tendencies to a) distinguish between material objects and abstract ideas, and b) see patterns—some might say "intelligent designs"—in random events. At least that's the theory.
-- Brad Plumer 12:19 PM || ||
Avian Flu Strikes Again

It's true, there haven't been many posts around these parts of late. My excuse: I've been bed-ridden with the avian flu for the past few days. Okay, maybe not, but whatever it was, it involved vomiting up blood and mumbling deliriously in my sleep. Not fun either way. Luckily I'm back on my hourly diet of black coffee and Marlboro Reds, so I'll start posting again as soon as I catch up with work.
-- Brad Plumer 11:39 AM || ||

November 10, 2005

The Anti-Growth Abyss

The "pro-growth progressive" discussion continues over at TPMCafe. One implicit question is posed here: "Is economic growth always good?" Just for the hell of it, let's revisit some long-discarded left-wing views on this. In Foreign Affairs recently, Joseph Stiglitz reviewed Benjamin Friedman's The Moral Consequences of Economic Growth and laid out some of basic issues:
In short, the debate should not be centered on whether one is in favor of growth or against it. The question should be, are there policies that can promote what might be called moral growth -- growth that is sustainable, that increases living standards not just today but for future generations as well, and that leads to a more tolerant, open society?
Right, that's the line we're hearing from the left-of-center: "Growth is of course good, it should just be fair growth—focusing on median wage growth, say, rather than the cult of GDP." And sustainable growth is better than unsustainable growth, presumably. Along these lines, we want growth that maximizes utility—over fifty years, 4.5 percent growth that benefits all is better than 5 percent growth which leaves a fifth of the population behind. Sperling's book seems to be about laying out a bunch of hyper-technocratic policies that try to find this vaunted middle ground. Maybe he gets the details wrong (I think he does), but that's at least his aim.
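Since compounding hides the arithmetic, here's a quick back-of-the-envelope sketch of the example above, with invented stylized numbers. Note that the skewed economy actually ends up richer on average, so the case for the 4.5 percent path has to rest on the diminishing utility of income, not on aggregate output:

```python
# Back-of-the-envelope on the trade-off above: an economy growing 4.5% a
# year with gains shared by all, versus 5% growth that leaves the bottom
# fifth entirely behind. All numbers are invented and stylized.

YEARS = 50

shared = 1.045 ** YEARS        # everyone's income multiplier, ~9.0x
top = 1.05 ** YEARS            # the lucky 80 percent's multiplier, ~11.5x
bottom = 1.0                   # the left-behind fifth: flat

skewed_avg = 0.8 * top + 0.2 * bottom  # population-average multiplier

print(f"shared growth, everyone:    {shared:.1f}x")      # ~9.0x
print(f"skewed growth, average:     {skewed_avg:.1f}x")  # ~9.4x
print(f"skewed growth, bottom 20%:  {bottom:.1f}x")      # 1.0x
```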

But what if we aimed even further afield? Let's throw out some random numbers: right now Americans spend $28 billion per year on candy, gum, and potato chips, and some $36 billion on cigarettes. Plus billions go towards advertising. But after that's all done, we end up spending further untold billions on health care to treat "lifestyle-induced" diseases—which make up a chunk of the national medical bill—and scribble out checks worth billions to the weight-loss industry. Together, all that money increases GDP, and produces "growth". But as pudgy, emphysema-ridden human beings trying to diet and stay healthy, do we really get anywhere with all this spending? Pay to get fat, pay to lose weight; pay to tar our lungs, pay to get healed. Ignore the larger questions of human freedom. If we magically didn't have any of these things, growth might be lower, but it's not immediately clear we'd be qualitatively any worse off, right?

Of course, it doesn't quite work that way. Presumably Frito Lay links up with the rest of the economy in a way that helps "useful" (whatever that means) industries grow—helping the aluminum industry, for instance. Or it creates jobs that in turn boost consumer spending. The math behind compound annual growth is pretty compelling. So do these broader "gains," across the economy, outweigh the "harm" created by, say, the gambling and prison industries—both of which contribute mightily to American economic growth? A region in Eastern Oregon, as I recall, was "growing" rapidly thanks to a state prison and nerve gas incinerator in the area. Is that "good"?

Economists say that first world countries must focus on increasing their growth—grow, grow like the wind!—so that we can help poor countries grow and make the lives of their people better off. But what happens when the United States is growing thanks to its sprawling arms industry, which in turn fuels conflict abroad? Would a little less growth have been "better"? (Sorry for the double quotes, I'm addicted.)

On a few issues, obviously, we calculate and make these trade-offs; environmental regulation, for instance. On a deeper level, we don't really know what we're getting for all this growth. Better lives? Better compared to what? The 1900s? Is that our yardstick for progress?

We've known for a long time that rising incomes, in themselves, don't increase human happiness, since our expectations of what makes life good expand as that income rises. And one's happiness is usually relative to one's place in society, not one's absolute level of wealth. Benjamin Friedman's book, at one point, suggests that economic growth can almost be thought of as a form of social control—as long as a person's own income is rising, he or she cares (somewhat) less about his or her relative place in society. Maybe. The United States over the past 30 years is a strange data point for this—median wages have stagnated, inequality has vastly increased, but as this paper shows, poorer Americans care less about inequality than their European counterparts who, it seems, are doing better. Interpretations welcome.

B. Friedman also notes that economic growth correlates with the openness of a society, increased tolerance, democracy, etc. But we could easily have a fallacy of composition here. Which aspects of growth produce these things? Perhaps it is too difficult to separate it all out. Movements afoot to create things like the Genuine Progress Indicator seem enticing, and could potentially be of use to policymakers. Personally, I've been dazzled by the Mustache of Understanding, so never fear, in the end I'll probably come around to the "pro-growth" position. Perhaps we'll get some green economists on board at TPMCafe to wreak havoc and convince us otherwise.

Continue reading "The Anti-Growth Abyss"
-- Brad Plumer 8:13 PM || ||
Trade Heresies

All those theological disputes in the Middle Ages probably didn't generate half so much heat as the intra-Democratic debate over trade. Why is this? I can see why there are arguments between the various camps, but I've never really understood why liberal "free traders" look bug-eyed at trade critics as if the latter were lunatics sacrificing sheep to Baal. What is trade, after all? Isn't it just a market for goods and services that happens to use multiple currencies? In that case, it cannot be "free" any more than the U.S. economy can be "free," for obvious reasons. So it's fair game for basic liberal meddling, no?

Most NAFTA liberals support regulations in the "trade" between states, like a federal minimum wage to prevent a race to the bottom, along with environmental and workplace regulations—at the price, no doubt, of some "inefficiency." Obviously you want to design these regulations properly, hence the room for dispute, but they're not illegitimate as such. Liberals allow rent-seeking at home, so why not across borders? Labor unions here at home, like trade barriers abroad, "distort" the market; the question is who you want to help and why, and at what cost. The WTO sucks for the same reason that rule-by-lobbyist over the past thirty years in Washington has sucked. Obviously Milton Friedman would find this reasoning flawed, and maybe the analogy's imperfect here and there, but why so many liberals should find all of this the height of heresy is very curious.

P.S. Kash sheds some useful light on the debate here.
-- Brad Plumer 7:48 PM || ||

November 9, 2005

Sprawl!

In Slate, Witold Rybczynski argues that urban sprawl isn't a uniquely American phenomenon—something spurred on by our automobile-heavy culture and single-use zoning laws. No, sprawl's something that happens in all cities at all times; it even happened in ancient Rome! "Sprawl is and always has been inherent to urbanization." Um, okay. But that doesn't mean the sort of sprawl you see in American cities is always inevitable, right? Clearly there are degrees and different types here. So what causes which sorts?

One of the more fun papers I've ever come across is "The Spatial Distribution of Population in 35 World Cities," by Alain Bertaud and Stephen Malpezzi, which tries to pin this down. Perhaps not very surprisingly, they find that the "density gradient" of a city flattens out, i.e., people start fleeing for the suburbs, as income and population rise—supporting Rybczynski's point that this stuff is partly inevitable and can never be totally squelched—but on the other hand, transportation costs and urban regulation do make a big difference too.
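For the curious, the "density gradient" in papers like this one usually refers to the classic negative-exponential model, D(r) = D0 * exp(-g * r): density D falls off with distance r from the center, and a smaller g means a flatter, more sprawling city. A minimal sketch of how you'd estimate it, with made-up data:

```python
import math

# Hypothetical (distance_km, density_per_km2) points for a single city.
# The numbers are invented purely for illustration.
obs = [(1, 12000), (5, 7500), (10, 4000), (20, 1200), (30, 400)]

# Fit ln D = ln D0 - g*r by ordinary least squares on the logs.
n = len(obs)
xs = [r for r, _ in obs]
ys = [math.log(d) for _, d in obs]
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)

print(f"estimated gradient g = {-slope:.3f} per km")  # bigger g = steeper, more compact city
```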

By the way, and very curiously, Bertaud and Malpezzi cite a past paper of a similar sort noting that higher crime in the central city seems to decrease sprawl while better education in the city increases it. That's a bit hard to explain, to say the least.
-- Brad Plumer 6:48 PM || ||
The Trouble with Housing Policy

James K. Galbraith argues that expanding homeownership is a good and proper policy route for helping working families to increase wealth. George Fredrickson made a similar point in a recent New York Review of Books essay, "Still Separate and Unequal," where he discussed the vast wealth gap between white and black Americans, and argued that historical disparities in homeownership were to blame:
How did this vast inequality come about? It was mainly the result of the greater white access to home mortgages that were insured and subsidized by the federal government. Before the 1930s a home buyer had to put down 50 percent of a house's price and could get only a relatively short-term mortgage, perhaps only ten years. By the 1950s, as a result of a series of federal housing programs, including the GI Bill, most Americans could get long-term mortgages—up to thirty years—with a down payment as low as 10 percent. By 1984 seven out of ten whites owned their own homes, worth on average $52,000. But only one in four blacks owned a home, worth, on average, less than $30,000. ...

The advantages of whites over blacks ... were more characteristically Northern than Southern; they manifested themselves in the growth of virtually all-white suburbs outside the major cities and virtually all-black ghettos within them. This new form of racial segregation was not simply the product of private choices, among them the refusal of white home-owners to sell to blacks, blockbusting and the racial "steering" of home buyers by real estate agents, and the personal prejudice of bankers asked to approve loans for blacks.

The urban segregation that has contributed so much to the persistence of black inequality came about in large part because between the 1930s and the 1970s federal housing agencies refused to approve mortgage loans in neighborhoods that were "redlined," which meant property values were deemed uncertain because of the presence of blacks.
True enough. All the same, modern-day housing policy to correct this imbalance, especially for those on the bottom of the scale, sometimes seems misguided. The Bush administration, like its predecessor, has made a point of offering subsidized mortgages to low-income and especially minority families, which is a great idea in theory, as Galbraith's and Fredrickson's pieces might suggest. But so long as homes remain unaffordable for 80 percent of all renters, including 21 million renters who couldn't get mortgages under even the loosest of underwriting standards, these sorts of policies will only go so far.

Lower-income families that can afford homes, meanwhile, often end up with units in need of costly repairs or located in poor neighborhoods plagued by crime and unemployment. Not the best way to create wealth, obviously, or reduce the inequality and segregation Fredrickson's talking about. In Baltimore a few years ago, reporters discovered that homes basically falling apart were being "patched up" and sold to low-income families at inflated prices. In the South, 40 percent of low-income home-buyers were steered into trailer parks on leased land. Not to mention the fact that extending homeowner credit to low-income and/or minority neighborhoods usually opens the door for predatory lenders to walk on in.

Plus, it's not even clear that owning a home is always a fantastic wealth-enhancing strategy for low-income families. It's true that the median wealth of low-income homeowners is 12 times that of renters with similar incomes, and most of that comes from the home. But renters and owners tend to be very different people to begin with, at different stages of the life cycle, in different financial situations. How "good" an investment a home is often depends on when the owner enters the market, how long he or she holds the property, local market conditions, etc. On the downside, some low-income families who buy a home can quickly find themselves assailed with all sorts of costs—insurance costs, property taxes, utility bills—and often borrow against the equity of their home in a financial pinch, erasing any wealth.

That's not to say Galbraith or Fredrickson is on the wrong track; clearly they know what they're talking about. Still, we hear about policies to promote homeownership—from both parties—as a strategy for helping working families, and they deserve far more scrutiny. It's troubling, for instance, that the percentage of mortgage loans ending in foreclosure has risen from 1.24 percent in the 1990s to 1.46 percent today—a potential sign that people are being steered into homes before they're ready. A truly progressive housing policy, perhaps, would increase the stock of affordable housing and help out low-income renters until they're ready to own a home. What we have now looks more like a policy primarily intended to benefit lenders—who, these days, depend on sub-prime loans to low-income families to maintain their profits—while slashing rental-assistance programs like Section 8.

Continue reading "The Trouble with Housing Policy"
-- Brad Plumer 2:03 PM || ||

November 8, 2005

Last-Minute Wavering

One of the hidden benefits of having a scarcely-read blog is that I can change my mind about an issue at noon on Election Day and not worry that I've somehow led readers astray.

Both Kash Mansori and Mickey Kaus knock down some of my earlier objections to Proposition 77, Schwarzenegger's plan for redistricting reform here in California. Both of them argue that the proposition's "compact district" criteria wouldn't cram California Democrats into urban Bantustans, as I feared, because they're already crammed as tightly as possible into those districts. In fact, Kash argues that if the proposition passes, Democrats might even gain seats. I think this depends entirely on how that panel of retired judges draws the lines, and especially on how they divvy up Los Angeles, but yes, theoretically it's possible. More likely, though, Schwarzenegger wouldn't have done this if he didn't think it would knock off enough Democratic seats to keep them away from the 2/3 supermajority needed to pass a budget.

Kaus, for his part, notes that even if the redistricting only made a dozen seats newly competitive, that's still more than is the case now—the California legislature is slightly less competitive than the old Soviet Politburo. (Fortunately our Bolsheviks have term limits.) That's true, although it's not entirely clear why we should prefer that the fate of the legislature rest in the hands of "swing voters" in a dozen random districts. Injecting a little "competition" into a broken system can sometimes just leave you with a broken system prey to the odd bout of randomness. Is that even a partial fix? Perhaps not.

More interestingly, Kaus notes that simply taking gerrymandering power out of the hands of party bosses would curb the power of those bosses, making for a less centralized legislature. Actually, I think that Schwarzenegger's redistricting reform could have the opposite effect, partly, as the most senior members of each party would start to hail from geographically safe, and ideologically extreme, districts—whoever represents San Francisco and Orange County, say; places that will never be competitive—and hence, the polarized leadership would become even more powerful, since they're the only ones with extra cash on hand and no need to worry about re-election. That's something to think about before adopting a "compact district" system nationwide. The Congresspeople who stick around the longest will inevitably end up hailing from places like Provo, UT and Berkeley, CA. (Of course, we already have something like this under the current system.)

Kaus also suggests that redistricting would create more compromise within the legislature, which could put an end to California's insane government-by-initiative, something that has certainly crippled the state. That's an important angle, although I think the temptation for governors and wealthy interest groups to appeal directly to voters is too great, and initiatives will continue. Even "competitive" legislatures will fail to act now and again, and when that happens, it's initiative time. This won't end.

So bottom line. There are much better anti-gerrymandering reforms out there. Under Schwarzenegger's proposal, redrawn districts would have to be approved by voters, after million-dollar campaigns, and if a scheme was voted down, it would still go into effect anyway. That's ludicrous. The benefits to this redistricting reform are, I think, wildly oversold. Maybe the downsides are as well. In that case, the important question is this: Would passing Proposition 77 make other, better redistricting reforms more likely (because it sets the reform ball rolling) or less likely (because enough voters will be satiated with this "reform")? The answer to this question should decide one's vote. I will probably still say "no". But if the initiative fails, as seems likely from the polls, it's important to get another, better redistricting reform on the table.

Continue reading "Last-Minute Wavering"
-- Brad Plumer 10:57 AM || ||

November 7, 2005

Soak the Oil Cartels!

Andrew Samwick has the lowdown on two provocative arguments about gas taxes by Jayanta Sen. Sen's first paper argues that a tax on imports of crude oil, far from crippling the economy, would "transfer wealth of $100 billion+ from [oil-producing] foreign governments to the U.S. consumers," along with decreasing oil use.

The second paper points out that if the major oil-buying countries—the U.S., EU, Japan, China, India—all formed their own cartel of oil importers, they could save further billions by bargaining down oil prices with OPEC. The U.S. alone would save some $200 billion a year. Good stuff, but I can see one downside to the second idea: uppity American seniors might start to wonder why economists all think "OPIC" should be able to ratchet down oil prices but Medicare shouldn't do the same with drugs.
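The intuition behind the first paper, at least, is standard tax-incidence logic: when the supply of crude is inelastic, most of an import tax comes out of the producers' price rather than consumers' pockets. A toy example, with invented linear curves:

```python
# Toy linear market for imported crude. Supply here is steep (inelastic),
# so producers absorb most of the tax through a lower world price. All
# curves and numbers are invented for illustration only.

def equilibrium(tax):
    """Demand: P_d = 100 - Q. Supply: P_s = 5*Q. Buyers pay P_d = P_s + tax."""
    # 100 - Q = 5*Q + tax  =>  Q = (100 - tax) / 6
    q = (100 - tax) / 6
    return q, 5 * q  # (quantity, price received by producers)

q0, p0 = equilibrium(tax=0)
q1, p1 = equilibrium(tax=12)
print(f"producers absorb {p0 - p1:.1f} of a 12.0 tax")      # 10.0
print(f"consumers pay the remaining {(p1 + 12) - p0:.1f}")  # 2.0
```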
-- Brad Plumer 2:42 PM || ||
Did Life Come From Outer Space?

Scientific American considers the evidence:
This new understanding of life's origins has transformed the scientific debate over panspermia [i.e., the theory that life originated from tiny seeds across the cosmos]. It is no longer an either-or question of whether the first microbes arose on Earth or arrived from space. In the chaotic early history of the solar system, our planet was subject to intense bombardment by meteorites containing simple organic compounds. The young Earth could have also received more complex molecules with enzymatic functions, molecules that were prebiotic but part of a system that was already well on its way to biology. After landing in a suitable habitat on our planet, these molecules could have continued their evolution to living cells. In other words, an intermediate scenario is possible: life could have roots both on Earth and in space.
That's the debate, anyway. One crucial question is whether outer-space organisms coming to Earth on the meteor express could have survived the hot trip through the atmosphere. Hard to do. Not only that, but they'd also have to have survived being rocketed out of their planet of origin. Not fun, either. Both moves seem "plausible theoretically," but eventually scientists will have to scrounge up some bacteria and put them through interstellar hell to see whether the little critters actually survive, say, a trip from Mars to Earth after meteors dislodge chunks of rock from the former. Very cool. In a related vein, I was once told that the Yucatan meteor that wiped out the dinosaurs probably deafened every living organism in the world immediately on impact. Also cool, but it's surprisingly hard to find confirmation for this on the web.
-- Brad Plumer 2:18 PM || ||
Forced Savings

I shouldn't really discuss Gene Sperling's book any more until I actually, er, read it, but here's another point he brings up. One consistent plank of Democratic economic policy is the idea of "forced savings." Liberals rebelled, rightly, against privatizing Social Security and replacing social insurance with individual stock portfolios. Nevertheless, most Democrats believe that we should still have some sort of "add-on accounts" that would force short-sighted Americans to save for retirement. Some economists even believe that a higher savings rate will lead to higher economic growth, and exhort us so. Seems fair, but let's look at the numbers here more closely.

Judging from the BLS's 2003 Consumer Expenditure Survey, the people who save in this country are overwhelmingly wealthy. The bottom income quintile pulls home $8,201 a year before taxes, and spends $18,492. Meanwhile, the top quintile hauls home $127,146 a year before taxes, and spends $81,731. The poor are borrowing to the hilt and the rich are happy to oblige them. At the end of 2004, the share of after-tax income that went toward debt service was roughly 16 percent, and that figure is much higher for low-income families. Bankruptcies are skyrocketing. So why are these families borrowing so much? Robert Pollin of EPI put out a study in 1990 arguing that the bottom 40 percent of Americans were borrowing to compensate for stagnant or falling wages. More recently, Elizabeth Warren and Amelia Warren Tyagi's The Two-Income Trap compiled similar evidence—the 6,000 percent increase in credit card debt between 1968 and 2000 didn't come about because people were buying frivolities; they were simply trying to tread water, pay for health care, that sort of thing.
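Just to run the quick arithmetic on those BLS figures (crude, since spending can exceed income through borrowing, transfers, and drawn-down savings, which is rather the point):

```python
# Pre-tax income vs. annual spending, from the BLS quintile figures
# quoted above. The "gap" is a rough proxy for saving or dissaving.
quintiles = {
    "bottom 20%": (8_201, 18_492),
    "top 20%": (127_146, 81_731),
}

for name, (income, spending) in quintiles.items():
    print(f"{name}: income ${income:,}, spending ${spending:,}, "
          f"gap ${income - spending:,}")

# bottom 20%: gap -$10,291  (covered by debt, transfers, dissaving)
# top 20%:    gap  $45,415  (available to save)
```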

Now obviously if you're in the creditor class, this state of affairs looks pretty damn good. Not only do you earn interest on your surplus funds, but mass borrowing among low-income Americans reduces pressure for higher wages, by letting people buy things they couldn't otherwise afford, and it certainly makes America look like a middle-class consumer society, thus keeping the angry hordes from rioting. (For neoliberals who believe that society will be "fair" when everyone can own a pretty prom gown, this is fantastic. Ditto for those who think we should measure poverty by whether or not a person can afford a refrigerator.) One might also note that workers with their Visas maxed out are much, much less likely to go on strike, agitate for social change, or do anything unseemly.

The downside, of course, is that among the lower classes, very few people have much wealth to speak of. The richest 10 percent of Americans own 79.8 percent of all financial assets. The bottom 40 percent, collectively, own as much in liabilities as in assets. (Average wealth among the bottom 10 percent has been consistently declining since the 1960s.) Among minorities, especially African-Americans and non-white Hispanics, the disparities are even worse. In 2001, the average black household had a net worth equal to about 14 percent of the average white household. It's a real problem.

So the answer, then, is forced savings, right? Well, I don't know. If real wages had been growing at a decent clip these past three decades, households might have saved much more than they did. So that's one solution right there, along with Edward Wolff's idea of a wealth tax for redistribution. In contrast, government-funded "savings incentives" usually just provide tax shelters to the wealthy, who as we've seen are the lucky few who can afford to save. Having the government drop extra pennies in the accounts of the poor will help, but would barely cancel out the staggering liabilities among the poorest 40 percent.

And what about the larger economic benefits to savings? Will boosting the savings rate in this country boost growth? Hard to say. In this congressional testimony, James K. Galbraith noted that increasing the savings rate—by government fiat, say—could just as easily depress consumption. Traditionally, economists haul out graphs showing that higher savings rates are associated with periods of higher economic growth, but it might just be that it's the latter causing the former (i.e., wages rise so people can save more). A bit of skepticism never hurts.
-- Brad Plumer 12:29 PM || ||
KKK Revival

This doesn't sound good at all:
Most ominously, the Ku Klux Klan has been increasingly active in northern Alabama and southeastern Tennessee, using immigration to revive its white-supremacist message.
That's from Clay Risen's New Republic piece on rising anti-immigration sentiment in the South. Much has been written on the ways in which racism helped to cripple the rise of the welfare state in the United States. Anger at immigrants, especially Mexicans, could easily do the same. The San Francisco Chronicle recently ran a long story on the backlash against Hispanic workers coming into New Orleans and doing work for rock-bottom wages. There's no happy answer to give anybody here. I used to think that the immigration issue would split apart the Republican Party, caught between its Tom Tancredo nativist wing and its need to appeal to Hispanic voters. But that might be the wrong way to look at it.
-- Brad Plumer 11:39 AM || ||
Pro-Growth Progressives

Needless to say, liberals have been way too fond of agreeing with each other since Bush came to office, so it's time for a bit of ideological bickering. Over at TPMCafe's BookClub this week, Gene Sperling is discussing his new book, The Pro-Growth Progressive, which apparently tries to reconcile liberal priorities with policies to promote economic growth. So, for instance, we get calls for "fiscal discipline" and individual forced savings, along with plans to strengthen health and education, which will supposedly make our workforce more "competitive." Oh yeah, and free trade uber alles. (By which I assume he means ending protectionism for blue-collar workers, and not for professionals or pharmaceutical companies.) In other words, DLC policies are the real pro-growth platform; accept no substitutes!

Now I haven't read Sperling's book yet—it's sitting on my desk and looks good—but color me unconvinced so far. At most, this looks like tinkering at the margins. Yes, yes, as a country we should certainly be investing in universal health care, strengthening public education, and providing other support services for working families and individuals—though not because these policies will definitely boost economic growth, but because they're the bare minimum requirements of a decent society. Any benefits that accrue from acting decent—IWPR, for instance, has argued that providing seven days of sick leave to workers would save companies $21 billion a year—are mere niceties. If they were a drag on growth, we should still do them.

Would better education be good for economic growth, and help people get the jobs they need, as Alan Greenspan would argue? Perhaps at some level, but that's not the core problem here; read "The Job Ghetto" by Katherine S. Newman and Chauncey Lennon and it becomes clear that educated workers in Harlem are being turned away from work they're perfectly qualified to do. Or read this old post. A lack of jobs and wage support, rather than a lack of education, is the thorny brush here. Meanwhile, a single-payer health care system would, of course, relieve businesses of the burden of dealing with health insurance, and that would give GM, Ford, and the rest a tidy boost in profits—and might even make them competitive, as Tom Friedman loves to say—but so long as globalization continues to further inequality, and workers see little of the benefits of economic growth, there's not much reason to care.

So here's where we part ways. Ultimately, the Democratic Party is still toeing the old Newt Gingrich line on macroeconomic policy, which is roughly: let the Fed do its job crushing inflation—and raising rates when unemployment gets "too low"—and get the budget back into balance. Then fiddle with progressive policy. This has been the ideal despite the fact that the late '90s showed that unemployment can go much, much lower than previously thought, and the Fed's rigid enforcement of 6-7 percent unemployment during the Reagan, Bush I, and early Clinton years was a swindle of colossal proportions, keeping wages stagnant. Jared Bernstein and Dean Baker are a must read on this. Meanwhile, the "fiscal discipline" obsession I don't get; if we repealed the Bush tax cuts and spent all of that money on health and education, we would still have those big deficits, but that wouldn't be a problem. Take care of people and our kids will be smart enough to figure it out. After that, it's time to reduce inequality and ensure that any future gains in productivity are shared with workers, rather than fattening corporate profits. There are other ideas but we're running out of space.

Not that the DLC platform—which, as best I can tell, is what Sperling's supporting—isn't worth fighting for. It's good. Great, even. Things like the EITC make a real difference in people's lives. I'd canvass for it. It's probably "good politics" too, for all I know. But what I'd prefer, as I wrote in a book review back in April (which has a lot more examples of this sort), is for the Democratic Party to come up with an actual economic vision, rather than an array of wonky policies to tack onto the current structure of American capitalism. As Jack Kemp always said, if you're going to go for it, you should really go for it. Especially if you're not even running for election.

Continue reading "Pro-Growth Progressives"
-- Brad Plumer 11:20 AM || ||

November 6, 2005

Liberal Interventionism: Who Needs It?

I've been trying to think of something clever to say about the fate of liberal interventionism in the wake of the Iraq war, but nothing's coming. Nadezhda, though, brings up a good point:
[I]t's a mistake to focus too much on Iraq as a centerpiece of a debate over when to intervene. Humanitarian concerns, and even democracy promotion, were simply not why the BushAdmin went to war.
That comes after her excellent explanation of how the war's key architects, especially Cheney and Rumsfeld, aren't really neocons, but rather nationalists who "want to sustain US hegemony... by continually using US power to eliminate enemies and dissuade potential competitors." True enough. On the other hand, you'd be hard pressed to find any "intervention" by the United States done primarily because of "humanitarian concerns." Kosovo often gets trotted out as a "good" liberal intervention—done to prevent ethnic cleansing, as the story goes—but here, for instance, is how Sidney Blumenthal described the Clinton administration's thinking in 1999:
Kosovo was the central challenge remaining to full European integration after the fall of the Soviet Union. If the crisis there were allowed to fester and ethnic cleansing allowed to succeed, Europe would be inundated with refugees. The human tragedy would be appalling. This might well demoralize the center-left political parties, but right-wing ones would seize on the developments to gain influence, exploiting fears about increased immigration and asylum seeking. NATO would seem a feckless, purposeless organization: If it could not be mobilized to ward off this new threat in Europe, what use was it? The incentive for former Warsaw Pact countries to join it would be drastically reduced; NATO expansion would become an empty exercise.

Moreover, the absence of U.S. power would trigger traditional rivalries among the European countries and hamper Britain's influence, given its link to the United States. Reform in Russia would be slowed down or derailed, as conservative political forces there would be galvanized by Serbian defiance of the West. And without the Balkan puzzle solved, Turkey and Greece might also be propelled into renewed conflict.
Notice that the fate of the actual people being massacred ranks far below strategic concerns—saving NATO, stopping all those filthy refugees from flooding Europe, and making sure the conflict doesn't spread outwards. Indeed, in his first radio address after his Senate trial, Clinton focused on the national security threat posed by Serbia, rather than humanitarian issues: "Bosnia taught us a lesson: In this volatile region, violence we fail to oppose leads to even greater violence we will have to oppose later at greater cost." Human rights, obviously, played some part in the selling of the war, but the fact that Kosovo really was important strategically—at least in Clinton's mind—explains why we went. It was a direct threat to the "liberal internationalist" order, much like Saddam Hussein was a threat to Dick Cheney's world order.

Now fair enough, the U.S. can't stop humanitarian crises everywhere, and if we're going to choose between, say, Kosovo and the Congo, we may as well factor in "national interest" to make that choice. And as world orders go, I somewhat prefer Bill Clinton's to Dick Cheney's (which is why, among other things, I think the Iraq war was a much worse idea). Still, as with Iraq, the humanitarian reasons for the Kosovo intervention were mostly a happy gloss to make people feel better about the war. Meanwhile, the "Clinton doctrine"—which stated that the United States has the right to intervene, without UN approval, when countries commit gross human rights violations—does little more than provide the United States an extra excuse, in case it needs one, to use military force abroad for other purposes.

For liberal interventionists, all of that might be okay so long as human rights are still served—who cares why we're stopping genocide so long as we're stopping it, right?—but in practice, that doesn't always happen. In Kosovo, NATO failed, for starters, to deploy adequate security after the bombing campaign ended, and as a result, ethnic Albanians began retaliating against non-Albanians—kidnappings, looting, murder, the works. "Oops." Maybe that was just a mistake in the execution, but funny how we seem to hear that excuse a lot. The United States also "blundered" by failing to secure Baghdad in early 2003. "Oops." Etc. A long string of mishaps ensued in both places. Was that just because rebuilding a country is extremely difficult? Well, yes, but it's also true that interventions that aren't carried out primarily for humanitarian reasons will, more likely than not, end up making a lot of these sorts of mistakes.

So what does all this mean? Even an administration that truly cares about using military force to promote democracy and human rights abroad—something we're never likely to get, mind you—will still have to pick and choose where to intervene. So it will likely choose based on strategic concerns, or other ulterior motives. (Public opinion is another big factor—it's hard to sell a war on morality alone.) Those "other" concerns will often end up dominating the conflict, and could thwart the humanitarian focus, or even make the intervention counterproductive from a human rights standpoint. This certainly won't always be the case—Britain's military presence in Sierra Leone in 2000 wasn't exactly carried out with the noblest of intentions, but it still did a great deal of good—but the logic of intervention is pretty grim. At the very least, it's reason to be wary. Phrased another way, while I think that, for instance, NATO should stop the genocide in Darfur, I would also be very suspicious if, in an alternate universe, NATO actually was interested in sending troops into Darfur. If that makes sense.

At any rate—and this post is dragging out, I know—we should also realize that in the years since the Cold War ended, war and genocide have been declining dramatically, and this hasn't come about because of humanitarian interventions by the U.S. and other First World powers. No, it's come about, in part, because colonial empires and Cold War rivals have stopped inciting war in the Third World, and because the international order has become more "activist" in all sorts of non-military ways. According to the 2005 Human Security Report, since 1990 we've seen:
  • A sixfold increase in the number of preventive diplomacy missions between 1990 and 2002

  • A fourfold increase in peacemaking activities between 1990 and 2001

  • An elevenfold increase in the number of economic sanctions in place against regimes around the world between 1989 and 2001

  • A fourfold increase in the number of UN peacekeeping operations between 1987 and 1999
One could go on. And the HSR provides evidence that these things, along with economic development and human rights law, all work to make the world less violent. Meanwhile, the rapid spread of democracy in the 1990s has helped to curb war, and most of that has been accomplished without American invasions. The basic lesson here, it seems, is that "humanitarian interventions" are really a somewhat minor issue in the grand scheme of things. (Although I still worry that allowing things such as, say, genocide in Darfur undermines that order by setting a bad example for other would-be genocidaires.) So while I doubt that liberal interventionism can ever serve as a workable basis for U.S. foreign policy, the positive side is that it may not matter much. The hard evidence suggests that the most effective things the U.S. can do for human rights are, first, to stop fueling conflict abroad—regulating the global arms trade would be a nice start—and second, to bolster the international order that genuinely has done a lot for world peace of late.
-- Brad Plumer 6:10 PM || ||

November 3, 2005

Pensions, Socialism, Catastrophe

Over at Tapped, Ezra Klein points out that, as the Pension Benefit Guaranty Corporation starts bailing out more and more troubled companies by taking over their pensions, it will start controlling more and more stocks, which means that Congress will technically "own" a greater share of corporate America. Bam! Instant socialism! He also points out that privatizing Social Security would have had a very similar effect—if Congress could choose the index funds in which workers invested—thus "potentially wreaking all sorts of havoc."

Interesting thought, though it's hard to see how worried we should be about all of this. Here in California, the two big pension funds—CalPERS ($180 billion) and CalSTRS ($125 billion)—have, under Angelides, engaged in a limited bit of activism, dumping tobacco stocks and the like, but it never seems to go anywhere. Divesting doesn't have much effect on a company's share price. On the other hand, a government-run pension fund could acquire enough shares in a company to influence the vote on this or that. Maybe this is cause for concern, though I have a hard time believing that activist pension funds could do any more damage to the economy than hedge funds that regularly buy up shares of a company, tip the vote in favor of bad mergers, and then reap the profits at the expense of shareholders—as probably happened with the Compaq-HP deal. So I'm conflicted. William Greider's "The New Colossus" made a decent case for activist public pension funds like CalPERS, but there also won't always be progressive activists at the helm, obviously.

At any rate, reading Roger Lowenstein's "The End of Pensions" reminded me of yet another way in which America's pension problem is related to Social Security privatization—or any mandatory savings plan. For years, many companies have been predicting wildly optimistic rates of return for their pension-fund stock holdings so that they could scale back contributions to the fund and use the cash for other purposes. That, in turn, drives up the price of their own stocks, many of which are held by... pension funds. Can we all say "Ponzi"? Right. But the system's falling apart now that those rates of return have failed to materialize, and there's no way out for corporate pensions, which are under-funded by some $450 billion.
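The mechanics of the trick are simple compounding: the higher the return a fund assumes, the less the sponsor must set aside today against a fixed future obligation. A stylized sketch, with invented numbers:

```python
# How much must be set aside today to cover a $1,000,000 benefit due in
# 30 years, under different assumed rates of return. Invented numbers,
# purely to illustrate the leverage in the return assumption.

def required_today(obligation, years, assumed_return):
    """Present lump sum that grows into `obligation` after `years`."""
    return obligation / ((1 + assumed_return) ** years)

OBLIGATION, YEARS = 1_000_000, 30
for r in (0.05, 0.07, 0.095):
    c = required_today(OBLIGATION, YEARS, r)
    print(f"assumed return {r:.1%}: contribute ${c:,.0f} today")

# assumed return 5.0%: contribute ~$231,000 today
# assumed return 7.0%: contribute ~$131,000 today
# assumed return 9.5%: contribute ~$66,000 today -- rosy assumptions, tiny bills
```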

The Bush administration, to its credit, wants to tighten the rules for pension funding. But if firms were required to set aside even more money for pensions, many might go bankrupt, or stock prices might decline, which would in turn further endanger pensions, and on and on. Conversely, if the PBGC started bailing more funds out—with taxpayer money—that would only increase the "moral hazard," encouraging more firms to make risky investments. Either way, disaster. One conceivable exit strategy is for Congress to create mandatory savings accounts for all workers and pour all that taxpayer money—or the Social Security Trust Fund—into the stock markets, creating a bubble that could help some of those rickety pension funds out. It's not clear that this would actually work, though. So disaster's probably inevitable, unless someone dreams up something cleverer.
-- Brad Plumer 3:33 PM || ||
Blond Sumos

Now and again I say to myself, "You know what this world needs? More blog posts on sumo." So here's a good Washington Post piece about the rise of—wait for it—blond sumo wrestlers, who, it's hoped, will "become a metaphor... for a reluctantly globalizing Japan." But will it? Everyone's always wondering, after all, when Japan plans on opening the floodgates and letting in immigrants, so as to avoid becoming a country filled only with senior citizens—and the pension burden that comes with that—but it's doubtful sumo will drive the needed cultural change here. Japan has had foreign-born wrestlers for years, including Mongolian and Hawaiian yokozunas, but that doesn't seem to have changed much of substance. It's still a fairly xenophobic place, and that's a problem too.
-- Brad Plumer 10:35 AM || ||

November 2, 2005

Pedantry

Usually this stuff just deserves mockery, but oh, what the hell. It looks like every conservative on the planet has his or her knickers in a twist because black Maryland Democrats have been making "racially-tinged attacks" on Lt. Gov. Michael Steele, a black Republican running for Senate. Well, what of it? When it comes down to it, I don't think throwing Oreos at a politician—as Morgan State students did to Steele in 2002—is ever that productive. That's just me. But conservatives are calling it "racist." The trouble is that everyone seems to mean something different by the word. So...

Just hash out the background assumptions. Many liberals, give or take, believe some variation of the following: 1) power and inequality in this country matter a great deal, especially economic power; 2) black Americans, as a group, have very little power—economically and socially; 3) "racially-tinged" remarks are vile mainly insofar as they reinforce unjust power relations. So long as you believe these three things, then no, a black progressive calling a black politician "Sambo" won't always be considered racist. It just doesn't necessarily follow. Especially when he did it after Steele tacitly endorsed Gov. Ehrlich's appearance at an all-white country club. (Actually, Steve Gilliard's post assumes two more things: 4) black Americans have never had anything handed to them, and can only eliminate racial inequality by sticking together; and 5) any black person who defects from that cause, like Steele, and joins a party that actively perpetuates racial inequality, is by definition hurting black Americans much more than, say, a white person doing the same.)

Anyway, so much for semantics, I guess. Any of those five points would be a good starting place for discussion. (#5 is the fun one, obviously.)
-- Brad Plumer 6:27 PM || ||
Militiamen

Psst. Quick tip. If you're keen on joining your neighborhood right-wing militia, you definitely don't want to look like an amateur. Like these guys:
Their faces were streaked with green and black paint, and they listened closely to their training instructor, Super Six, an infantry veteran of the Persian Gulf war of 1991, who motioned them off the range. Moments later, the now-rested marchers took their positions at the range, and Six sat back beneath a tree to watch. "For some of these guys it's just fun and games; they just aren't serious," Six said, idly thumbing rifle rounds and snickering at the gaudy firearms and projectile launchers fastened to a few of the shooters' guns. "Scopes are fine for hunting, but for shooting people, they're distracting," he said. "They keep you from seeing the guy sneaking up beside you."
Right, then. Leave the scope at home. That's from a Legal Affairs piece about the rise—well, continued rise—of homegrown militias since 9/11, a phenomenon the FBI seems content to ignore, more or less, at least so long as unhinged "eco-terrorists" still roam the streets, torching SUVs and chaining themselves to chemical plants or whatever it is they do. As for these militias, usually they don't do much harm in themselves—they swill beer, waste ammunition, and plan their defense against the UN invasion set to sweep in from Canada. Except that every now and again they inspire some lonely Army private to pile a few tons of fertilizer in his truck and go blow up a government building. Not so hilarious then.
-- Brad Plumer 5:06 PM || ||
Snooze

Jerome Siegel of UCLA asks the baggy-eyed question: why do animals need sleep? Why does anyone need sleep? Why do different species vary so widely in the amount of sleep they need? Here's his evolutionary theory:
"The analogy I make is between hibernation and sleep," [Siegel] said. "No one says, 'What is hibernation for? It is a great mystery.' . . . It's obvious that animals hibernate because there is no food, and by shutting down the brain and body they save energy." Sleep, Siegel suggested, may play much the same role. As evidence, he cited research that has found systematic differences in the way carnivores, omnivores and herbivores sleep: Carnivores sleep longer; herbivores, shorter; and omnivores, including humans, are somewhere in the middle.

    "If animals have to eat grass all day, they can't sleep a lot, but if they eat meat and are successful at killing an antelope, why bother to stay awake?" he asked.

On the other hand, mammals at greater risk of being eaten -- such as newborns -- spend large amounts of time asleep, presumably safe in hiding places devised by their parents. Supporting the evolutionary explanation, Siegel's own research has shown that when the luxury of safe hiding places is unavailable -- in the ocean, for instance -- baby dolphins and baby killer whales reverse the pattern found among terrestrial mammals. These marine mammals sleep little or never as newborns and gradually increase the amount they sleep as they mature.
So animals doze off because they haven't got much else going on. Okay. But why is there no override? And why is it so painful to avoid sleeping if, historically, we've only done it because, "Hey, why the hell not? No food here..." Maybe the next great leap in human productivity will come when scientists figure out how to let us forgo sleep. That could be exciting. Or maybe extraordinarily violent, what with people forced to be around each other all the time. It's hard to say how much of human society is the way it is because we need to sleep for eight hours a night. In the sleepless future, maybe, people won't even need homes; or at least, it's not clear that we would have developed the concept of a "home" in the first place if we didn't need sleep. Or maybe it's just good to have a place to do laundry. Hmm, must go find coffee now...
-- Brad Plumer 8:10 AM || ||

    November 1, 2005

    Myth of the Suitcase Nukes

    In the Wall Street Journal yesterday, Richard Miniter kicked around what he calls the "myth" of the "suitcase nukes." Most likely, he says, the Russians never made any such thing, and what sort-of-portable nukes did exist have almost certainly been destroyed. Good news if he's right, of course, though some of his points seem less than airtight. For example, here's Miniter's account of the Denisov investigation in 1996, which looked into allegations by Alexander Lebed, a Russian general, that anywhere from 50 to 100 Russian "suitcase nukes" were unaccounted for:
    Lebed's onetime deputy, Vladimir Denisov, said he headed a special investigation in July 1996--almost a year before Lebed made his charges--and found that no army field units had portable nuclear weapons of any kind. All portable nuclear devices--which are much bigger than a suitcase--were stored at a central facility under heavy guard.
    Well there we have it. Or do we? Here's a less-glossy account from the Center for Non-Proliferation Studies in late 2002:
    It should be noted that almost nothing is known about the methods of the [Denisov] commission's work: for example, whether it checked only records or was able to compare the actual inventory to records as well (if only records were checked, it cannot be said with certainty whether more warheads were missing or whether any warheads were missing at all). Since the commission was disbanded before it was able to complete its work, it has remained unclear whether it was able to confirm the alleged loss of warheads (i.e., it looked everywhere and failed) or simply did not have time to clarify the situation (Denisov's statement seems to imply the latter). It is not even known who the members of the commission were.
    Not quite as comforting. Also, some scientists have claimed that any suitcase nukes would have been controlled by the KGB, and so not listed in the records Denisov looked at, although this seems unlikely. In the end, people have said all sorts of things about "suitcase nukes," and it's truly hard to separate fact from bluster. The CNS report concludes, persuasively, that "the existence of smaller devices custom-designed for [Russian] Special Forces, probably analogous to American small atomic demolition munitions (SADMs), should not be ruled out… with a caveat that their existence should not be taken as fact." Fair and balanced, that one. But there is evidence, for instance, based on artillery shell designs, that Russian engineers could have created such a weapon. And the records are too patchy to prove that they didn't.

    Whether any of these theoretical weapons actually could have been stolen after the crack-up of the Soviet Union, meanwhile, is "impossible to say," and I don't think Miniter refutes the concerns of CNS conclusively. But. One very encouraging point, which Miniter hammers on, is that any truly portable nuclear device—weighing around 60 lbs.—would have had a very short maintenance period, like most Soviet weaponry, and would probably have deteriorated by now. Another point: the most likely time and place for a nuclear suitcase bomb to have been stolen would have been in or around Chechnya in the early 1990s. The Chechens, certainly, have had ample reason to threaten or actually use such a device. But they haven't. Huh. So the balance of hunches favors Miniter's thesis, no doubt, although this is also the sort of thing we really, really don't want to get wrong, and it would be nice to have some more solid information.
    -- Brad Plumer 6:46 PM || ||

    October 31, 2005

    Porn Stats

    Come to think of it, there hasn't ever been a good pornography discussion on this site—and this post probably isn't it—but some of the facts laid out in this American Sexuality article on the subject seem worth tucking away for future reference:
    From research and the testimony of women who have been prostituted and used in pornography, we know that childhood sexual assault (which often leads victims to see their value in the world primarily as the ability to provide sexual pleasure for men) and economic hardship (a lack of meaningful employment choices at a livable wage) are key factors in many women’s decisions to enter the sex industry. (For a good summary of this evidence see Margaret Baldwin’s article "Split at the Root: Prostitution and Feminist Discourses of Law Reform," 5 Yale Journal of Law & Feminism 47, 1992.) We know how women in the sex industry—not all, but many—routinely dissociate to cope with what they do. We know that in one study of 130 street prostitutes, 68% met the diagnostic criteria for post-traumatic stress disorder. (For details on this, see the work of Melissa Farley, "Prostitution, Violence, and Post-Traumatic Stress Disorder.")
    Clip. Good magazine, too, although I'm a little dismayed that left-wing critiques of Queer Eye for the Straight Guy haven't really advanced much in the past two years.
    -- Brad Plumer 6:08 PM || ||
    Alito and Abortion

    So Samuel A. Alito, Jr. will be the new Supreme Court dude. Emphasis on "dude". Or emphasis on "fascist". Whatever. Anyway, I've been reading his infamous dissent in Planned Parenthood v. Casey, the one in which he voted to uphold a spousal notification law for abortions, and it's important to hash this out. The conservative defense of Alito will be that it wasn't his job to decide whether the law was good public policy or not, merely to decide whether it was constitutional; and on the latter, he was upholding what he thought were the precedents on abortion at the time. That's not implausible; see this passage:
    Taken together, Justice O’Connor’s [earlier] opinions reveal that an undue burden does not exist unless a law (a) prohibits abortion or gives another person the authority to veto an abortion or (b) has the practical effect of imposing "severe limitations," rather than simply inhibiting abortions "to some degree" or inhibiting "some women."
    In Alito's defense, it's sometimes hard to figure out exactly what Sandra Day O'Connor intends in her opinions—often only she knows for sure—and prior to Planned Parenthood, the Supreme Court had placed restrictions on abortion that, while not "severe," probably did prevent some women from getting abortions. So Alito's ruling partially stems from previous Supreme Court sloppiness, it seems. Meanwhile, the plaintiffs who opposed the spousal notification law had not shown that the 5 percent of women who don't notify their husbands would in fact be harmed by the new law. (The law leaves an out for women who have "reason to believe that notification is likely to result in the infliction of bodily injury upon her.") On one level, then, Alito's opinion is sort of reasonable.

    But on another level, it's not. It's ridiculous. It's dangerous. It's wrong. According to Alito, because only a small number of women might face an "undue burden" in theory—and even that isn't known for sure—the law is just fine and dandy? What kind of legal principle is that? The Supreme Court obviously disagreed with Alito, noting that regardless of whether 95 percent of women would be unharmed by the law, "[l]egislation is measured for consistency with the Constitution by its impact on those whose conduct it affects." And that includes women potentially affected.

    This all matters very, very much because in an upcoming abortion case, Ayotte v. Planned Parenthood, the Supreme Court will decide just this sort of dry procedural issue: whether litigants need to show that an abortion restriction places an "undue burden" on women in the abstract—and is therefore unconstitutional—or must show that it places an "undue burden" in a particular case. Alito would appear to side with the latter view, and a ruling this way would make it very hard for women to challenge abortion restrictions (litigants would have to show that parts of the law affect them personally). The net effect would be that Roe v. Wade, for all practical purposes, would be crippled—states could leave restrictions on the books for many years before they were ever challenged.

    The political issue here is that over 70 percent of Americans support spousal notification laws, and if Democrats try to fight on that terrain, they could well lose. [EDIT: Sorry, I didn't intend this to mean that they shouldn't even try to convince people otherwise; they should.] But there's so much more at stake here.
    -- Brad Plumer 9:55 AM || ||

    October 29, 2005

    Feed the Beast

    "Starve the beast"—Grover Norquist's theory that cutting taxes and running massive deficits will somehow force Congress to rein in spending—obviously looks flimsy after the high-spending Reagan and Bush years. Even the Cato Institute agrees. But via AngryBear, this old Daniel Shaviro article makes the point that cutting taxes and running huge deficits not only fails to curb spending, but also increases the influence of interest groups:
    The increased fiscal gap also makes future government policy far less predictable. Having a looming debt of that size will stir every interest group in Washington to try to influence future policy. It won't be possible to take any government commitment for granted for more than a few years. With even Social Security and Medicare likely to be on the chopping block eventually, no group or lobby will be able to rely on political inertia to protect what it now has. That is an enviable state for members of Congress set on gaining campaign funds, but a worrisome situation for the rest of us.
    Judging from the past four years, this seems anecdotally true—out-of-control deficits lead not only to more spending, but worse spending, tilted heavily towards lobbyists and other interest groups. Alternatively, one could look at it this way: when various political interests see certain groups rewarded with tax cuts, they feel they should be rewarded too. So the squabbling begins. Meanwhile, so long as government spending is being paid for with borrowing and future taxes rather than present-day taxes, it's very easy for Congress to spend more than it otherwise would. (Which is why, of course, big-government liberals have usually opposed balanced budgets.) Ultimately, it's probably easiest to restrain government spending after tax increases—since that sets a general tone of fiscal austerity, and perhaps makes it easier for Congressmen to turn down spending requests. Divided government probably helps too.

    Obviously, though, from a Republican perspective, Bush- and Reagan-style tax cuts do accomplish two very important things. First, they redistribute wealth upwards—rather than tax the rentier class to pay for spending, the government just borrows from them instead and pays them interest for their generosity (during the Reagan years, that interest was paid with payroll tax hikes on workers). Second, the tax cuts put Democrats in a bind by forcing them to raise taxes if and when they come to power, which hurts them politically, and forces the "responsible" party to rein in its own preferred spending programs (the tax-cutting party, meanwhile, can spend more freely). It's all very clever, at least so long as deficits don't collapse the economy and prompt a socialist takeover of government. But that looks increasingly unlikely.
    -- Brad Plumer 8:41 PM || ||
    The Other Intelligence Story

    Hm. What to write about? Scooter Libby? Nah, not dull enough. Oh, what about John Negroponte's new "National Intelligence Strategy," released last week? Yes, that's the dullness we need. Important, though, especially since the new report doesn't have much to say about one of, I think, the United States' most persistent intelligence weaknesses over the past decade. More on that in a bit. For now, it's enough to note that Negroponte's report is full of resolutions to coordinate this and integrate that and improve the other, and it's all focused, naturally, on using intelligence to combat terrorism, stop WMD proliferation, and, uh, promote democracy. Apart from the last, which is bizarre, this is all pretty uncontroversial and likely useful—up to a point.

    The idea that we need to "improve and integrate" our intelligence is always a nice platitude, but it's worth stepping back and asking how we got to where we are. In this day and age, curiously, every major foreign policy move needs to be backed, it seems, by ultra-solid intelligence—as if to give the public a veneer of objectivity for what are often simply judgment calls, hunches, guesses. That alone puts a very high degree of natural political pressure on what is otherwise an inherently imprecise process. The Bush administration may have been particularly flagrant about "stove-piping" CIA reports during the march to war in Iraq, true, but any policymaker who gets it in his or her head to pursue a course of action will have to do the same to some degree or other, because he or she will want backing from the intelligence agencies, and they can often provide no such thing.

    Now Congress can always re-jigger the ways in which the various agencies are set up, as it recently did, and maybe that will alleviate some of the pressure, but the implicit "politicization" of intelligence can never go away completely. In a better world, policymakers would just acknowledge that intelligence is highly imperfect even in the best of times, and tell voters that ultimately, their national security decisions are primarily judgment calls, rather than obvious conclusions born of intelligence. So many charades could be dispensed with. (Including the idea that ordinary citizens aren't informed enough to form an opinion on foreign policy decisions—a fiction that should have been buried by the Iraq war.) This cultural change will never happen, of course. So the real problem is that foreign policies that put an impossibly high burden on intelligence—the Bush doctrine and preventive war come to mind—will likely fail more often than not.

    (As a side note, if only because I don't know where else to stick this, Chaim Kaufmann's "Threat Inflation and the Failure of the Marketplace of Ideas" is one of the better essays on intelligence failure during the run-up to the Iraq war; he notes, among other things, that the tendency of experts to avoid treating what they think they see as a scientific hypothesis—which would entail making predictions—was an especially ingrained failure. No amount of shuffling or re-jiggering will fix this.)

    At any rate, these are badly disjointed thoughts, sorry, but I promised to say a bit about one of the most glaring and overlooked types of intelligence failure throughout U.S. history: namely, our poor ability to predict how other countries—or other people, period—will react to our actions abroad. Robert Jervis has written a few papers on this, but to put it another way, U.S. policymakers rarely seem to be able to figure out how other countries see the world, a blind spot which, during the Cold War, was more serious than the various mistaken analyses about missile gaps or mineshaft gaps or the like. The problem is that this sort of "empathy" is very difficult to improve—trying to figure out the near-infinite set of calculations and beliefs other actors might have will always be close to impossible.

    A few examples from history: In 1950, the U.S. failed to anticipate that Stalin would sign off on North Korea's invasion of the South—at the time, Soviet strategy was remarkably opaque. More recently, in 1994, the Cedras junta in Haiti for some reason didn't take the Clinton administration's warnings to step down seriously until an invasion force was actually in the air. (Did they not think Clinton meant it? Why?) Ditto a few years later, when the Clinton administration couldn't understand why Milosevic wouldn't back down from Kosovo in the face of NATO threats. Nor did the Bush administration make any apparent attempt to understand why, in 2002, Saddam might have been acting the way he did—for instance, keeping the status of his WMDs ambiguous to fool Iran. But so long as the U.S. has a poor handle on the beliefs and calculations of other world leaders, especially its adversaries, coercive diplomacy will tend to fail. (And we haven't even touched on terrorist groups....)
    -- Brad Plumer 8:23 PM || ||

    October 27, 2005

    Holy Nanotech Batman

    Check out the list of potential new nanobiotechnology weapons under development by the U.S. military. Let's see, we've got: ultralight body armor; "artificial muscles" built of nanomaterial; nanotech sensors capable of detecting individual molecules; camouflage suits that automatically heal a wounded soldier. It's every adolescent's comic-book fantasy! Now all we need to do is start a war or two to test this stuff out…

    No, really, it's pretty stunning to see how much research money is poured into weapons. Over a third of NIAID's basic research is now in biodefense, and that's a growing share of a shrinking research budget. "Biodefense" is much like missile defense, only infinitely more lunatic, and in practice ends up creating ever more deadly biological weapons—necessary to test out the defenses, see—potentially kicking off a bioweapons arms race. Meanwhile, there's no money left for flu vaccines, or much of anything else. We are insane. Human beings are insane. But at least we'll have cool armor.
    -- Brad Plumer 5:35 PM || ||
    Assume the Worst

    In the New Republic today, Clay Risen has a smart analysis of the infamous Wal-Mart memo that surfaced yesterday—which described ways the company could reduce its health care costs while appearing to care more about its workers:
    The memo is the result of a study carried out in coordination with McKinsey, the elite consulting firm--and it shows in its fantastic grasp of [Wal-Mart's] numbers and abysmal conception of the workers who make them possible. One proposal would replace the current 401(k) program, into which the company puts a fixed percentage of the employee's wage, with a matching program, in which the company's contribution is equal to the employee's (this on top of the proposed cut in company contributions, from 4 percent to 3 percent). From a cost-savings point of view, this is a brutally efficient strategy--after all, the average Wal-Mart employee makes $17,500 a year. How many are going to set aside 3 percent of that for retirement? What's amazing, though, is that the memo's author, Susan Chambers, seems to believe that employees would actually like this reduction in benefits, because, for those who can somehow afford to take full advantage, it "would help Associates better prepare for retirement."

    Then there is the proposal to shift all employees into health-savings plans, replacing traditional insurance with tax-free bank accounts in which both employees and the company set aside money; they then use that money to pay for doctor visits, prescriptions, and so on. Again, from a coldly rational point of view, this makes certain sense: The more financial responsibility employees bear in their health-care costs, the less they are likely to spend. The problem is that, again, poorly paid employees are unlikely to make the sort of contributions necessary to cover expenses. Moreover, it's much easier for the company to quietly adjust its own contributions to employee health downward, a fact sneakily acknowledged by the memo (though instead of proposing a check it merely recommends more p.r.: "Wal-Mart will have to be sophisticated and forceful in communicating this change").
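    Just to spell out the arithmetic of that first proposal—my back-of-the-envelope numbers, using only the figures as quoted: 4 percent of a $17,500 salary is $700 a year that Wal-Mart pays in whether or not the worker saves a dime, and the proposed 3 percent is $525. Switch to a match, though, and that $525 materializes only if the worker first carves $525 out of her own paycheck; for every employee who can't afford to, the company's contribution quietly drops to zero. "Brutally efficient" indeed.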
    That's the crux of it: Wal-Mart will use some nifty gimmicks to slash its workers' health and retirement benefits and then just pretend that this counts as an improvement. Ultimately, of course, this won't work. Wal-Mart's critics have bullshit detectors like few other groups of people on the planet, and always, always, always assume the worst about the store. The company will never appease its "well-funded and well-organized" attackers until it actually starts offering substantial benefits for workers. Although, do note, Wal-Mart executives are probably paranoid that the critics want to destroy the company altogether, rather than merely improve the lives of its workers, so maybe Wal-Mart thinks that there are no steps ever worth taking—because its enemies will never be appeased. Surely it doesn't help when lunatic lefties start writing posts like "Abolish the Corporation," either.

    Alternatively, of course, Wal-Mart could solve its problems by lobbying for some sort of government-run health insurance, which would relieve the company of the burden of covering workers in the first place. It probably will end up doing this, although it won't lobby for single-payer, but rather for the GOP's plans for government-financed Health Savings Accounts, high-deductible insurance, and tax credits, along with a phase-out of the employer-health tax deduction—steps that I think would be very bad for actual people, but that would, on the other hand, let Wal-Mart and other big companies wash their hands of handling health insurance without having to pay taxes for some sort of single-payer system. We'll see.
    -- Brad Plumer 2:35 PM || ||
    Breakdown

    No larger moral lurking here, but Dexter Filkins' New York Times Magazine story on Iraq from last weekend was a really good read. Since the topic of the hour seems to be whether the occupation was doomed from the start or could have succeeded with a more competent helmsman, these four paragraphs should do the trick:
    The tough tactics employed by Sassaman's battalion had their effect. Attacks in the Sunni villages like Abu Hishma, wrapped in barbed wire, dropped sharply. And his men succeeded in retaking Samarra. Winning the long-term allegiance of the Iraqis in those areas was another matter, however. If many Iraqis in the Sunni Triangle were ever open to the American project - the Shiite cities like Balad excepted - very few of them are anymore. Majool Saadi Muhammad, 49, a tribal leader in Abu Hishma, said that he had harbored no strong feelings about the Americans when they arrived in April 2003 and was proud to have three sons serving in the new American-backed Iraqi Army. Then the raids began, and many of Abu Hishma's young men were led away in hoods and cuffs. In early 2004, he said, Sassaman led a raid on his house, kicking in the doors and leaving the place a shambles. "There is no explanation except to humiliate," Muhammad told me. "I really hate them."

    In retrospect, it is not clear what strategy, if any, would have won over Sunni towns like Samarra and Abu Hishma. Crack down, and the Iraqis grew resentful; ease up, and the insurgents came on strong. As Sassaman pointed out, the Americans poured $7 million of reconstruction money into Samarra, and even today, the town is not completely under American control.

    But there is another reason American commanders shy from using violence on civilians: the effects it has on their own men. Pittard, the American commander in Baquba, says that he was careful not to give his men too much leeway in using nonlethal force. It wasn't just that he regarded harsh tactics as self-defeating. He feared his men could get out of control. "We were not into reprisals," Pittard says. "It's a fine line. If you are not careful, your discipline will break down."

    In most of the 20th century's guerrilla wars, the armies of the countries battling the insurgents have suffered serious breakdowns in discipline. This was true of the Americans in Vietnam, the French in Algeria and the Soviets in Afghanistan. Martin van Creveld, a historian at Hebrew University of Jerusalem, says that soldiers in the dominant army often became demoralized by the frustrations of trying to defeat guerrillas. Nearly every major counterinsurgency in the 20th century failed. "The soldiers fighting the insurgents became demoralized because they were the strong fighting the weak," van Creveld says. "Everything they did seemed to be wrong. If they let the weaker army kill them, they were idiots. If they attacked the smaller army, they were seen as killers. The effect, in nearly every case, is demoralization and breakdowns of discipline."

    -- Brad Plumer 9:54 AM || ||

    October 26, 2005

    Abolish the Corporation! Er, Maybe.

    Steven Greenhouse's New York Times article today on how Wal-Mart is trying to pare down its health-care costs makes for depressing reading alongside Time's new cover story on those ever-shrinking corporate pension funds. The short story: Companies don't want to be on the hook for medical or retirement costs. Wal-Mart's first order of business is to keep its profits rising, which means insuring its employees as little as public opinion will allow. (Ensuring high turnover helps.) Meanwhile, managers across America have been raiding and overstating their company's pension holdings, while forking over millions to takeover artists and CEOs. Congress has let them get away with it by promising, dubiously, to take care of the retirees if things go badly. The abstract term for this is "moral hazard"; the more concrete term, I believe, is: "retirees recycling cans to avoid eating garbage." But fear not: as long as shareholders are happy, capitalism is working. We know because they tell us so.

    How did we get into this mess? I mean, the quick 'n' dirty answer is that in the postwar era, short-sighted union leaders bargained with employers for corporate benefits rather than stumping en masse for universal health care and super-Social Security. But why, in this day and age, are companies still forced to worry about health care and retirement funds? They shouldn't have to do it; the system only encourages Wal-Mart-esque behavior, and makes it hard for businesses to compete globally. Luckily, good liberals have an exit strategy—the government should handle health care and retirement, so that corporations can get back to what they do best: making profits. GM would no longer have to operate as a "social insurance system that sells cars to finance itself," and the business of America could be business once again. Only the state can free the market from these heavy chains, say liberals. On most days, I'd agree with this. But is all this really the best way to go, or only yet another short-sighted solution to a longstanding problem? Let's digress for a bit.

    There is, of course, no such thing as a truly "free" market, only types of markets designed by the state, and it's hard to figure out what the ideal design really is. The framers of the Constitution never predicted that the corporation as we know it would ever exist—at the time of the American Revolution, there were only franchises chartered by legislatures for public purposes. The Jacksonians, aiming to curb corruption, later revised the corporate charter and opened it up to all comers, but never intended to exempt corporations from the common law or social responsibility. It wasn't until the late 1800s that New Jersey changed all that, rewriting its charter laws to allow corporations to do whatever they damn well pleased. Soon all the major corporations were flocking to New Jersey, and states were forced to compete with each other for lax charters (thanks especially to several Supreme Court decisions protecting charters and declaring corporations "persons" entitled to full constitutional protection—including out-of-state recognition). Toss in decades of lavish federal subsidies and voila, we've got corporate America. Hence the modern "free market" that conservatives have fought so hard to protect against "state intrusion."

    True legal originalists would overturn Santa Clara County v. Southern Pacific Railroad Company—which gave corporations 14th Amendment protection—as an unconscionable act of judicial activism, though don't expect anything along these lines today (nor, necessarily, should there be). But that's just to say that ultimately nothing stops citizens from revising corporate charter law, if needed. Charters aren't sacred. That leaves the question of whether to do so. The situation we have today—in which firms like Wal-Mart and GM are obliged to cater to shareholders (in theory at least) but somehow got saddled with these other profit-draining social responsibilities, like paying for prescription drugs—is untenable. It's no surprise, then, that, as Time details, companies are raiding and shedding their pensions and getting the government to bail them out of their obligations to retirees whenever possible. It's what they're "supposed" to do.

    Again, one response is to say, "Enough, enough" and just make the government assume primary responsibility for all these profit-draining obligations. Set up basic universal healthcare and mandatory savings accounts, and let corporations offer extra benefits only insofar as they need to compete for workers. Taxpayers will foot the rest. That way, unions can stop haggling with employers over premiums and deductibles and focus instead on wages and workplace conditions. It's a far more rational and stable system than what we have now, true. But why we think it will be any more sustainable, over the long haul, than the postwar bargain struck fifty years ago is a good question. Isn't it likely that, even if we had a single-payer health care system and "mandatory savings accounts", companies would still offer extra benefits to workers, only to blow the whole thing up down the road when they decide it's no longer profitable to support increasingly long-living retirees?

    Alternatively—and you see this proposal at WTO protests or in Multinational Monitor from time to time—we could start rewriting corporate charters, drastically, and require companies to worry about this stuff. Always seems iffy, but you know. Perhaps in the end we could even hack away at a good deal of government regulation; there'd be no need for it if companies were beholden to civil authority, as was the case in the 19th century, and required to meet their social responsibilities in whatever manner they find most efficient. What would that mean for health care? I don't know. State-run health insurance would probably still be the best way to go. Fine. Perhaps charter revision would prove far more useful in other areas, like environmental conduct. (I'm not sold on the current "corporate responsibility" trend underway.) But corporations were originally designed by the people and for the people; why not have them act that way? "Ah," one will say, "but that sort of thing would never work in The Globalized Economy™; companies couldn't compete!" Or: "Fool! We tried this already; it was called 'Fascist Italy.'" Yeah, yeah. Still, the idea that companies should just do their thing while government can swoop in later and pick up the mess looks less and less appealing by the day.
    -- Brad Plumer 5:52 PM || ||

    October 25, 2005

    Against Homework

    Question of the day: Is homework even necessary? Ayelet Waldman demands answers:
    I also learned from professor Cooper -- aka the homework guru -- that there is no correlation between how much homework young children do and how well they comprehend material or perform on tests. [n.b., see also this study.] Why? … Because their attention spans are just too short -- they can't tune out external stimuli to focus on material. Second, younger children cannot tell the difference between the hard stuff and the easy stuff. They'll spend 15 minutes beating their heads against a difficult problem, and leave themselves no time to copy their spelling words. Finally, young children do not know how to self-test. They haven't the faintest idea when they're making mistakes, so in the end they don't actually learn the correct answers. It isn't until middle school and high school that the relationship between homework and school achievement becomes apparent.

    So why the hell do Zeke and I have to spend every afternoon gnashing our teeth… The reasons, Cooper says, extend beyond Zeke's achievement in this particular grade. Apparently, by slaving over homework with my son, I am expressing to him how important school is. … When younger kids are given homework, Cooper says, it can also help them understand that all environments are learning ones, not just the classroom. For example, by helping calculate the cost of items on a trip to the grocery store, they can learn about math. The problem is, none of my children's assignments have this real-world, enjoyable feel to them. My children have never been assigned Cooper's favorite reading task -- the back of the Rice Krispies box.

    The final, and perhaps most important, reason to assign homework to young children, says Cooper, is to help them develop study habits and time management skills that they'll need to succeed later on in their academic careers. If you wait until middle school to teach them these skills, they'll be behind. I suppose this makes sense. Spending their afternoons slaving over trigonometry and physics will come as no surprise to my kids. By the time they're in seventh grade they won't even remember what it's like to spend an idle afternoon.
    I guess that settles that: Everyone go out and play. Seriously. Also, let me call bullshit on Dr. Cooper and doubt very much that homework "help[s children] develop study habits and time management skills." Generalizing from a single experience here, when I was in elementary school, I remember very distinctly cutting corners on virtually all of my homework. Math problems would get scribbled frantically in pencil on paper during homeroom. (In fact, what little creativity I have owes entirely to those ingenious, sweaty-fingered minutes spent trying to make it appear as if I had thought very hard about, say, problem #23(a) but just couldn't get the answer.) The spelling workbook, I quickly discovered, didn't need to be filled out at all—if you worried about grades you could always recoup your losses by getting the "bonus" spelling words on quizzes right. "Homework" always denoted something to do as little of as physically possible. Ever since, I've always had terrible study skills, and while I blame my own laziness, all that useless homework gets part of the blame.

    But let's do Waldman one better and say it flat out: homework is most likely evil. Yes, evil. Any educational system that relies on parents at home to help with the "learning process" will only end up perpetuating inequality, as long as some parents can help their kids and some cannot; as long as some parents can speak English and some cannot. And homework, for all its uselessness, is far more likely to put undue stress on family life than anything else. Of course, let's also be honest: the whole point of public school isn't to turn students into well-educated citizens but rather to produce good consumers and dutiful worker bees—people with short attention spans who follow authority, care deeply about status, and will attend with all due diligence to humiliatingly pointless tasks. Get used to working overtime, kid, you'll need it. In that regard, homework is indispensable.
    -- Brad Plumer 5:59 PM || ||
    Innuendo

    I'm glad that, in the midst of all that praise he was heaping on Ben Bernanke, Brad DeLong took the time to let us know what he really thinks...
    -- Brad Plumer 5:03 PM || ||
    More Brains, Igor

    The New York Times has a very good piece today on the "brain drain" phenomenon among developing countries, wherein the most talented and educated workers in the Third World emigrate to the United States or Europe or other wealthy countries, leaving their home countries with very little in the way of human capital, and no way to exit the vicious cycle that caused people to leave in the first place:
    Most experts agree that the exodus of skilled workers from poor countries is a symptom of deep economic, social and political problems in their homelands and can prove particularly crippling in much needed professions in health care and education.

    Jagdish Bhagwati, an economist at Columbia University who migrated from India in the late 1960's, said immigrants were often voting with their feet when they departed from countries that were badly run and economically dysfunctional. They get their government's attention by the act of leaving….

    But some scholars are asking whether the brain drain may also fuel a vicious downward cycle of underdevelopment - and cost poor countries the feisty people with the spark and the ability to resist corruption and incompetent governance.
    Remittances back home from expatriate workers make up some of the difference—and these payments are usually spent more effectively than foreign aid—but not enough. Interestingly, the "powerhouses" of the developing world—China, India, Indonesia, Brazil—don't suffer from brain drain, with less than 5 percent of their skilled citizens living in OECD countries.

    Some suggest that OECD countries should restrict skilled immigration. One response would be that in some sense we already do; strict licensing requirements here in the United States already put up staggeringly high informal tariffs on the importation of doctors, lawyers, economists, and other professionals. Quick example: Several years ago the federal government paid New York hospitals $400 million to train fewer doctors out of concern for "oversupply"; blue-collar protectionists never had it so good. These barriers, by the way, dwarf our rather small tariffs on goods that "free traders" tend to worry so much about. But that's only part of it. On the other hand, the United States, Britain, Canada, and Australia really do actively seek out many other sorts of skilled workers from abroad, especially in more technical fields, and this seems to hurt developing countries the most.

    So what to do? Only a handful of countries have been successful in luring their émigrés back home. Bhagwati has suggested that developing countries should tax their expatriates. Creating networks among entrepreneurs might offer one solution—I know of at least one example in Latin America where the government sets up links between researchers abroad and workers at home to share knowledge. Set up something like Craigslist for really smart expatriates. Ultimately, the best thing to do would be to figure out how to get the poorest countries in the world to start growing—just as China, India, Indonesia, and Brazil have done—but the first person who figures out a foolproof way to do that will get a very nice prize indeed.
    -- Brad Plumer 11:22 AM || ||
    "It was just a day like any other day"

    From the New York Times' obituary of Rosa Parks:
    Over the years myth tended to obscure the truth about Mrs. Parks. One legend had it that she was a cleaning woman with bad feet who was too tired to drag herself to the rear of the bus. Another had it that she was a "plant" by the National Association for the Advancement of Colored People.

    The truth, as she later explained, was that she was tired of being humiliated, of having to adapt to the byzantine rules, some codified as law and others passed on as tradition, that reinforced the position of blacks as something less than full human beings.

    "She was fed up," said Elaine Steele, a longtime friend and executive director of the Rosa and Raymond Parks Institute for Self Development. "She was in her 40's. She was not a child. There comes a point where you say, 'No, I'm a full citizen, too. This is not the way I should be treated.' "
    Right. Similar "caveats" (Parks was a NAACP plant!) seem to be wending their way through the internet, and I'm still not sure what the "point" of these myths is; in truth, they don't matter very much. Yes, Parks was handpicked—agreed to be handpicked—by civil rights leaders to become the poster child for the Montgomery bus boycotts. So what? That's always made her even more of a hero, I think—to have agreed to set her life aside and stand at the forefront of a movement. From the obit: "Her act of civil disobedience, what seems a simple gesture of defiance so many years later, was in fact a dangerous, even reckless move in 1950's Alabama. In refusing to move, she risked legal sanction, even harm." To put it lightly. No amount of mythmaking can diminish that.

    As many people know, nine months before Parks refused to move, a fifteen-year-old—fifteen!—named Claudette Colvin did much the same thing on a Montgomery bus; the case she ended up filing in court along with three other women, Browder v. Gayle, eventually became the one in which the Supreme Court struck down bus segregation. Initially, the NAACP wanted to organize a boycott around Colvin's case, but backed off because they didn't think she made for a suitable enough poster child—Colvin was allegedly several months pregnant, and "prone to outbursts." Or perhaps the timing just wasn't right—mass movements are always sensitive to timing. (Baton Rouge had staged the first bus boycotts two years earlier, but that had been forgotten.) Parks, a member of the NAACP, was Colvin's mentor; she sat in on the decision about whether to boycott after the younger girl was arrested, and was eventually inspired by her example to do the same nine months later. That this was how a movement sprouted—with two women inspired by each other—is no less sweeping a story than the traditional tale of one brave person sparking a wildfire.

    Presumably those civil rights leaders were right that the nation needed to see Rosa Parks—"one of the finest citizens of Montgomery"—at the head of the boycotts rather than Colvin, who might more easily have been slimed by reactionaries who think a movement can be discredited by attacking the private lives of the people who lead it. Not much has changed in the last fifty years, in that regard. At any rate, none of this can minimize what Parks did; that wouldn't be possible.
    -- Brad Plumer 10:45 AM || ||

    October 24, 2005

    Electing to Fight

    John M. Owen IV has a Foreign Affairs review of Electing to Fight: Why Emerging Democracies Go to War, which argues exactly what the title suggests:
    According to Mansfield and Snyder, in countries that have recently started to hold free elections but that lack the proper mechanisms for accountability (institutions such as an independent judiciary, civilian control of the military, and protections for opposition parties and the press), politicians have incentives to pursue policies that make it more likely that their countries will start wars. In such places, politicians know they can mobilize support by demanding territory or other spoils from foreign countries and by nurturing grievances against outsiders. As a result, they push for extraordinarily belligerent policies.

    Even states that develop democratic institutions in the right order -- adopting the rule of law before holding elections -- are very aggressive in the early years of their transitions, although they are less so than the first group and more likely to eventually turn into full democracies.
    The historical record bears this out, it seems. Owen wonders if, on this theory, "a democratic Iraq [will be] no less bellicose" than Saddam Hussein's regime, as various factions in the near future "compete for popularity by stirring up nationalism against one or more of Iraq's neighbors." This doesn't seem so implausible—I could see an Iraqi government with a large Sadrist presence getting all up in some neighbor's face; Jordan, perhaps—but it does sort of seem like the least of Iraq's concerns right now. On the other hand, a rapid push for democratization in the Middle East—if and when it ever comes—would make this sort of chaotic outcome all the more likely. But as Josh Marshall once suggested, perhaps this was the plan all along.
    -- Brad Plumer 7:01 PM || ||
    All Hail Our New Chairman?

    This isn't really the place to come for Federal Reserve commentary, but maybe I can provide a few knee-jerk lefty complaints about the new Fed chief, Ben Bernanke. He's undoubtedly a smart guy, and all the center-left blogs like him, but this just looks like more of the same. He's a fan of "formal inflation targeting," eh? As best I can tell from his 1999 spat with James K. Galbraith, Bernanke doesn't take this to mean that the Fed should sacrifice everything else under the sun—including employment growth—at the altar of Always Low Prices, but Gerald Epstein argues here that that's what inflation targeting tends to mean in practice. That inflation-obsessed monetary theorists in the U.S. wrongly insisted that the rate of unemployment could never go below 6.5 percent during the 1980s, letting wages stagnate and poverty rise, makes Scooter Libby's high crimes and misdemeanors look rather flimsy in comparison.

    Moreover, Epstein argues, moderate rates of inflation, up to about 20 percent, "have no predictable negative consequences on the real economy," so perhaps the Fed obsession is misguided after all. As far as I can tell, no one seems to know for sure whether or not inflation would hurt the poor, but that's probably not the question to ask. Instead, let's debate: what sort of monetary policy would be better for the least well-off, and for the rest of us? Or rather: Why not have the Fed stop fretting about inflation—within limits—and instead focus on promoting full employment, investment, and GDP growth? Good question. The answer is to follow the money:
    One likely explanation is that a focus on fighting inflation and keeping it low and stable is in the interest of the rentier groups in these countries. Epstein and Power (2003) present new calculations of rentier incomes in the OECD countries supporting the view that in many countries, higher real interest rates and lower inflation increase the rentier shares of income.
    Ah, rentiers. The argument against Epstein, I take it, is that theoretically a central banker just can't use inflation to boost employment: people aren't dumb; they'll soon catch on to what the bank's doing and plan accordingly; nothing will change when inflation strikes; and soon we're on the path toward stagflation. Hence the virtues of a hawk like Greenspan—or Bernanke. In reply, the dying herd of old Keynesians might say, eh, this isn't really a concern, since the real inflationary dangers come not from full employment, which is usually a good thing, but from stagnant growth—during a slowdown, monopolistic enterprises will start raising prices to recoup their fixed costs. Certainly Big Pharma and Big Insurance have been doing just that recently, so score one for the dying herd.
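    For the record, the textbook version of that anti-inflation argument is the expectations-augmented Phillips curve—my gloss, mind you, not anything Epstein or Bernanke is quoted using here:

    $$ u = u^{*} - b\,(\pi - \pi^{e}), \qquad b > 0 $$

    where $u$ is unemployment, $u^{*}$ the "natural" rate, $\pi$ actual inflation, and $\pi^{e}$ expected inflation. The bank can hold unemployment below $u^{*}$ only while actual inflation runs ahead of what people expect; once expectations catch up and $\pi^{e} = \pi$, the gap closes, and you're back at $u^{*}$ with nothing to show for it but higher inflation.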

    I'm not even fractionally smart enough to know who's right in all of this, so I'll just leave it at that and admit that my bias is towards Epstein. His suggestion for "real targeting" makes sense on the surface, although for the Fed to be truly democratic, the whole institution itself will probably have to be rejiggered so that ordinary citizens get actual input into central bank decision-making. That obviously won't happen in my lifetime, but surely the least we can do is be bitter about it, no?
    -- Brad Plumer 6:23 PM || ||

    October 23, 2005

    Adventure!

    I was flipping through a copy of National Geographic Adventure yesterday, and figured, hey, some of this stuff is worth linking to on the ol' blog. The cover feature's about a guy who backpacked along some or all (I forget) of the Great Wall of China. Some good factoids in there: The GWoC took 1800 years to build; you can't actually see it from space; and the reason that the Mongols of old would get so ornery and conquer stuff every now and again is that Mongolia is prey to the occasional freak super-blizzard, called a zud, that would wipe out all their livestock. You'd be ornery too.

    The other good story was about how climate change has made it difficult for grizzly bears in ANWR to find food these days, so now they're out for blood... human blood! No, really, they never used to attack people, but now they do. Best part comes at the end when, shortly after being chased by a bear, a once-fuzzy-wuzzy environmentalist vows to go on a shooting spree the next time he sees one. Cool pictures, too.
    -- Brad Plumer 7:16 PM || ||
    Who's Doing the Damage?

    Oh hilarious. Here's the much-mocked Maggie Gallagher's take on the real problem with legalizing gay marriage:
    If the principle behind SSM is institutionalized in law… then people like me who think marriage is the union of husband and wife importantly related to the idea that children need moms and dads will be treated in society and at law like bigots.
    Awww… poor thing. Sign gay marriage into law and suddenly people might not be allowed to gay-bash on the radio anymore, for fear of sounding like bigots and having their broadcast licenses revoked (I really don't think she needs to worry here); and future schoolteachers will brainwash their students into thinking that the gay rights debates of yore pitted a few noble crusaders for equality against a wall of old-fashioned and mostly stodgy bigots. Liberal elites can be so cruel! Really, though, I wanted to comment on this somewhat less goofy paragraph:
    The most important fault line in the marriage debate is between a) people who think SSM will help a small number of gay couples and not affect anyone else and b) people like me who think this is going to change fundamentally the nature of marriage.
    Is that really the fault line? Neither of these propositions is testable unless we just go for it and legalize gay marriage—or, alternatively, we could just look at Europe's experience and note that Option A looks like the likely result. One could also throw in a third option—that gay marriage will change marriage, yes, but for the better. I don't see why this argument is any more implausible than the other two. Insofar as legalizing gay marriage can send out a signal that being married is preferable to not being married, it could easily strengthen the institution, which, I take it, is Andrew Sullivan's argument. That's why you have more than a few feminists on the left opposed to the whole idea, seeing as how it would bolster what they see as a patriarchal and mostly oppressive institution. And they're probably right.

    But let's also take Gallagher's fears seriously for a second. My guess is that keeping gay marriage illegal will do far more to erode marriage than anything else in the near future. Corporations and states, after all, are increasingly creating partner benefits for gay couples—it's hard to stop the states from doing this, and even harder to stop companies from doing it. (I guess you could try to pass an amendment, but that seems difficult.) And once there are benefit systems in place for gay couples, straight couples may as well sign on too, forgoing marriage. If companies increasingly extend healthcare and retirement benefits to "domestic partners," well, that's one less incentive for everyone else to get married, isn't it? I think Jonathan Rauch once warned that without gay marriage, "every unmarried gay couple"—especially those with kids—"will become a walking billboard for the joys of co-habitation." Not good for Gallagher. This seems like the greater "threat" to marriage, and unless we plan on banning all gay people everywhere from even looking at each other—and even in America this seems like a daunting task—allowing gay marriage is probably the best way to avert the inevitable "erosion" at work here.

    We can add another loop too. Just as Gallagher seems to fear, young people increasingly do seem to see the backlash against gay rights as a form of bigotry. How much respect will those kids have for an institution they see as discriminatory? Not much, one would think. This should really be what gets Gallagher nervous. Granted, it's near-impossible to test any of these arguments—I guess we can see what happens in Massachusetts and, inevitably, California in the coming years—though my gut feeling is that it would be impossible for gay couples to screw up marriage any more than straight couples have already done.

    (To be clear, in real life I think it's right to allow gay marriage even if it does somehow affect straight couples—just as it was right to end racial discrimination among employers even if the net effect was to pull down white wages—but this seems to be one of those cases where doing what's right and doing what's beneficial for the majority are actually aligned.)
    -- Brad Plumer 6:43 PM || ||

    October 21, 2005

    Balancing Act

    This is over a year old, but Stephen Brooks and William Wohlforth of Dartmouth have written a very interesting (draft) paper asking whether other countries are engaging in "soft balancing" against the United States. Prior to the Iraq war, many liberal analysts worried that too much unilateralism from America would provoke other nations—especially Europe, China, and Russia—to start banding together and counterbalancing that loud, honking hegemon across the Atlantic. U.S. conservatives, meanwhile, viewed France and Germany's opposition to the war as stemming from a desire to constrain American power. On this view, what started as "soft" balancing—a bit of stubbornness at the Security Council—would soon lead to hard opposition. As Bill O'Reilly said on the Daily Show just a few nights ago, "France is the enemy!"

    So is this true? Brooks and Wohlforth say probably not. It's hard to distinguish, granted, between explicit "balancing" and normal moves made by other countries, for reasons of their own, that just so happen to inconvenience or hurt the United States. But real "balancing" would mean that Europe and Russia and China were making moves that come about only because the United States is the pre-eminent power in the world, and they fear that—moves they wouldn't pursue otherwise.

    This probably isn't the case. Jacques Chirac and Gerhard Schroeder opposed the Iraq war partly because they genuinely thought it was a bad idea, quite rightly, and partly because opposition was popular domestically. Likewise, Russia's recent "strategic partnerships" with India and China may look menacing, but they aren't really intended to counter U.S. power in any meaningful way. (All three countries are pursuing economic modernization, and since that entails working with U.S.-controlled financial institutions, they still need to cozy up to the hegemon.) Meanwhile, Russia's recent arms sales to India and China, along with its support for Iran's nuclear program, mostly stem from its desperate need to slow the rapid decline of its defense sector, which is in a bad way. That's why Vladimir Putin can call nuclear proliferation the "main threat of the 21st century" and still fund the Bushehr reactor in Iran. He's sincere about the former, no doubt, but that reactor contract means 20,000 jobs at home.

    The EU's proposal for defense cooperation, meanwhile, is meant to complement, not counter, American military power. Again, people like Chirac may say otherwise for public consumption at home, but in reality, the EU is actually weakening its ability to balance against the United States—by foregoing investment in advanced defense technology—in order to create a rapid reaction force that can help the U.S. by dealing with Balkans-style problems. Given that the U.S. and the EU are currently working together on Iran, it's obvious that their interests are mostly aligned.

    In short, people like O'Reilly are wrong. No one's balancing against the U.S.; not yet. Though it still seems that the U.S. should avoid unilateralism when possible, because ill will makes cooperation on other issues difficult. Also, notice that France and Germany have a serious dilemma here. The more they use the language of balancing—the more they talk about "checking American power," even when they obviously intend to do no such thing—the more the U.S. will discount their specific objections to policies. Chirac and Schroeder may have had good reason to believe that Iraq was a flawed idea, but U.S. policymakers were inclined to dismiss their objections as knee-jerk anti-Americanism. That's bad. Likewise, if U.S. leaders believe that, say, France and Germany want to work through international institutions only in order to check American power, then the U.S. will be less likely to pursue multilateralism.
    -- Brad Plumer 9:36 AM || ||
    Preposterous Universe

    One of these days, I'll actually be able to wrap my head around those extra dimensions in space that string theorists always talk about. One of these days.
    The simplest way to hide extra dimensions from view is to imagine that they are "compactified"—curled up into a tiny ball (or other geometrical configuration) with an extent much smaller than what can be probed by current experimental apparatus. In the 1990s, however, a new possibility arose, as scientists came to appreciate the role of "branes" in higher-dimensional physics. A brane, generalizing the concept of a membrane, is simply an extended object: A string is a one-dimensional brane, a membrane is a two-dimensional brane, and so on, up to however many dimensions may exist. A remarkable feature of such objects is that particles may be confined to them, unable to escape into the surrounding space. We can therefore imagine that our visible world is a three-dimensional brane, embedded in a larger universe into which we simply can't reach.

    Gravity, as the curvature of spacetime itself, is the one force that is hard to confine to a brane; the extra dimensions must therefore have some feature that prevents gravity from appearing higher-dimensional. (For example, in four spatial dimensions, the gravitational force would fall off as the distance cubed, rather than the distance squared.) One possibility, proposed by Nima Arkani-Hamed, Savas Dimopoulos and Georgi ("Gia") Dvali, is that the extra dimensions curl up into a ball that is small without being too small—perhaps as large as a millimeter across in each direction. Randall, in collaboration with Raman Sundrum, showed that an extra dimension could be infinitely big, if the higher-dimensional space was appropriately "warped" (hence the title of her book).
    That's from a review of Lisa Randall's new book, Warped Passages.
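    The dimension-counting in that parenthetical follows from the higher-dimensional version of Gauss's law (my gloss, not the reviewer's): the gravitational flux through a sphere surrounding a mass is fixed, while the sphere's surface area grows as the radius raised to one less than the number of spatial dimensions d:

        \[
          F(r) \cdot S_{d-1}\, r^{\,d-1} = \text{const}
          \quad\Longrightarrow\quad
          F(r) \propto \frac{1}{r^{\,d-1}},
        \]

    where S_{d-1} is the area of the unit sphere. Setting d = 3 recovers the familiar inverse-square law, and d = 4 gives the inverse-cube falloff the review mentions.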
    -- Brad Plumer 9:05 AM || ||
    Share Your Toys!

    The other day, I wrote that it might be time to socialize drug research, or at least take a kangaroo hop in that direction. (Granted, the Pharma lobby would never let this happen, but let's keep things unrealistic for now.) In comments, serial catowner pointed out that the drug industry isn't in any way innovative these days, which I definitely agree with, and JimPortlandOR pointed to the Bayh-Dole Act of 1980 as a culprit mucking things up. Good point. So here's a suggestion.

    To recap: The Bayh-Dole Act, in essence, transferred the patents for all pharmaceutical inventions made with the help of federal research grants to the universities and small businesses where they were made. No longer would taxpayers own the research that the government had paid for; it would be in private hands from now on. Many people credit this act with spawning the multi-billion-dollar biotech industry, since in 1979, only 5 percent of government-held patents had ever been developed—because companies didn't want to risk commercializing them if they didn't own the patents. Bayh-Dole fixed that, in theory. But it also made academic institutions much less willing to share their research with other scientists; they now spend their time seeking out licenses with private businesses in order to earn millions. Out with the altar of Hermes, in with Mammon.

    Whether or not Bayh-Dole was warranted at the time—if nothing else, it helped many research universities reap windfalls—it's certainly not having a positive effect on drug innovation today. Between 2000 and 2003, the average number of "new molecular entities," or genuinely new drugs (as opposed to "me-too" drugs), dropped to eight a year, and few of them came from the major corporations. More tellingly, drug companies have often made relatively little progress on any number of important diseases in the past few decades. There's virtually nothing out there to treat MS, or Parkinson's, or Alzheimer's. Diabetes treatments have stalled. Cancer medications aren't really going anywhere. Perhaps that's just because these things are intrinsically difficult. But one theory is that, because government-funded research institutions now worry more about cashing in on their inventions, they spend more time hoarding their research, groveling for contracts, and litigating over patents than they do collaborating fruitfully with other scientists. Plus, Bayh-Dole inflates the price of drugs—drugs researched with taxpayer money.

    The thing is, as Clifton Leaf pointed out in a recent Forbes article, there's another "high-technology, university-incubated industry" that's doing perfectly fine without anything like Bayh-Dole: the computer industry. The IT industry still has patents, of course, but companies and research institutions are much more generous in licensing their technology, and inter-company sharing is much more widespread. In part because of all this sharing, computer prices keep going down, Moore's law is awesome, and innovation after innovation keeps cropping up. Meanwhile, entrepreneurs and researchers at universities don't need restrictive patent rules as incentive to innovate: Leaf points out that the "$50K Competition" at MIT, which offers a mere $50,000 in seed money for innovative business plans, "has showcased some notable winners—and losers" over the years, including Ask Jeeves and Akamai. Smart people will always find ways to bring good ideas to the market, and the more widely ideas are shared, the more they'll probably invent. No reason the pharmaceutical industry should be any different.

    Now here's the kicker: technically, the Bayh-Dole Act empowers federal agencies to ensure that new technologies—gene analysis, cell lines, research techniques—are shared as widely as possible. But the NIH has never once used this power. Bayh-Dole also technically gives the government the right to use its taxpayer-funded research royalty-free, but it has never done that either. One wonders: What the hell? Government reticence on both these measures essentially acts as a taxpayer subsidy to Pfizer and Bayer, and hinders innovation. I guess that's the point, but it sucks. I honestly don't know whether it's time to repeal Bayh-Dole altogether. As a "compromise," though, an amendment could be passed that requires the government to do those two things—and requires scientists to license their patents as widely as possible—at a minimum.
    -- Brad Plumer 9:01 AM || ||

    October 20, 2005

    Simplify, Simplify

    The consensus seems to be that the Tax Reform Commission's proposals for, uh, tax reform won't actually go anywhere, and they're mostly just ideas for "discussion" rather than things the Bush administration will actually end up backing. (With sinking poll numbers and Karl Rove potentially out of commission, it's hard to see the president finding the gumption to cap the mortgage-interest deduction, ya know?) So let's "discuss." I realize no one on the House Ways & Means Committee plans on asking me, but here's one way to take a more progressive stab at tax simplification:
    1. Find some way to slowly phase out the mortgage-interest deduction. Robert Shapiro has argued that it doesn't actually benefit home-buyers, since the deduction simply gets bid into home prices until its value is exactly offset; in essence, it acts as a taxpayer subsidy to the construction and real estate industries. Is that really worth it? Any phase-out would lower home values, though, so this step makes for thorny politics, but that's why you…

    2. Simplify and expand the family tax credit. Rep. Rahm Emanuel has proposed a simplified, refundable tax credit available to all working taxpayers with children that would replace the EITC, Child Credit, Additional Child Credit, and Child and Dependent Care Credit—cutting away about 200 pages of the tax code. This would cost an extra $200 billion over ten years, which is a lot, but doable once we get to…

    3. Earlier this year, David Cay Johnston reported on a paper by two tax experts noting that a number of investors overstate the purchase price of stocks, businesses, and real estate (thereby understating their capital gains), because they're allowed to report their gains and losses on the honor system, unlike wage-earners. Actual verification and enforcement of these reports could recoup at least $250 billion over the next decade.

    4. It also seems like a good idea, as Paul Weinstein, Jr., has suggested, to consolidate, simplify, and expand the various college subsidies into a single College Tax Credit, and the various tax-savings vehicles—IRAs, 401(k)s—into a single, transferable universal pension account. That simplifies a lot, and it's good for all involved. I realize people like Paul Krugman have argued that college isn't for everyone, but we might as well try to raise the numbers. Weinstein lists a bunch of corporate loopholes and tax deductions we could close to pay for these parts. Works for me.
    That's not so hard. Those aren't earth-shaking steps, but they're all good, liberal things to do, and they simplify the tax code quite a bit, especially for working families. I don't really see the point in repealing the alternative minimum tax (AMT), which is there to ensure that the very wealthy can't exploit loopholes and dodge taxes; if the AMT is falling on too many middle-class families, then just raise the threshold and reform it rather than eliminate it. I also don't really know how one would simplify capital gains taxation, which is obviously at the heart of any reform, but I'm sure there are decent ways to go about it. Oh yeah, and most of the Bush tax cuts are going to have to be repealed (for a start) to avoid fiscal disaster in the long run, but that's another story...
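    Just to check that the pay-fors in steps 2 and 3 hang together, here's a back-of-envelope tally (a sketch in Python, using only the ten-year figures quoted above and assuming Weinstein's loophole-closers make step 4 a wash):

        # Ten-year fiscal tally, in billions, using the figures cited above.
        family_credit = -200   # step 2: Emanuel's expanded family tax credit
        enforcement   = +250   # step 3: verifying reported capital gains
        print(f"net ten-year effect: {family_credit + enforcement:+d} billion")
        # -> net ten-year effect: +50 billion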

    Continue reading "Simplify, Simplify"
    -- Brad Plumer 2:32 PM || ||
    Is Liberal Interventionism Dead?

    Sam Rosenfeld and Matt Yglesias have a new TAP article arguing that the war in Iraq—or at least the "liberal hawk" idea that Iraq could be made into a democracy at the barrel of a gun—was always doomed to fail, and it wasn't just because Bush utterly botched it. They say that even if the war had been sold and fought exactly as the liberal hawks wanted—as a way to turn Iraq into a liberal democracy—with a different, more competent administration, it still would have failed.

    Well, agreed. The United States has never shown much interest in democracy-building, it's never been very good at it, and as I noted earlier in the week, the success of our nation-building adventures abroad has usually depended on internal factors in the occupied country, rather than the competence of our plans. That was as true of the American South in 1865 as it was of Kosovo in 1999. And sad to say, but the mere existence of a profit-seeking military-industrial complex made problems like the looting of the Iraqi treasury pretty much inevitable. There's no reason to think an invasion run by George Packer or Peter Beinart could have "remade" Iraq better than Bush did. That said, I think this part of the TAP piece sells the idea of liberal interventionism somewhat short:
    Intervening requires us to take sides and to live with the empowerment of the side we took. Tensions between Kosovar and Serb, Muslim and Croat, Sunni and Shiite are not immutable hatreds, and it’s hardly the case that such conflicts can never be resolved. But they cannot be resolved by us. Outside parties can succeed in smoothing the path for agreement, halting an ongoing genocide, or preventing an imminent one by securing autonomy for a given area. But only the actual parties to a conflict can bring it to an end. No simple application of more outside force can make conflicting parties agree in any meaningful way or conjure up social forces of liberalism, compromise, and tolerance where they don’t exist or are too weak to prevail.
    That's obviously true of the United States' military, which has classically been good primarily at smashing things, although our twenty-year-old soldiers have adapted to "mission creep" unbelievably well in Iraq. But Donald Rumsfeld wants to make the military even more focused on smashing things—as opposed to people like Thomas Barnett, who wants to see a more fully developed "SysAdmin" side—and regardless of what you want to call it, the "Jacksonian tradition" in American foreign policy has never had much interest in anything more than overwhelming bloodletting in the defense of the national interest. We're a nation ruled by speculators and powered by Southern nationalists; as such, idealistic projects abroad just aren't in the cards, except in very rare circumstances.

    But the United Nations complicates the tale somewhat, since its peacekeeping forces have actually succeeded in reconciling a large number of post-conflict nations. The post-WWII UN operation in the Congo, and post-Cold War peacekeeping forces in Namibia, El Salvador, Mozambique, Eastern Slavonia, Sierra Leone, and East Timor should all count as successes—the UN disarmed the parties, demobilized militias, held relatively free and fair elections, and put the countries on a path towards sustained civil peace. So in one sense, outside forces can "make conflicting parties agree in [a] meaningful way," and if those UN missions didn't conjure up, as TAP puts it, "social forces of liberalism, compromise, and tolerance," they at least pointed the way down that path. Those countries, save for the Congo, are all peaceful democracies today. We know it can work because it's been done.

    On the other hand, even the UN can't seem to stop a country on the brink of disintegration from falling apart, but it's hard to tell how much of that failure has come from the sheer difficulty of the task and how much from poor implementation. The original UN peacekeeping mission in Somalia obviously flopped, but it was also severely undermanned. Same with the initial UN force in Bosnia. (Could a more robust operation—say, 20,000 more troops and American commanders—have averted many of the Balkan crises later in the 1990s? Who knows?) The UN actually enforced (rather than just "kept") the peace in Eastern Slavonia and East Timor, both successfully, when it had enough troops. So I don't think I'm quite as ready to say "it's impossible," although a good deal of modesty and skepticism is absolutely crucial here. I think the United States is inherently awful at nation-building right now, yes. But that says as much about the United States and its military as it does about the inherent difficulty of peacekeeping and nation-building, and it's worth, I think, trying to disentangle the two.
    -- Brad Plumer 12:08 PM || ||

    October 19, 2005

    Bomb Training

    Time has an illuminating interview with "Abu Qaqa al-Tamimi," an Iraqi insurgent trainer, that among other things sheds light on why so many suicide bombers in the country have been foreign fighters rather than Iraqis:
    Most of the more than 30 bombers he says have passed through his hands were foreigners, or "Arabs," to use al-Tamimi's blanket term for all non-Iraqi mujahedin. Although he says more and more Iraqis are volunteering for suicide operations, insurgent groups prefer to use the foreigners. "Iraqis are fighting for their country's future, so they have something to live for," he explains. He says foreign fighters "come a long way from their countries, spending a lot of money and with high hopes. They don't want to gradually earn their entry to paradise by participating in operations against the Americans. They want martyrdom immediately." That's a valued quality sought by a handler like al-Tamimi, says counterterrorism expert Hoffman: "It's one less thing for the handler to worry about--whether the guy is going to change his mind and bolt."
    Makes sense. Meanwhile, al-Tamimi—a pseudonym, obviously—claims that he was radicalized after being tortured in Abu Ghraib by occupation forces, which may or may not be true, though he does seem to have used his prison time productively, becoming more religious and developing further terrorist contacts. (He was originally a member of Saddam's Republican Guard, although, like most Baathists, he joined up with radical Islamic networks in 2004.) Another point: as Doug Farah has noted, capturing people like al-Tamimi would probably do much more for counterinsurgency than worrying about all those "high-ranking lieutenants," since the trainers and former military men seem to have all the semi-irreplaceable skills. But then, the Republican Guard alone numbered some 175,000 before the war, and that doesn't include the Mukhabarat (100,000) and Fedayeen Saddam (~40,000), so it's not as if people like al-Tamimi are in short supply...

    (Those numbers, by the way, come from John Robb, who has argued, persuasively I think, that the United States is fighting a much bigger insurgency than we've been led to believe.)
    -- Brad Plumer 10:35 AM || ||
    Light Switches, Eh?

    At long last, the readable layman's primer on evolutionary development (or as the cool kids apparently say, "evo devo") I've been waiting for, courtesy of H. Allen Orr in the New Yorker:
    Why, then, do different creatures look so different? How do penguins and people emerge from the same genes? Evo devo's answer to this question represents its second big finding. Different animal designs reflect the use of the same old genes, but expressed at different times and in different places in the organism. As embryos, penguins might express one combination of genes in their limbs (and the result is wings), while people might express another (and the result is arms).

    The basis of this selective expression involves that part of the DNA which is noncoding. Most genes, like most light fixtures, have "switches" near them. These switches, which are made of DNA, affect only whether a gene is on in a particular cell at a particular time; they do not change the actual protein coded by a gene. One switch might specify whether a gene should be on in the pancreas and another whether it should be on in adults. What’s special about many of those tool-kit genes is that they make proteins that toggle these switches. If a tool-kit protein finds and binds to a switch, it insures, through a complex molecular choreography, that a certain gene is expressed (or, in some cases, not expressed). In effect, tool-kit proteins act like molecular fingers, reaching out and physically turning on or off the switches that sit next to genes. … If all goes well, each of the possibly trillions of cells in an animal’s body will express just the right genes: insulin in your pancreas, not in your eye.

    The real excitement about evo devo, however, has to do with its third claim. Carroll and others have taken the next, and by far the most radical, step and argue that evolution is mostly a matter of throwing these switches.
    Undeniably cool. Though when I first saw the article's subheading ("A revolution in the field of evolution?") it looked like, egad, we were in for yet another full-length treatment of intelligent design. Not so, happily. Which brings me back to the most infuriating part of the entire ID "debate": namely, that there are so many genuine controversies in the field of evolution out there, most of them actually interesting, that it seems like a shameful waste of paper to spend time dithering over a bunch of creationists who pretend not to understand how light-sensitive receptors could evolve into eyes. In not-really-related news, except insofar as I lump all things vaguely related to biology together, it's not a good idea to stop worrying and embrace the world's invasive species.
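    Back to the switches: for the programmers in the audience, here's a toy sketch of the idea (the gene names and wirings are all invented, and real regulatory networks are vastly messier). The point is just that identical genes plus different switch settings yield different body parts:

        # Toy model of evo-devo "switches": both species share the same
        # tool-kit genes; only the switches (which tissues turn a gene on)
        # differ. All names here are invented for illustration.
        def expressed(switches, tissue):
            """Return the genes switched on in a given tissue."""
            return {gene for gene, tissues in switches.items() if tissue in tissues}

        # Same genes, different wiring:
        penguin = {"limb_growth": {"forelimb"}, "feather_keratin": {"forelimb", "torso"}}
        human   = {"limb_growth": {"forelimb"}, "digit_pattern":   {"forelimb"}}

        print(expressed(penguin, "forelimb"))  # the wing-making combination
        print(expressed(human, "forelimb"))    # the arm-making combination

    Evolution, on this view, mostly edits the dictionaries, not the genes themselves.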
    -- Brad Plumer 9:58 AM || ||

    October 18, 2005

    Time to Socialize Drug Research?

    Dean Baker's post on why the U.S. government should confiscate the Tamiflu patent is all well and good (read Tyler Cowen's rejoinder too), and his rant on the evils of the pharmaceutical industry well-warranted, but the real action's all in this old paper he wrote on alternatives to our present method of financing drug innovation. Why doesn't the current patent system—which allows drug companies to sell their little pills for 300-400 percent above the competitive market price in order to recoup their "research" investment—work very well? Here:
    [T]here are very good reasons - well known to all economists -- for preferring that drugs be sold in a competitive market with the price approximating the marginal cost of production. The gap between price and marginal costs under the current system of patent supported research leads to large and rapidly growing distortions. This includes denying drugs to patients who could afford them if they were sold at their marginal cost, the distortions also include the tens of billions of dollars spent each year on promoting drugs.

    Even more serious is the incentive that monopoly pricing provides firms to conceal or misrepresent research findings. Finally, a large gap between price and marginal cost will inevitably lead to the production of unauthorized versions of patent protected drugs. While these unauthorized versions make drugs available at a lower costs to patients, their quality cannot be ensured since illegal markets are unregulated.
    All very real problems, these, and one can note that this sort of protectionism matters much, much more than the various trade barriers people get agitated about. Now obviously we can't just junk the patent system; companies need some incentive to invest in research. But surely we can think of alternatives that work better. Baker lists a couple, including Dennis Kucinich's proposal to get rid of drug patents and steer about $25 billion in taxpayer money—about what Big Pharma claims to spend on research—to government-backed research organizations, similar to the current NIH or the research universities of yore, and socialize drug research. (He lists some other, less drastic, and very clever alternatives too.) More on that in a bit, but the point here is that any financing alternative will have to achieve four main things:
    1) provide incentives for pursuing "useful" research
    2) minimize the possibility that market distortions will create incentives to pursue less useful lines of research
    3) minimize the risk that political interference will direct research spending to less useful ends
    4) minimize the incentive to suppress research findings
    Obviously it's tricky to decide what is and isn't "useful" research—who decides? the "market"? the government? the dying children lobby?—but the current patent system certainly does badly on the last three counts. Drug companies presently have greater financial incentives to gear their research toward balding, impotent, overweight suburban males than to look into, say, innovative malaria treatments for the Third World. The patent system also gives drug companies incentives to pursue "me-too" drugs and reap the monetary rewards—see Marcia Angell on this—as well as to suppress any inconvenient research findings.

    Now if the government decided to sponsor research directly, as Kucinich proposed, it could avoid many of these problems—2) and 4) especially—but, of course, there's the possibility that politicians could start mucking around with where the research dollars go. Think the reigning First Church of Dennis Hastert would approve one cent for developing new contraceptives? Me neither. And under Kucinich's plan, private research companies could use the legalized graft system in this country to win contracts unduly. On the other hand, this problem already exists to some extent—current research at the NIH is subject to political pressures, and since drug companies often depend on government Medicare purchases to profit from their patents, innovation already hinges on lobbying anyway.

    So... What Is to Be Done? In my opinion, the pharmaceutical industry as it stands still does good work, and I don't think full-blown socialism is called for just yet. No, I much prefer creeping socialism. Right now most government research money goes towards basic research, rather than the development and testing of new drugs. Why not steer a couple billion this way, as a test to see if the government can do drug innovation on its own? In the meantime, draconian regulation could crack down on some of the worst excesses of the current system: i.e., force drug companies to open their books; regulate advertising; free the FDA from Big Pharma's tentacles; make the approval of new drugs contingent on improvements over existing drugs (right now, new drugs merely need to be better than placebos to be approved). We can be reasonable people.
    -- Brad Plumer 3:49 PM || ||
    More on Prop. 77

    According to the Oakland Tribune, a new UC Berkeley study says that Proposition 77—Arnold Schwarzenegger's redistricting initiative for California—would "create some closer races... but would not shift decisive power to the Republicans." It would also boost the number of competitive districts from 30 back up to 50. Well, that sounds like it could alleviate my earlier fears that the initiative would corral Democratic voters into a handful of urban bantustans. So I'd like to read the study, but oddly, "the release of the IGS redistricting study has been delayed, pending further research." Huh?
    -- Brad Plumer 2:15 PM || ||
    Would the GOP Dare Overturn Roe?

    The big news on Harriet Miers today... she's an abortion opponent! Seemed fairly predictable, but okay:
    President Bush's Supreme Court nominee, Harriet E. Miers, pledged support in 1989 for a constitutional amendment that would ban abortions except when necessary to save the life of the woman.
    We still don't know for sure, granted, whether or not Miers would actually vote to overturn Roe v. Wade if she got a chance—and it's worth noting that even if she did, the Court would still have a five-vote Roe majority (Ginsburg, Stevens, Souter, Breyer, Kennedy)—but it's as good an indication as we'll get. Really, we can only guess. What I do want to question, though, is the prevailing view among some liberals that George W. Bush would never want to see Roe overturned, on the theory that it would mean the electoral death of the Republican Party. Is this really so certain? It's true that a substantial majority of Americans supports abortion rights, but there's reason to think that the Republican leadership would still try to overturn Roe, and risk the backlash, if they got the chance.

    Will Saletan formulated one version of the "Republicans fear overturning Roe" thesis in this Slate piece, noting that in 1989, when Pa Bush was president, and it looked like the Supreme Court might overturn Roe, the political backlash was colossal: Voters reportedly made it an issue; pro-life politicians lost their jobs; many pro-lifers backed away. That's the claim, anyway. Saletan suggests that the younger Bush has learned his lesson and would never touch Roe. Alternatively, some pundits have suggested that if Roe was overturned, the Christian right would finally be satisfied, pack up their protest signs and go home, and thus dampen voter turnout for the GOP. Pro-choice activists, meanwhile, would be newly fired up (and popular), and the Democrats would benefit.

    Historical analysis bears this out to some extent, although these things are always fuzzy. Roe v. Wade helped the Republican Party, no doubt, by spurring religious groups, notably the Christian Coalition and Pat Robertson's 700 Club, to abandon their longstanding quietist stance and finally start getting involved in politics. It also, maybe, spelled the beginning of the end of the Democratic Party's reputation as the party of "moral values" among the electorate. As William Galston and Elaine Kamarck recently pointed out, the Democrats have historically polled much higher than the Republicans on "traditional family values" questions, but that lead started declining in or around 1973. Did Roe cause that? Hard to say—surely not singlehandedly (Dems were still doing well on this question in the mid-80s), but perhaps in part.

    Nevertheless, there's no going back to 1973, even if Roe was overturned. The Christian Right and other social conservatives wouldn't dismantle the vast political operation they have in place; instead they'd stay focused on passing bans at the state and national level. (The usual estimate is that 30 states would ban abortions if Roe was overturned, although this poll suggests that only about 10-15 states have pro-life majorities, so there's lots to crusade against here.) Once Roe's gone, the Ralph Reeds and Pat Robertsons of the future won't suddenly wash their hands of the GOP and decide that they no longer need to rile up the faithful for political ends—the money's too good and the power too marvelous. No, the conservative base will stay perfectly active. Meanwhile, pro-choice activists in California, Illinois, Massachusetts, New York, and other blue states might actually be de-mobilized, since abortion would in theory be safe for them and their friends, and fighting for abortion rights at the state level is much more daunting than fighting to uphold Roe. This might not happen, but it's not that outlandish either. Perhaps this is a risk someone like Grover Norquist would just as soon not take, especially since conservatives are doing just fine as it is, but there seem to be enough hardliners on the Republican side of the aisle willing to take the plunge.

    Personally, I think all of this would be horrific, which is why I believe Roe shouldn't be overturned. But the point is that it's not impossible to think the conservative movement would survive the backlash—they've already made abortion a near-impossibility for a large swath of the country, and haven't suffered for it yet—and those who believe the GOP would never dare overturn Roe are making a pretty daunting leap of faith, it seems.
    -- Brad Plumer 1:19 PM || ||

    October 17, 2005

    One-Inch Punch

    For some mysterious reason, Echidne has a fantastic tribute to Bruce Lee up on her site, and mentions the famous "One-Inch Punch," which would send people flying backwards with little to no windup. (See the video!) Now I had read elsewhere that even Bruce Lee never harnessed the full power of the One-Inch Punch, and here's a random and obviously reliable website that seems to confirm that:
    The One Inch Punch, which was made world famous by Bruce Lee, is in truth an ancient technique in Weng Shun Kuen. Bruce Lee, who was never tutored in this technique, learned it by spying on senior students. Because he never got to learning the second form, "Chum Kiu", that trains the footwork that is needed to perform the One Inch Punch correctly, he never got that part right. In fact, the One Inch Punch can easily be learned, if one understands the principle behind it.
    Principle shminciple. How does it work? This part actually seems like a decent explanation:
    We all know what it feels like when we have to sneeze uncontrollably. Or when we are startled so bad that our hair in our necks stand on end and you feel this "electrical" tingling in our spine. Now THAT is natural Fa Jing! It has got something to do with that same uncontrollable force that’s unleashed while you’re sneezing. You can’t keep your eyes open, no matter how hard you try. That is how close I can can come to describing the essence of Fa Jing for you.
    Ah! Speaking for myself, I never really got beyond yellow belt in my elementary-school karate class, so this is probably out of my league, though I did master a fearsome Tiger Claw. Or maybe it was an Eagle Claw. Nevertheless, fearsome. Also, this, from Echidne's comments, is very funny.
    -- Brad Plumer 4:10 PM || ||
    Life Tenure for Judges?

    Ronald Brownstein has a piece in the Los Angeles Times today noting that "[some] prominent legal thinkers from left and right" want to end life tenure for Supreme Court justices. Each time I hear this suggestion, it sounds an awful lot like a solution in dire search of an actual problem, but since it keeps popping up, let's take a look:
    Fewer vacancies mean more conflict over those that occur because neither side can be certain when it will receive another chance to change the court.

    Longer tenure also raises the stakes in each confirmation by multiplying the effect of each nominee. The common assumption during the recent confirmation debate over new Chief Justice John G. Roberts Jr. was that he would serve at least 30 years.
    Kevin Drum says the "benefits almost certainly outweigh the drawbacks," but I'm not so sure, although admittedly it's all a bunch of guesswork trying to predict what would happen. The way I see it, giving Supreme Court justices, say, fixed 18-year terms (which would mean that each president is guaranteed two nominees per four-year term) would lead to a lot more instability in the law, as a two-term president could completely remake the Court. Moreover, since justices would serve for a shorter period of time, the Senate would have less incentive to oppose overly radical nominees, meaning that we'd potentially see greater ideological polarization, and hence greater swings left and right. (There would also be the sense that the president "earned" those two picks by winning the election, and thus should get greater deference... basically, I see a lot less "advise and consent" under this system.) Obviously I'd be thrilled if the Court swung somewhere to the left of Earl Warren, but preferences aside, continual instability of this sort doesn't really benefit anyone.
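    The parenthetical arithmetic is worth spelling out: nine seats on staggered 18-year terms means one seat expires every two years, so every four-year presidency gets exactly two picks. A throwaway sketch (the seat layout is mine, purely to verify the staggering):

        # Nine seats, 18-year terms, staggered so one term ends every
        # 18 / 9 = 2 years; count expirations per four-year presidency.
        TERM, SEATS = 18, 9
        expirations = [start + TERM * cycle
                       for start in range(0, TERM, TERM // SEATS)
                       for cycle in range(3)]
        picks = [sum(p <= e < p + 4 for e in expirations) for p in range(0, 36, 4)]
        print(picks)  # -> [2, 2, 2, 2, 2, 2, 2, 2, 2]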

    Is it "fairer" to give each president two guaranteed picks to the Court? Maybe, but then you run into problems like the fact that people vote for president for lots of reasons, often with the Supreme Court far from mind. Now perhaps if every president was guaranteed two picks, each presidential election would become "about" those two potential nominees to a much greater extent. Good or bad? I don't know. It might breed a greater degree of cronyism, and it could leave those two judicial nominees showing undue deference to the president who appointed them (since their fortunes would be more directly tied to the president's). Now I'm not absolutely convinced that the Supreme Court should be "insulated" from the democratic process, as it currently is, but some of this makes me nervous.

    Perhaps even more seriously, I'm not thrilled with the idea of having a bunch of justices who have to worry about what they'll do after they step down from the Court. The revolving door between Congress and K Street—where legislators retire and pick up some plum lobbying job—has produced more corruption and impropriety than is really healthy in a legislature, and there's no reason to believe that a high court operating in a similar manner would avoid producing its share of Billy Tauzins.

    Now maybe we could find clever ways to solve these problems and others that might arise (or maybe they'd never materialize at all), but that leads us back to the deeper issue here: Why bother? Ultimately, if we wanted to end life tenure on the Supreme Court, it would have to be done with a constitutional amendment. If the experiment goes badly, it's unlikely that we can have a do-over, or constantly tweak the rules. Which brings us back to the question: Why are we itching to muck up the current rules in the first place? Granted, there might be a few problems with the existing system—perhaps life tenure can lead to the odd senile Justice sitting on the Court—but just because the horse has a few fleas is no reason to shoot it, I think.
    -- Brad Plumer 1:53 PM || ||
    Poverty and War

    Here's a bleak statistic: Over the past twenty years, the median per capita growth of the poorest countries was zero. That's from Branko Milanovic's paper, "Why Did the Poorest Countries Fail to Catch Up?" His explanation: war. Conflict may be on the wane everywhere else in the world, but poor countries are much more likely to get involved in wars and civil strife. This alone accounts for an income loss of about 40 percent. If so, then all the debates about free trade and good governance and foreign aid, while important at the margins, miss the larger trend here. Conflict prevention in the Third World would do more for global poverty than any other single measure.
    -- Brad Plumer 12:50 PM || ||

    October 16, 2005

    Schwarzenegger and Redistricting

    Kos plans on voting for Arnold Schwarzenegger's "redistricting reform" ballot initiative, while Nancy Pelosi's making it her "top priority" to defeat the measure. On the face of things, I think Pelosi's right and Kos wrong. Yes, redistricting reform makes sense if it can stop legislators from gerrymandering themselves into permanently safe seats, but there are good ways and bad ways to enact reform, and Schwarzenegger's proposal seems like one of the bad ways, judging by the initiative's proposed guidelines for drawing up districts:
  • Judges must maximize the number of whole counties in each district, and minimize the number of multi-district counties.

  • Judges must maximize the number of whole cities in each district, and minimize the number of multi-district cities.

  • Districts must be as compact as practicable. To the extent practicable, a contiguous area of population shall not be bypassed to incorporate an area of population more distant.
    This looks dubious. Under the second guideline there, the judges drawing the boundaries could end up packing the majority of urban voters into a few concentrated, ultra-Democratic districts. (The first guideline might, equally, pack Republicans into conservative "counties," but I can't tell without data, and am guessing this would be a smaller effect.) Schwarzenegger's plan wouldn't necessarily lead to more competitive districts either, as is widely hoped. Since "[j]udges must maximize the number of whole cities in each district," you'd have a handful of ultra-safe single-city seats that would vote overwhelmingly Democratic. If you wanted more electoral competition, then you'd try to create a bunch of districts that, say, combined parts of "blue" urban areas with parts of "red" suburbs. Schwarzenegger's plan does the exact opposite.

    Now his plan would give representatives more "natural" regions to represent (i.e., it makes sense to represent a whole city rather than parts of two different regions), but that's a different goal from either a) ensuring competitiveness or b) making sure that voters have anything like proportional representation in Congress, and should be sold as such. Plus it looks for all the world like a naked, calculated power grab, rather than a solid reform that just happens to hurt the Democrats. (I'd happily support the latter; not so much the former.)

    Moreover, the initiative's requirement that districts be "as compact as practicable" doesn't necessarily make sense. Pundits love to bemoan the fact that many congressional districts are long and squiggly and funny-looking, but sometimes long and squiggly districts are more appropriate than compact, block-like ones. Geography is funny, and people arrange themselves in all sorts of funny ways. There's no reason why districts shouldn't reflect this fact. Again, it depends what you want. Let's say you had a state with two small Democratic enclaves on either end—together comprising a fourth of the population—and a large Republican middle section. If you divvied the state up into four blockish districts, then you'd likely get four Republican representatives—two of them semi-competitive—whereas if you created a funny-looking district that encompassed the two pockets with a long corridor in between, then you'd get three Republican districts and one Democratic one, as the state's population might warrant. But those wouldn't be competitive!
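    To make that toy state concrete, here's a sketch comparing the two maps; the vote shares (enclaves 75 percent Democratic, the middle 20 percent) are invented to match the setup above:

        # The toy state from the paragraph above: two Democratic enclaves
        # (together a fourth of the population) and a Republican middle.
        # Each district is a list of (population_share, dem_share) chunks.
        def dem_seats(plan):
            """Count districts whose population-weighted Dem share tops 50%."""
            return sum(1 for d in plan
                       if sum(p * s for p, s in d) > 0.5 * sum(p for p, _ in d))

        ENCLAVE, MIDDLE = (0.125, 0.75), (0.125, 0.20)

        # Four blockish districts: each enclave gets diluted by red suburbs.
        compact  = [[ENCLAVE, MIDDLE], [MIDDLE, MIDDLE],
                    [MIDDLE, MIDDLE], [ENCLAVE, MIDDLE]]
        # One squiggly corridor district unites the two enclaves.
        corridor = [[ENCLAVE, ENCLAVE], [MIDDLE, MIDDLE],
                    [MIDDLE, MIDDLE], [MIDDLE, MIDDLE]]

        print(dem_seats(compact))   # -> 0: four GOP seats, two of them close-ish
        print(dem_seats(corridor))  # -> 1: one safe Democratic seat, three GOP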

    Indeed, Iowa has sensible, block-like districts, drawn by computer, but it also has only one Democratic district out of five in the House, despite the fact that 49 percent of Iowans voted for Kerry in 2004. Ultimately, if you wanted to make the representation "fairer" for Democrats in Iowa, you might have to draw some funny looking districts that connected disparate "blue" parts of the state. Indeed, some political scientists believe that a focus on compactness will always hurt the party that relies heavily on the urban vote. But then elections might become less competitive! So it depends on what your goals are. In the abstract, if you think that a state with X percent of its population voting for a given party should have X percent seats in the House hailing from that party, then compactness won't always help.

    Basically, it's not at all easy to figure out the best way to do redistricting reform, because reformers aren't always clear on what exactly they hope to achieve. Do they want more competitive districts, or do they want representation that approximates the popular vote in a state? (These aren't always compatible goals.) Or do they want something else? At any rate, there are smart and not-so-smart ways of achieving each of these goals, but there's no reason to support "reform" in the abstract, especially if the goals are unclear, the plan seems poorly designed, and it looks strongly like a partisan power grab. And that's exactly what Schwarzenegger's plan looks like.

    MORE: As several commenters point out, the best way to have competitive and proportional elections would be to turn California into a single district and elect at-large candidates (via party lists, or the single-transferable vote, or what have you). I agree completely, although this seems difficult to pull off in practice, since U.S. voters seem to enjoy having "local" representatives.

    EVEN MORE: See Mark Kleiman for some numbers (down at the bottom).
    -- Brad Plumer 5:19 PM || ||