Megan McArdle

Megan McArdle is the business and economics editor for The Atlantic. She has worked at three start-ups, a consulting firm, an investment bank, a disaster recovery firm at Ground Zero, and The Economist.

Megan McArdle was born and raised on the Upper West Side of Manhattan, and yes, she does enjoy her lattes, as well as the occasional extra dry skim milk cappuccino. Her checkered work history includes three start-ups, four years as a technology project manager for a boutique consulting firm, a summer as an associate at an investment bank, and a year spent as sort of an executive copy girl for one of the disaster recovery firms at Ground Zero . . . all before the age of 30.

While working at Ground Zero, she started Live from the WTC, a blog focused on economics, business, and cooking. She may or may not have been the first major economics blogger, depending on whether we are allowed to throw outlying variables such as Brad DeLong out of the set. From there it was but a few steps down the slippery slope to freelance journalism. For the past four years she has worked in various capacities for The Economist, where she wrote about economics and oversaw the founding of Free Exchange, the magazine's economics blog. She has also maintained her own blog, Asymmetrical Information, which moved to the Atlantic Monthly, along with its owner, in August 2007.

Megan holds a bachelor's degree in English literature from the University of Pennsylvania, and an MBA from the University of Chicago. After a lifetime as a New Yorker, she now resides in northwest Washington DC, where she is still trying to figure out what one does with an apartment larger than 400 square feet.

U.S. Budget Deficit to Pass $1.5 Trillion This Year

"Grim" doesn't seem to be a terrifying enough word to describe the budget outlook that the CBO released today.  Oh, sure, we sort of knew this was coming--tax cuts are expensive if you don't find spending cuts to match.  And yet the numbers still hit one like a punch to the gut.  From a guy wearing brass knuckles.  Wrapped around a roll of  quarters.  Shiny new quarters that you can't really afford to use for punching people, because you've got a $1.5 trillion budget deficit this year.

What is there to say?  This has got to stop?  At this point, saying so feels sort of Job-like.  Specifically, Job 14:  "Man born of woman is of few days, and full of trouble.  He cometh forth like a flower, and is cut down: he fleeth also as a shadow, and continueth not." It's absolutely true, of course, but it's kind of a downer.  And no matter how often you say it, you know you're still going to die.

It seems clearer and clearer that short of a near-death experience, no one is going to do anything about this problem. Our president spent over 5,000 words last night kind of noting, offhand, that we might have a problem, and then studiously avoiding proposing any serious solutions.  Paul Ryan's response emphasized the problem, but not the ugly solutions: raise taxes, cut entitlements.  And Michele Bachmann . . . well, what do you get when you cross a motivational speaker with an eighth grade social studies teacher?  I'll tell you what you don't get: any serious proposals to fix our budget woes.

The market is fully prepared to serve as judge, jury and executioner if we don't straighten up soon.  We don't have to fix the budget right now--but we do have to develop a credible plan that both sides can actually commit to.  Unfortunately, right now the only serious plan anyone seems to have is to put off making decisions, and hope that you'll be out of office when the day of judgement finally arrives.

Cato Responds to the SOTU

Libertarians hate every State of the Union address, and this one is no exception:


Me-Too Drugs: Herd Animals, Not Copycats

I've never really understood the objections to "me-too" drugs.  Somehow, the topic of health care makes otherwise sensible people forget everything they ever knew about economics and start spouting Victorian-era Socialist rhetoric about wasteful competition and superfluous duplication.  These same people would think you were crazy if you started ranting about how many societal resources are wasted by having three kinds of unsalted butter available in the supermarket.  And yet, this is the same argument.


Nonetheless, it does seem to bother a lot of people that we have more than one SSRI or anti-platelet drug on the market.  In their telling, companies barely bother to do research any more; they mostly just wait until someone else discovers a drug, and then they generate a cheap knockoff, like those guys on the street corner in Chinatown.

Derek Lowe points out that this view of how "Me-Too" drugs come about is not very accurate:


We've talked about this here before, but now we can put some numbers on the topic, thanks to this article in Nature Reviews Drug Discovery. The authors have covered a lot of ground, looking at first-in-class drugs approved from the early 1960s up to 2003, with later entrants in the same areas accepted up to 2007. There are 94 of those different therapeutic classes over that period, with a total of 287 follow-on drugs coming after the pioneer compounds in each. So there you have it - case closed, eh?

Not so fast. Look at the timing. For one thing, over that nearly 50-year period, the time it takes for a second entry into a therapeutic area has declined steeply. Back in the 1970s, it took over nine years (on average) for another drug to come in and compete, but that's gone down to 1.7 years. (The same sort of speed-up has taken place for third and later entries as well). Here's what that implies:


Implicit in some of the criticism of the development of me-too drugs has been the assumption that their development occurs following the demonstration of clinical and commercial success by the first-in-class drug. However, given assessments of the length of time that is typically required for drug development -- estimated at between 10 to 15 years -- the data on the timing of entry of follow-on drugs in a particular class, in this study and in our previous study, suggest that much of the development of what turn out to be follow-on drugs must occur before the approval of the breakthrough drug.
That it does, and the overlap has been increasing. I've been in the drug industry since 1989, and for every drug class that's been introduced during my career, at least one of the eventual follow-on drugs has already been synthesized before the first one's been approved by the FDA. In fact, since the early 1990s, it's been the case 90% of the time that a second drug has already filed to go into clinical trials before the first one has been approved, and 64% of the time another compound has, in fact, already started Phase III testing. Patent filings tell the story even more graphically, as is often the case in this industry. For new drug classes approved since the 1970s, 90% have had at least one of the eventual follow-on drugs showing its first worldwide patent filing before the first-in-class compound was approved.

So the mental picture you'd get from some quarters, of drug companies sitting around and thinking "Hmmm. . .that's a big seller. Let's hang a methyl off it now that those guys have done the work and rake in the cash" is. . .inaccurate. As this paper shows (and as has been the case in my own experience), what happens is that a new therapeutic idea becomes possible or plausible, and everyone takes off at roughly the same time. At most, the later entrants jump in when they've heard that Company X is working in the same area, but that's a long time before Company X's drug (or anyone's) has shown that it's going to really work.

If you wait that long, you'd be better off waiting even longer to see what shortcomings the first drug has out in the real marketplace, and seeing if you can overcome them. Otherwise, you're too late to go in blind (like the first wave does). And blind it is - I can't count the number of times I've been working on a project where we know that some other company is in the same area, and wondering just how good their compound is versus ours. If you know what the structure is (and you don't always), then you'll make it yourself and check your lead structure out head-to-head in all the preclinical models you care about. But when it comes to the clinical trials, well, you just have to hold your breath and cross your fingers.

One thing that never seems to occur to the denigrators of "Me-Too's" is that in some ways, they're riskier than novel drugs. Of course, you know the pathway can be targeted.  But once there's a good treatment on the market, the FDA's approval bar goes up for successive drugs. Treat cancer, and you can get by with a lot of ugly side effects and not so good efficacy.  But if you want to introduce another high-blood pressure drug, you'd better show that it's superior to existing treatments in some way, and that its side effects are pretty minimal.

Beyond that, looking at the relevant timeframes, I've never understood how the narrative of pharma companies simply copying each other's work with me-too drugs got established.  Development, as Lowe notes, takes a decade or more, while the much-derided me-toos frequently all descend on the market in the space of a few years.  Anyone who can count should have known that simple copying is not a plausible explanation.
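The timing point reduces to simple arithmetic.  A back-of-the-envelope sketch, using 12 years as an assumed midpoint of the 10-to-15-year development estimate quoted above:

```python
# Timeline for a follow-on ("me-too") drug, in years relative to the
# first-in-class drug's approval (year 0), using the figures quoted above.
DEV_TIME = 12.0        # assumed midpoint of the 10-15 year development estimate
GAP_TO_MARKET = 1.7    # average lag of the second entrant in recent decades

follow_on_approval = GAP_TO_MARKET               # approved at year +1.7
follow_on_start = follow_on_approval - DEV_TIME  # work began at year -10.3

print(f"Follow-on development must begin about {-follow_on_start:.1f} years "
      f"before the first-in-class drug is approved.")
```

That is roughly a decade of overlap: the "copycat" was already in the lab long before there was anything to copy.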


Comment of the Day

From iamafarmer:


Germany-"We'll export our way out of this!"
Australia-"We'll ramp up exports!"
France-"Our exports will see us through!"
SoKorea-"Our Export-based model will be our bulwark!"
Japan-"Thankfully, we can rely on exports"
and now, finally, we have the last guy to the party
USA-"We'll increase our exports in the next 5 years"

Either SOMEONE is throwing a hail mary here or Martian and Venusian Aggregate Demand have gone parabolic in the last quarter.


"We'll all become export nations!" is the new "We'll all provide value-added service!"

The President as Micromanager

While watching the speech, I tweeted that "Obama sounds remarkably similar to the CEOs I used to listen to on earnings calls: the ones with mediocre EPS and a failing business model."  This wasn't a crack at Obama, or Democrats; it was a reaction to the content.  And after watching the responses, the impression lingers--indeed, maybe it's strengthened.

The nation is facing some really difficult problems, particularly on the fiscal front.  There's no longer any way to put it off; pretty soon, the government is going to have to start making some very hard choices about taxes and spending.  No matter what it chooses, that probably means lower economic growth, angry voters, and some real loss on the part of whoever's ox is gored.

Listening to earnings calls means listening to quite a few CEOs in analogous situations.  Often, the situations they are in are largely not of their making, or indeed anyone's fault at all.  But they are expected to fix them.  And too often, they can't, at least not yet.  Think of Rick Wagoner, and the other managers at GM who knew they were on the road to disaster, but couldn't exit without the consent of stakeholders who weren't quite ready to believe it was necessary.

Faced with that situation, what does the CEO say?  He puts the best face on things.  I once listened to the head of a biotech company which was burning cash every quarter, had no good research prospects in the pipeline, and had already capitalized (i.e. sold) the income streams from its existing intellectual property.  Even though this was patently insane, he spent quite a lot of time detailing his plans for the future of the company. What was he supposed to say?  "Sell my stock now, guys!"

Everyone on the call knew that the future of the company lay in bankruptcy court or a fire-sale liquidation, but bizarrely, they sort of went along with it.  Of course, for obvious reasons, there weren't really a lot of dedicated analysts dialing in.

The government's situation is not quite that bad.  But it's pretty bad.  The underlying economy is, I think, ultimately fine, but the structural problems with the government's finances are driving it rapidly towards an unpleasant denouement.  Like the CEO of a stuck company, however, the president can't just say that.  Stating the obvious would make things worse, as customers and creditors decide that the end really is nigh, and it's time to get out while they still can.

So what do those CEOs do?  They spend a lot of time talking about their company's proud history, even if that history only stretches back a few years. They lavish extravagant praise on their awesome, dedicated workforce.  And they deftly avoid talking about the big problems, for which they have no solutions, by talking about strategic areas for potential growth ("green jobs"), and going over a laundry list of new initiatives that do nothing to solve any of the core problems.  When they are forced to talk about the core problems--and if the company is big enough to attract analyst coverage, the analysts will rudely draw attention to the problematic areas on the financial statements during the Q&A--they respond in vague generalities that restate the problem as if doing so constituted a solution:

To put us on solid ground, we should also find a bipartisan solution to strengthen Social Security for future generations. And we must do it without putting at risk current retirees, the most vulnerable, or people with disabilities; without slashing benefits for future generations; and without subjecting Americans' guaranteed retirement income to the whims of the stock market.
The absolute favorite tactic, however, is the management reorganization.  You may be in a saturated market where your second-rate franchisees are slowly destroying your brand, making it impossible to attract higher-quality franchisees . . . but that's nothing that can't be fixed by creating a new Chief Strategy Officer under the CEO, and giving that officer oversight of marketing, research, and HR.  Perhaps a much larger competitor whose cost structure allows them to undercut your prices by 32% has entered your niche, but can they really withstand the fearsome might of your ISO 9000 certification and your new cross-functional product teams?  The government regulators who just outlawed your three top-selling products and made two-thirds of your capital plant obsolete may be powerful--but not as powerful as your revolutionary sales force compensation scheme!

You can't blame them for the dodges, but the dodges are a warning sign.  Not that the CEO is a bad CEO, but that he is in a bad situation he can't fix.

It's not that Obama doesn't know how to fix the problems; I think that like most people in Washington, he understands the broad parameters within which the fixes will be carried out.  But he can't make Congress do it before there's an actual crisis.  And saying all of this is all too likely to trigger the crisis--a crisis he'd much rather have happen during someone else's presidency.  So he tells us what we want to hear: that we need to find a way to fix Social Security without, y'know, changing it in any way.  And will you look at those green jobs!  I think we're going to have a bumper crop!

The reason he does this, of course, is that like the analysts on all of those calls, we let him.  Indeed, we actively, even eagerly, participate in the denial.  After all, if we knew how to fix the company, we'd be CEOs, not sitting on the couch kvetching about their nonsense.


Thumbnail image credit: Pete Souza/White House

The Value of Health Care Experiments

Atul Gawande has a lengthy piece in the latest New Yorker on attempts to control medical expenditures by targeting the costliest, at-risk patients.  Unfortunately, it's not out from behind the paywall, but I recommend buying the magazine--it's a sobering, but often inspiring, read.

Perhaps it mirrors our general outlook on health care that for Ezra Klein, the article seems to be a beacon of hope, while for me, it's ultimately pretty depressing.  Ezra sees these as the beginnings of the sort of experimentation that is going to allow us to figure out what works, and thereby control health care costs.  I see them as admirable local efforts that are unlikely to go anywhere.

The history of social science--very much including public health studies--is littered with exciting programs that promised both to significantly improve the lives of the targeted populations and to save money.  Yet you will notice that spending on things like health care and education is still going up, while the major reforms that have succeeded in either changing lives or controlling costs have been extraordinarily blunt: things like the EITC, where we just give poor people money; or welfare reform, where we stop doing so.

Why don't we have more revolutions in human affairs?  For starters, because these revolutionary studies are usually working with a pretty small number of patients.  This means that there's going to be a lot of variance--some will, by chance, show good results; some will, by chance, seem like disasters.  The programs with "good results" will survive and get written up by social science journals and people like Atul Gawande; the programs that end up costing money will collapse and disappear into a welter of administrative embarrassment.  Note that I don't say that this is what has happened in the case of these particular programs.  The problem is, with small programs like this, it always has to be at the back of your mind.  That's one of the major reasons why promising pilot programs are so rarely replicated successfully.
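The selection effect described above is easy to simulate.  A minimal sketch--the noise level, group size, and "write-up" threshold are all made-up illustrative numbers, not taken from any real study--shows that even when every program has zero true effect, the survivors look like successes:

```python
import random
import statistics

random.seed(0)  # deterministic, for illustration only

# Suppose many small pilot programs are run, and every one has ZERO
# true effect.  Each pilot measures the mean improvement across a small
# group of patients, so chance alone produces winners and losers.
TRUE_EFFECT = 0.0
NOISE_SD = 1.0      # patient-to-patient variation
N_PATIENTS = 25     # a "pretty small number of patients"
N_PROGRAMS = 200

def run_pilot():
    outcomes = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_PATIENTS)]
    return statistics.mean(outcomes)

results = [run_pilot() for _ in range(N_PROGRAMS)]

# Only pilots that happen to look good get written up; the rest quietly
# collapse and disappear.
published = [r for r in results if r > 0.1]

print(f"mean effect, all pilots:       {statistics.mean(results):+.3f}")
print(f"mean effect, published pilots: {statistics.mean(published):+.3f}")
```

The first number hovers near zero, as it should; the second is reliably positive, purely because of which pilots we chose to look at.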

But not the only reason. Even the programs that genuinely work have a lot of things going for them that a broader program won't.  They have a crack team of highly educated experts who are extremely excited about the program, and understand the ideas behind it backwards and forwards.  They work in a controlled environment, and usually have a decent amount of administrative support for their efforts.  They are time limited, which matters--people are willing to endure lots of things for a limited, known duration that they wouldn't do permanently. They are often offering bonuses for participation.

Then they get implemented in the real world, with ordinary people who don't particularly want to change the way they've always done things, don't really care about the noble ideas behind your program, and don't see any end to it.  And the effects disappear.  

Ezra writes "We don't really know if his success can be replicated. But somebody's can be."  I'm not really so sure. These aren't medical problems; they're social problems.  And there hasn't actually been a lot of inspiring progress on the social problem front in the last hundred years.  "Give poor people more money" is mostly as far as we've gotten.  It works for problems that mostly stem from not having enough money--like malnutrition, or lack of adequate clothing.  But it's the opposite of the problem we're trying to solve now.


Why Don't Publishers Check Facts?

The Economist pens one of its customarily acerbic book reviews in which it notes an extraordinary number of basic errors:


The trouble starts when Ms Moyo ventures into economic analysis. In comparing America's economy with China's, for instance, whether you convert yuan into dollars at market exchange rates or after adjusting for purchasing power matters a lot. Measuring at purchasing-power parity makes the gap in GDP and living standards look narrower. This explains a great deal of the difference between two sets of figures that Ms Moyo cites. Yet she does not mention it. 
 This is basic stuff. Much else in elementary economics also gets mangled here. Governments usually manipulate exchange rates to make their currencies artificially weak, not strong. In the Keynesian national-income identity, G represents government spending, not the budget surplus. The idea of a special tax on sports stars' incomes to discourage youngsters from unrealistic aspirations is intriguing, if contentious; suggesting that the two groups might bargain away such effects is absurd. 
 There are some puzzling omissions. Ms Moyo rightly complains at the exclusion of big emerging economies (except Russia) from the G8. She celebrates the strengthening of their diplomatic muscles. Amazingly, she seems not to have noticed the prime manifestation of this: the rise of the G20, which since 2008 has eclipsed the smaller, rich-country club. 
Worse, Ms Moyo commits some jaw-dropping factual errors. General Motors, she writes, was bought by Fiat, "an event unimaginable just a couple [of] years earlier". Yes, and it still is: the Italian carmaker did not purchase GM, but a 20% stake in Chrysler, recently increased to 25%. France gets "almost 20% of its electricity from nuclear sources". The OECD says the figure is close to 80%. 
Ms Moyo's editors are as bad as her fact-checkers. If they couldn't spot the analytical flaws, they might have done something about the stylistic ones that range from curious analogies to long phrases in parentheses. Endnotes are used almost at random.
How does something like this happen?  Online, of course--people write quickly, and they often work from memory rather than looking up every fact.  It is, as any writer can attest, startlingly easy for a bad fact--like Fiat buying GM--to insert itself so thoroughly into your consciousness that you don't even know you ought to look it up.

But this is what fact checkers are for, and I don't understand why book publishers don't have them.  They cost money, to be sure--but not that much money.  Sadly, there are a lot of experienced magazine people around right now who could be got at very competitive freelance rates.  A quarter of a million dollars a year would get you the world's finest staff of crack fact checkers; quite a bit less money would prevent embarrassments like this book.  It might have even headed off the Arming America disaster, if a fact checker had noticed that the figures in his, er, smoking-gun table didn't add up.

Presumably the answer is that it isn't economic: readers don't care, and indeed rarely learn of the errors; there's no money in preventing the occasional catastrophe like Arming America.  But then one must turn the question around: why do magazines like The Economist, the New Yorker, and yes, The Atlantic, employ fact checkers?  Our readers are the potential consumers of books like the one that the Economist is reviewing; do they care less about accuracy in their books than in their magazine articles?

Not that anyone at The Atlantic thinks about it that way; we employ fact checkers because it seems like the right thing to do.  But why does this ethic prevail at so many magazines, and at no publishing house?

Comment of the Day

Rob Lyman suggests, of Mrs. Obama's purported relationship to rising pedestrian deaths:
I think this is kind of awesome, in the same way that I love negative campaign ads and the cheerful people who earnestly discuss high-fructose corn syrup in 30-second spots.

To which Blighter responds:
You're going to sit there & tell me you've never had a pithy-yet-informative discussion with a friend or family member contrasting the hype about the alleged evils of high fructose corn syrup with the down-home truth about how natural & wholesome it is? Really? Why, just this morning as I was enjoying one of the blueberry muffins my wife had made, I remarked about how glad I was to be having her home-cooked food, b/c it's blessedly free of high-fructose corn syrup, transfatty acids, & other commonly maligned bogeymen of our foody-age. Well, she came right back at me with all of the facts & figures that she had dug up last night during one of her regular exhaustive visits to the Corn Refiners Association website. I gotta tell you, it really opened my eyes about a few things... Mostly it made me question why I had ever thought marrying such a pedantic bore might be a good idea. Well, at least there's the blueberry muffins, I suppose.
I confess, I have wondered how the corn growers settled on "the supercilious know-it-alls of the world" as their spokesmen for the wonders of corn syrup.

Department of Awful Statistics

What to say about a statement by the Governors Highway Safety Association spokesman which seems to blame--I swear, I am not making this up--Michelle Obama's national fitness campaign for an uptick in pedestrian deaths?


In order to make this sort of statement, I'd want some pretty ironclad evidence that, first of all, Michelle Obama's exhortations were actually causing people to spend more time walking on our nation's roads--a premise that this libertarian, for one, is pretty skeptical of.

I'd also want to see some evidence that they were walking on roads where, y'know, more people were dying.

As James Joyner says:

Well, first off, there are no figures provided. Via Dr. Google, I see "The Governors Highway Safety Association says in the report that 1,891 pedestrians were killed in the first six months of 2010, up from 1,884 in the same period in 2009 -- a 0.4 percent increase. " Now, I don't know the historical variation in these things, but I'd say offhand that this is a statistically insignificant swing. Regardless, a variety of factors -- alcohol, technology, and road design among them - seem to be considered possible explanations for the slight reversal in trend.

Second, while I don't pay much attention to the social campaigns of First Ladies, I don't recall Mrs. Obama telling people that they should get drunk, strap on an iPod, and go wandering around the streets reading their BlackBerries. She's advising people to get some exercise, not to go wander around in traffic. Yes, that's technically a form of exercise. There are others.

Third, anecdotally at least, I have indeed seen an increase in pedestrians distracted by electronic devices, whether it be texting while walking or grooving to whatever's piping through their little white earbuds. Then again, I've seen the same thing among people operating automobiles -- and traffic deaths are down 8 percent during the same period.
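Joyner's "statistically insignificant" call is easy to check.  A minimal sketch, treating each half-year's toll as an independent Poisson count (my modeling assumption, not anything from the GHSA report):

```python
import math

deaths_2009 = 1884  # pedestrian deaths, first six months of 2009
deaths_2010 = 1891  # first six months of 2010, per the figures above

# For independent Poisson counts, Var(X - Y) = X + Y, so a
# normal-approximation z-score for the year-over-year difference is:
diff = deaths_2010 - deaths_2009
z = diff / math.sqrt(deaths_2009 + deaths_2010)

print(f"difference: {diff} deaths, z = {z:.2f}")  # z is about 0.11
```

A z-score of roughly 0.11 is nowhere near the conventional 1.96 threshold; a swing this size is exactly what you'd expect from chance alone.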

I presume that the spokesman had some sort of temporary freakout during the radio interview and blurted something he didn't quite mean.  It happens even to seasoned pros.  But c'mon, guys, where's the mumbling, red-faced, excruciatingly apologetic retraction?

Update:  The spokesman writes

I am the spokesman for the Governors Highway Safety Association. In the interview you reference, I did not blame Mrs. Obama for the small uptick in pedestrian deaths. I noted that in our study we note that programs such as Mrs. Obama's may be increasing the number and frequency of pedestrians and thus exposing them to more risk. We support these programs but want to make sure that pedestrians are behaving safely-not using iphones, texting, crossing in dangerous places, etc. It's ludicrous to suggest that the non-partisan, nonprofit Governors Highway Safety Association is blaming Mrs. Obama. We encourage walking/jogging, etc. We just want to make sure that this doesn't lead to more needless deaths.
Fair enough, but it seems like we should get some data on the number and frequency of pedestrians, and their relationship to programs like Mrs. Obama's.

My Last Word on Loughner

I see my colleague is still arguing that it was noble and good and even correct to leap into a discussion of violent rhetoric on the right after the shooting in Tucson.  I've been following his posts for several days now, and the argument seems to have several strands, so forgive me if I haven't gotten them all:

  1. No one is saying there's a direct causal relationship, which means that conservatives don't have any right to get miffed.  The implication seems to be that, suddenly and for no apparent reason, everyone wanted to talk about conservative bombast on the second amendment; coincidentally, a congresswoman was shot around that time.  
  2. Okay, so we're not saying there's a direct causal link, only that Jared Loughner could have picked up on this miasma of overheated talking points and then maybe gone out and shot Giffords because of it.  In the words of my colleague: "My own view, as of the current evidence, is that Loughner was first and foremost a mentally ill person, but that some shards of far right ideology had entered his paranoid brain." Sure, it's possible--though this is in direct contradiction of the testimony of, AFAICT, everyone who actually knew him, all of whom say he was uninterested in politics and didn't listen to the talk radio programs or watch the television shows that were the main focus of our national tut-tuttery.  He could also have been sent over the edge by Andrew's impassioned writing on torture, which might have convinced him that Giffords was part of a monstrous state apparatus committing war crimes.  I mean, we don't have any evidence of this, but how do we know he didn't read Andrew and go over the edge?  For that matter, I think it's about time someone called out those dangerous blowhards at the Chicago Manual of Style, since it seems quite likely that their tracts on the necessity of strict rules for style and grammar fuelled Loughner's paranoid fantasies about government and language.  The grammar conspiracy is, as I presume Andrew well knows, the only conspiracy theory that we have actually connected to Loughner's fixation on Giffords.
  3. Rush Limbaugh is a first class jerk, and Sarah Palin's a dangerous moron.
Okay, as it happens, I agree that Rush Limbaugh and Sarah Palin are net detractors from American civil society.  However unfair it is, if Sarah Palin's inept response to the foofaraw takes her out of the running for president in 2012, I think the country will be better off--not least because it means that there might be an actually viable opposition candidate, rather than someone who gets the nomination precisely because she drives the rest of the country crazy.

But this is not really a stirring defense of the attempt to link Loughner's crimes with Limbaugh, Palin, or anyone else on the right--any more than it's okay to put people in jail for things they didn't do, just as long as the cops and prosecutors are really sure that they're bad people who deserve to be in jail.  The right has a legitimate grievance here: every time there's some potential act of terrorism, it seems that people feel perfectly free to assume that it must have been a right wing lunatic who committed it.  The same people who urged us not to rush to judgement after the Fort Hood shootings didn't see anything wrong with Bloomberg's speculation that the Times Square Bombing--a bombing actually committed by a Muslim terrorist wanna-be--was probably committed by a militia member. And now this.

I am in general impatient with the notion that "discrimination against (fat people, Christians, Catholics, gays, transvestites, etc.) is the last acceptable prejudice."  As you can see by the list, there still seem to be a lot of acceptable prejudices left.  But this rush to indict conservatives for every incident of mass violence where motives are unknown does have a bit of this flavor.  We have a laudable desire to avoid making incendiary remarks about Muslim terrorism, that might result in terrible violence against a mostly law-abiding community.  So why do we express this desire by rushing to blame any possible terrorist acts on a different, mostly law abiding community?  "Round up the usual suspects" is a law enforcement tactic that we should be skeptical of no matter who it is applied to.

Andrew's defense seems to be that there are a lot of right wing jerks out there, and that by combing Loughner's writing, he can find a few sentences here and there that sort of sound like things that might have been said by one of those right wing jerks.  But I'm pretty sure that if I combed Loughner's writing, I could find some sentences here and there that imply that Loughner read Andrew's writing, or gay rights literature, or Edmund Burke.  The guy was so disjointed that people on the UFO conspiracy theory message boards were saying, "Dude, this sounds craaaazy.  You need to get help for your delusions."  Meanwhile, many of the alleged connections to right wing writings are, as Jim Lindgren has pointed out, incredibly tenuous overreadings; moreover, there are at least as many connections to left wing ideas.  Which is to say, not many, in any recognizable sense.  Trying to discern the hidden meanings and motives behind his writings means becoming, in some sense, as crazy as he was--it is trying to find a hidden order when there's no evidence of order, hidden or otherwise.

If you are determined enough to find a hidden order, however, you can manufacture one--as Loughner did.  The fundamental problem with using the Giffords tragedy as an indictment of conservative rhetoric is that the freelance prosecutors started by issuing the indictment, and then started looking for evidence to build their case as more information became available.  This is not exactly uncommon among real prosecutors when there's a lot of political pressure on, and it often results in some spectacularly weak cases.  And as with real prosecutors, when it turned out that the facts didn't really support their initial belief, the response was not to withdraw the indictment, but to frantically hunt for enough evidence to maybe get a conviction on a lesser included offense.

As any cognitive scientist will tell you, if you really go looking, you can build a case for almost anything you want to believe.  It's called confirmation bias: you start with a hypothesis, and then you look to see whether there are facts out there that confirm your hypothesis.  Since you aren't looking so aggressively for disconfirming evidence, it quickly comes to seem as if you have a pretty good case.  If you hadn't started with the hypothesis, however, you probably wouldn't have reached the same conclusion.  These are the cases that tend to fall apart dramatically in a courtroom, where the jury doesn't necessarily share your priors, and the other side gets to talk, too.

I'm sure I don't have to tell you that this effect is particularly pronounced in politics.  

The whole thing was somewhat egotistical, really--everyone immediately leaping to assume that the issues they most cared about surrounding Giffords (health care, regulation, the tea party) must also have been what Loughner cared about.  It wasn't crazy to suspect this--I too thought it might well be a recognizably political assassination.  But the level of certainty was, I think, unwarranted--and because it immediately engendered attacks on other people, it became very difficult to retract that assessment.  It is relatively easy for a seasoned pundit to admit to being wrong, but relatively difficult to admit to having wronged someone else.  This is why I try not to write about how my opponents are a specially dreadful brand of moral degenerates; it makes it too hard to walk back when it turns out I was mistaken about the facts of the case.

As it happens, I'd be very happy to have a discussion about the overblown rhetoric on talk radio and cable--as I've long noted, I dislike the nastiness of Rush Limbaugh, the bombast of Sean Hannity and Keith Olbermann, the strident parochialized victimhood of Sarah Palin.  But I am astonished that anyone believed, for even a minute, that now was a good time to have that discussion.  Did anyone really think they could have a productive dialogue that started by accusing the other side of inspiring, or encouraging, a horrific mass murder?  All I can say is, if you did, it must be something spectacular to watch when you finally decide that it's time to have that talk with the spouse about bathroom hygiene.

Our Readers Respond on Unionization

A couple of great comments on the post about Harper's.  Gabriel Rossman reminds us of a classic work:

Our hostess writes "We tend to think about labor disputes as attempts to divide the spoils of success: unions form when there are excess profits that they can divert to workers. " 
It's probably worth remembering that the subtitle to Hirschman's Exit, Voice, and Loyalty was "Responses to Decline in Firms, Organizations, and States"
Meanwhile, I am chastised for my simplistic characterization by reader Mickey Zellberg:

"We tend to think about labor disputes as attempts to divide the spoils of success: unions form when there are excess profits that they can divert to workers"

The only people who think this have no experience with actual working conditions in the real, low-wage world. The Harper's people unionized because their management were assholes. This happens all the time.

I once worked at the famous Strand Bookstore in New York City. The workers there had unionised and made little more than minimum wage under their contract. They had a crappy little health plan. The main point of unionizing for them was that it enabled them to say "F You" to the boss.

And this is borne out in many studies of unions, which show that "lack of respect" was the main motivation.

DougJ says I'm being too triumphal:


The Atlantic isn't exactly an economic powerhouse either. 

I do think that my post came off as more critical of Harper's than I meant it to.  Whatever the rights of the dispute between management and staff (I'll have more on that tomorrow), I don't think the problem with Harper's is that they're too left wing, or in some other way not enough like The Atlantic.  I mean what I say: it's really hard to make money in this business. I have a very keen appreciation of how much hard work by how many people it took to get The Atlantic to profitability, including a lot of amazing talent on both the editorial and the business side.  There's nothing I can point to and say "Harper's should have done that" because there were thousands of that's, many of which wouldn't have been appropriate to a very different magazine.  And anyway, if there's one thing that covering business has taught me, it's that even the most well-deserved success involves luck: some unexpected movement in markets, the death of a key player, or other contingencies too numerous to name, can undo even the best-laid plans.

Nor was I saying Harper's is a bad magazine; only that I've stopped reading it.  It's very expensive, and it seems to have lost some of its urgency (for me, at least) since Bush left office.  It's not a comment on the magazine's politics or quality; only on my taste, and budget.  Most political magazines flourish when they're in opposition, languish when their guy wins.  Maybe you could question the decision to be overtly political, which made them vulnerable to this cycle, but I'm certainly not going to question it; only the owners and staff can rightly decide what a magazine is for.

So I wasn't trying to throw stones.  The labor disputes have me sad and puzzled about what's happening to one of America's great journalistic institutions.  No criticism implied of anything except the belief, which I know to be false, that my magazine is lying about our success.

The Rise and Fall of the Labor Force

In keeping with today's emerging "labor" theme, I now turn to David Leonhardt's article on the mystery that is puzzling economists:  why is American GDP recovering robustly while employment growth lags behind other nations, and behind our own history?


This is not a new mystery.  You may be old enough to remember the ponderous articles on the "jobless recovery" that appeared like clockwork during the early months of the Clinton administration.  You are almost certainly old enough to recall similar fare under Mr. Bush.  Before the 1990s, recessions seemed to have a clear pattern--jobs would be lost during the downturn, then roar back along with the rest of the economy.  Since then, we've seen a different pattern: the jobs go away, and the workers take a long time to find new ones.  In 2003, Erica Groshen and Simon Potter offered a possible explanation for this in a paper for the Fed.  Unemployment was lagging, they suggested, because while previous downturns had mostly featured cyclical unemployment--employers laid off, then rehired when demand picked up--the more recent recessions were producing mostly structural unemployment: whole industries and job categories were going away, which meant that a lot of laboriously acquired human capital (both skills and networks of contacts) was being destroyed.  Those workers were taking a lot longer to find new jobs.

There's been an interesting back and forth over this theory in the blogosphere recently; like Leonhardt, I particularly recommend Tyler Cowen and Paul Krugman on the topic.  Leonhardt's suggested answer to the quandary, heartily seconded by most of the left-leaning blogs in my RSS reader, is simple: as the labor movement has declined, workers have lost bargaining power.

Relative to the situation in most other countries -- or in this country for most of the last century -- American employers operate with few restraints. Unions have withered, at least in the private sector, and courts have grown friendlier to business. Many companies can now come much closer to setting the terms of their relationship with employees, letting them go when they become a drag on profits and relying on remaining workers or temporary ones when business picks up.

Just consider the main measure of corporate health: profits. In Canada, Japan and most of Europe, corporate profits have still not recovered to precrisis levels. In the United States, profits have more than recovered, rising 12 percent since late 2007.

For corporate America, the Great Recession is over. For the American work force, it's not.

I think I can tell an intuitively compelling story where this is true (though Adam Ozimek disagrees):  unions fight layoffs harder than single workers can, so more powerful labor means fewer firings.

However, there are (ahem) some problems with this theory, and with the conclusions that others have drawn from it--leaping into lamenting the loss of labor protections.  For one thing, if it is true that labor prevents companies from firing during downturns, it almost certainly means that there is less hiring during upturns--if you can't fire workers when you're losing money, you tend to hire as few as possible in the first place.  I can, with a lot of squinting, read the Europe/US data disparities to confirm this slightly less inspiring story of unionization lowering the volatility of employment--but you have to keep your head moving in a sort of figure eight to prevent Spain's unemployment data from crossing your field of vision.

But the larger problem, I think, is that the decline of the labor movement is not an uncaused cause.  I can tell a pretty plausible story where the decline of the labor movement isn't itself causing the changes in employment trends--rather, it's a symptom of other forces which cause both the decline of unions, and more volatility in the labor market.

You can tell a lot of these "just so" stories, some more plausible than others.  Maybe the entry of women into the workforce undercut unions (bigger labor supply=less labor bargaining power, plus anecdotally, wives used to exert a lot of pressure on their husbands to stick at steady, boring union jobs; the benefits were designed to give the biggest boost to families).  Perhaps it also made it easier, psychologically, to lay people off who weren't the sole support of their families. Or perhaps the more rapid pace of structural change in the economy has decreased the returns to unionization: there's not much point investing a lot of time in a union if your job description will be obsolete in five years, and you can't expect much loyalty from a firm if you're planning to leave whenever someone else offers you more money.  Firms try to avoid layoffs whether or not there's a union because of the effect on morale; perhaps all these changes have lowered the morale cost of firing workers.

I'm not saying that all the bloggers who wrote about this are unaware of these trends; indeed, many cited them.  But they tended to assign major causal weight to the decline of the labor movement, or worker bargaining power, when those might simply be side effects of some third factor.

The most plausible third factor, to my mind, is one that not many bloggers did mention: companies are simply more competitive than they used to be.  The mighty labor-industrial complexes of the postwar era were mostly cozy oligopolies; there was a lot of value for labor to extract because they didn't have to worry so much about losing their customers.  Those cozy oligopolies had cozy relationships with bankers, who lent them money to keep overstaffed in downturns without much thought of how the depositors would feel, and the depositors didn't care because their heavily regulated accounts paid exactly the same interest rates as every other bank, and were federally insured.  The CEOs, cozily safe from interference by meddling outsiders like shareholders, could amass vast piles of cash to keep workers on even when there was little work for them, as well as building their conglomerated empires.

There were upsides and downsides to this arrangement, of course, but this is not the blog post in which to discuss them.  Rather, the point is that when this regime ended, it made it much harder for labor unions to bargain: whatever arrangements they struck suddenly had to take customers, competitors, and shareholders into account, rather than simply arguing with management over their share of a relatively fixed pie.  This made unions much less valuable to workers; it also, independently, put pressure on companies to fire workers during downturns.  Do we have any evidence that a more unionized workforce would have prevented the layoffs made necessary by a much more competitive environment?

This story--the story of more competitive industries, not less-organized ones--is arguably the best fit with the data.  It explains why our recessions now look different from our recessions in 1960, and also why our recessions look different from recessions in Germany or France, where domestic firms enjoy a lot more protection from competition on various levels than American firms do.  Maybe it even explains why someplace like Spain, where the markets are much more open than they used to be, is having so much trouble.

Or maybe not; I don't want to claim too much for any one theory.  But it's at least worth thinking about.

Tough Times at Harpers

There's something both puzzling and tragic about the labor disputes at Harper's.  I had been aware of their struggles with circulation--indeed, I'm part of them.  Given how high the price is, and how rarely I felt like I was finding surprising, challenging articles, eventually, regretfully, I stopped taking the magazine.  Apparently, a lot of other people agreed, a problem that was compounded by the recession.


The budget deficit has led the owner to make changes; the changes have led to resistance from the staff.  And since this is Harper's, naturally, the staff has unionized.

We tend to think about labor disputes as attempts to divide the spoils of success:  unions form when there are excess profits that they can divert to workers.  But this is not always the case.  Unions also form when there is no longer enough to go around, and workers are trying to hold onto more of their former share.

It's hard to see what the point is.  MacArthur, the owner, wrote the typically self-serving missives that management issues during unionization disputes, but the bits that New York Magazine excerpts are basically right: a union is not going to give the staff the power to choose the editor in chief of a money-losing magazine that depends for its continued existence on the sufferance of a wealthy heir who is currently pouring about four million dollars a year into the thing.  Nor can it prevent layoffs when losses of that magnitude are on the line.  Does the union think that it can somehow force MacArthur to keep running the place to their satisfaction?

Given where I work, I suppose I can't let an interesting side-point pass unremarked:  MacArthur apparently thinks of us as his rivals, and he flatly refused to believe that we were profitable.

In the months following Hodge's ouster, the staff became alarmed when MacArthur's name began appearing on top of the masthead (previously it had been underneath the editors' names, along with the business staff). Senior editors Bill Wasik, Luke Mitchell, and Jen Szalai departed, along with web editor Paul Ford. To fill Hodge's position, MacArthur appointed Ellen Rosenbush, Harper's' longtime managing editor, as acting editor. The move struck many staffers as a way to have a more pliant editor in charge: Rosenbush helped edit MacArthur's monthly column in the Providence Journal and his book You Can't Be President. Staffers also complained that MacArthur's business plan was doomed to fail. He seemed to show little interest in the web in general or the iPad in particular, at a moment when The Atlantic, its longtime thought-leader rival, had invested heavily online and had reaped benefits both in prestige and in financial viability. "He said no one will ever make money on the web," one staffer told me on condition of anonymity.

A couple of months after Hodge's firing, senior editor Donovan Hohn helped to convene a meeting about publishing Harper's on the iPad. MacArthur didn't attend. But shortly thereafter, staffers began receiving xeroxed articles from MacArthur in their mailboxes that trashed the iPad and Kindle. One article from the Spectator had a hand-typed line at the top: 

To: Hoipolloi
From: Rick

Last month, MacArthur wrote a column for the Providence Journal, subsequently posted on Harper's' website, that bashed the Internet. "I never found e-mail exciting," he wrote. "My skepticism stemmed from the suspicion that the World Wide Web wasn't, in essence, much more than a gigantic, unthinking Xerox machine ..."

When one staffer brought MacArthur's attention to a recent New York Times article that stated The Atlantic was profitable this year because of its heavy investments in the web, MacArthur responded: "They're lying. They're a private company and they can say whatever they want."

At least in this corner of The Atlantic, we wish our brother journalists at Harper's nothing but success; any feelings of rivalry have waned to nothingness since those rambunctious days of the 1880s.  

I hope that you will take it from me, then, that we aren't lying; we really are profitable.  I spent most of last fall being repeatedly sworn to silence on our progress towards corporate goals.  These profits were achieved, not so that we could show the fellows at other publications, but simply because everyone--owner and staff--is happier with a profitable publication.

As you can see from the Harper's experience, when there's not enough money to go around, people fight over the scraps.  Of course, getting to profitability is insanely difficult, which is why not many publications have managed the feat recently.  Given what Harper's has gone through, I can see why MacArthur would find it hard to believe in a profitable magazine.

Which Agencies Work?

Of regulatory capture and finance, a reader writes:

This is exactly how Stigler describes regulatory capture happening. If you're going to regulate trucking intelligently, you have to get people who know about trucking. The only place to get them is industry (by and large) and industry is the only place they can go back to after serving at the agency, other than academia where they train people to either go into industry or agency. Pretty soon, the agency makes rules for the benefit of the incumbent players, industry takes out ads welcoming the new mandate, innovation dies, stasis prevails, and we all end up working for the betterment of Goldman Sachs.

I said in my previous post that this could be avoided.  But is this true?  I'd like to hear both bloggers and readers nominate their candidates for regulatory agencies that work.  They don't have to be perfect, just basically functioning the way we'd want them to.  I'll nominate the FDIC; readers of all political persuasions are invited to offer candidates for what agencies work, and why they might be different.

I am not interested in comments focused on the agencies you don't like, or your belief that there is no such thing as a good government agency.  For the purpose of commenting on this post, accentuate the positive!

Eileen Rominger Heads to the SEC

Stand by for more fuming about the revolving door between regulatory agencies, and those they regulate: Eileen Rominger, who announced this fall that she was leaving Goldman Sachs, has landed as the new Director of the Division of Investment Management at the SEC. It's an interesting appointment, because it's the first time in a long time--maybe ever--that the position has gone to an actual practitioner of investment management, rather than a lawyer who specializes in the relevant law. Rominger was the Chief Investment Officer of the portfolio management businesses in Goldman Sachs Asset Management.

A source who works in securities law says that for all the fuss this is bound to raise, it might be a good thing: "I would think it has a very good chance of being good for the agency. Anyone with her level of experience should be pretty familiar with the regulatory structure around investment management, and her level of direct asset-management experience may well be a very fresh and welcome perspective." After a moment, my source added, rather dryly, "I have seen some examples of the upper levels of the SEC being unwilling to recognize that the world contemplated in certain regulatory requirements does not reflect reality."

Of course, she will also naturally tend to see things through the eyes of an investment professional--which generally means thinking that what is good for investment professionals is good. But this is simply an inherent problem with regulating markets. Who knows the most about the markets being regulated? The firms. It would be foolhardy to try to regulate without their input--you'd be very liable to do something that sounded good, but turned out to be disastrous. (This is approximately what the communists in Russia and China did, and no, I am not saying that regulating the stock market is like communism--I'm saying it's not like communism, because we seek input from those we regulate.)

But the natural result of seeking input and information from regulated entities is what historians and political scientists call "regulatory capture"--the regulating agency comes to reflect the interests of the people it regulates. (Often, by, say, helpfully erecting barriers that keep out smaller competitors.) You can control this tendency, but you can't eliminate it. As I've said before, it's like that old Woody Allen joke: "I think my brain's the most important organ--but then, look who's telling me that!" The very act of getting information from the people you regulate will subtly bias your worldview towards theirs, and therefore their interest.

Should we try to control this tendency by keeping them out of our agencies? As my source indicates, this leaves agencies vulnerable to making stupid mistakes. So I don't think there are any very good answers.

Where are All the Sick People Who Can't Get Insurance?

Approximately 300 million Americans face a serious risk of being killed in an auto accident.  Which is to say, there are auto accidents, and the entire population of the country is at risk of being one of the thousands of people every year who are killed on our nation's highways.


That statement is about as useful as a new report from HHS, presumably timed to undercut the GOP as they debate their fruitless attempt to repeal the health care bill.  HHS says that millions of people--about half the country, in fact--either have, or have a loved one with, a condition that could cause them to have difficulty securing insurance.

As with the catchy opening sentence on auto deaths, this turns out to be much less interesting when you examine it.  I don't really want to know who could conceivably be affected by a problem--after all, even someone with no medical conditions now could develop one later.  What I want to know is how many people this problem actually affects.


You Can Have My Double Space When You Pry it From My Cold, Dead Hands

I was going to write something on why Farhad Manjoo's polemic on the double space after a period is dead wrong.  But Tom Lee's piece on the topic is so superlatively better than what I would have written that I will just turn the mike over to him:


I'll take Manjoo's word that all typographers like a single space between sentences. I'm actually pretty sympathetic to arguments from authority, being the big-state-loving paternalist that I am. But, with apologies to friends and colleagues of mine who care passionately about this stuff, I lost my patience with the typographically-obsessed community when they started trying to get me to pay attention to which sans-serif fonts were being used anachronistically on Mad Men.

I love you guys, but you're crazy. On questions of aesthetic preference there's no particular reason that normal people should listen to a bunch of geeky obsessives who spend orders of magnitude more time on these issues than average. It's like how you probably shouldn't listen to me when I tell you not to use .doc files or that you might want to consider a digital audio player with Ogg Vorbis support. I strongly believe those things, but even I know they're pointless and arbitrary for everyone who doesn't consider "Save As..." an opportunity for political action.

Nor should we assume that just because typographers believe earnestly in the single space that their belief is held entirely in good faith. They're drunk on the awesome power of their proportional fonts, and sure of the cosmic import of the minuscule kerning decisions that it is their lonely duty to make. Of course they don't want lowly typists exercising opinions about letter spacing. Those people aren't qualified to have opinions!

(For what it's worth, I don't think you rabble should be using Flash or Silverlight or anything other than plain text in your emails. You can't be trusted with it! And, not that this motivates me or typographers at all of course (we just want what's best for you), but when you do such things it makes my job slightly harder.)

Manjoo's argument about beauty, like all such arguments, is easy enough to dismiss: I disagree. I find it easier to read paragraphs that are composed of sentences separated by two spaces. Perhaps this is because I, like most technologists, spend most of my time working with (quite lovely!) fixed-width fonts for practical reasons. But there's also a deeper beauty to the two space rule -- a sort of mathematical beauty. Let me explain.

Consider the typical structure of writing. Letters are assembled into words, which turn into phrases, which are arranged into sentences -- at the same time being assigned to speakers, a neat trick -- which are then combined into paragraphs.

It's a chemical process, a perfect and infinitely flexible hierarchical system that should command our admiration. Being able to rationally examine, disassemble and interrogate the final product is a mark of the system's beauty. Anything less is settling for a sort of holistic mysticism.

It's disrespectful to let writing's constituent elements bleed into one another through imprecise demarcations. If you see me "making mistakes with comma placement", please rest assured that it's deliberate. In most cases the comma doesn't belong to the phrase delimited by the quotation marks that enclose it. Placing an exclamation point or question mark to the left or right of a close-quote is a weighty decision! That we violate the atomic purity of quotations with injected commas is an outrage.


Let me just add: if you're spending time worrying over whether my emails contain one or two spaces, you need to ask them to let you out of the asylum more often so you can pursue a more interesting hobby.  I double space after sentences because I learned to type on a manual typewriter, and it's not worth the effort to retrain myself.  Even if typographers groan every time they open one of my missives.

The Problem With Too Many Currencies

Matt Yglesias muses about the possibility of a Brooklyn-based currency:

Here's a random paragraph from Paul Krugman's opus on the Euro:

I think of this as the Iceland-Brooklyn issue. Iceland, with only 320,000 people, has its own currency -- and that fact has given it valuable room for maneuver. So why isn't Brooklyn, with roughly eight times Iceland's population, an even better candidate for an independent currency? The answer is that Brooklyn, located as it is in the middle of metro New York rather than in the middle of the Atlantic, has an economy deeply enmeshed with those of neighboring boroughs. And Brooklyn residents would pay a large price if they had to change currencies every time they did business in Manhattan or Queens.

I guess I wonder how inconvenient this would really be in 2010 as opposed to 1970. Individuals wouldn't, after all, really need to "change currencies" every time they went to Manhattan. You could buy things with your credit or debit card, and if you only had Brooklyn Bucks in your pocket, American dollars are only an ATM visit away. I think the real issue here isn't so much that it would be too inconvenient as that it wouldn't be inconvenient enough--getting US dollars and dollar-denominated financial assets would be so simple that US dollars would circulate widely in Brooklyn and Brooklyn Bucks would wind up being marginalized.

Eh, the currency exchange problem is pretty big, as long as you're forward-looking. Ordinary people who don't operate businesses, and travel mostly for pleasure, don't tend to think about this so much. For us, the major problem with operating in different currency zones is the hassle of changing money. But for anyone who enters into contracts, currencies are a major problem. After all, what happens if the exchange rate changes?

As a general rule, people want to get paid in the same currency in which their obligations are denominated. Matt would not like a ten-year contract that paid him in Brazilian reals while he lived in the United States, even if pulling the money out of his bank account in dollars was a painless transaction. What if the real dives 40% against the dollar? How would he make the mortgage?
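The arithmetic behind that worry is worth making concrete. Here is a minimal sketch--with entirely made-up numbers for the salary, the mortgage, and the exchange rates--of what a 40% fall in the real does to someone paid in reais but owing a fixed dollar mortgage:

```python
# Hypothetical illustration of currency mismatch risk: income is fixed
# in Brazilian reais, but the monthly mortgage is owed in dollars.

def dollar_income(salary_brl: float, usd_per_brl: float) -> float:
    """Convert a fixed real-denominated salary into dollars at a given rate."""
    return salary_brl * usd_per_brl

salary_brl = 10_000.0    # monthly pay, fixed by contract, in reais
mortgage_usd = 2_000.0   # monthly obligation, fixed in dollars

before = dollar_income(salary_brl, usd_per_brl=0.50)  # $5,000/month
after = dollar_income(salary_brl, usd_per_brl=0.30)   # real dives 40%: $3,000/month

# The mortgage payment didn't change, but its bite out of income did.
print(f"mortgage as share of income: "
      f"{mortgage_usd / before:.0%} -> {mortgage_usd / after:.1%}")
```

Nothing about the borrower's situation changed except the exchange rate, yet the same mortgage went from taking 40% of income to taking two-thirds of it. That is the risk that a unified currency removes.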

This is why the annual reports of multinational corporations so frequently contain discussions of currency risk, and currency movements that either boosted or deflated their profits. When you're dealing with developed countries, this isn't usually an enormous effect, because multinational operations rarely sell something in a currency zone without incurring expenses in that currency zone; since those move together, it mitigates the problems. But companies that are exposed to currency risk do try to hedge it where they can.

The European borrowers in places like Iceland who started thinking about currency like tourists--as mostly an issue of convenience--got themselves into big trouble. A lot of people in Iceland took out loans denominated in other currencies, expecting that their biggest risk was a fairly minor movement in the relative exchange rates. When there were big swings, they were hammered. Of course, that has happened to exporters, too, but it's not the sort of mistake anyone makes twice.

So having a unified currency really does enhance economic transactions, by removing the currency risk that people face: it's now much less dangerous to enter into forward-looking contracts with people in other countries. But as Krugman and others have documented, it also has substantial costs.

Little Girl Lost

My dad worked for the government when I was little, so I met some politicians as a child--there is an adorable picture of me, age four, with Mayor Koch at some swearing-in.  But since a nine-year-old was shot while meeting Congresswoman Giffords, I've been thinking about one meeting in particular: the time I met Senator Moynihan.  I was eight.  I shook his hand so enthusiastically that I may have injured it.

I haven't been reflecting so much on that meeting, which I barely remember, as realizing the magnitude of what was lost.  Think of the last thirty years of your own life, how much of it there has been.  All of that was robbed from her, and another thirty or forty years to come.

What, Exactly, is the Problem With Foreclosures?

Since he has frequently made the point in the comments section that non-lawyers are egregiously misunderstanding the issues at stake, I have asked Rob Lyman to explain what he means.  Rob used to have his own blog, once upon a time, and as you can see, the world lost a great blogger when he gave it up.


So, thanks to the generous invitation of our gracious hostess, herewith my discussion of why the mortgage mess has nothing to do with "archaic land records systems." There are doubtless more legal errors that merit discussion, but I'm going to stick with what I know best. Several things should be said: 1) This is not legal advice. It's general information. Consult a lawyer, and pay him well. 2) The law varies considerably from state to state, so some of what I say may be downright wrong in your state; I'm speaking in broad generalities here. 3) I'm not actually a land lawyer. I'm a patent lawyer. It happens that patents have a title system which closely resembles the land title system, and yes, I've written a mortgage intended to cover a patent. But I may make mistakes that a real estate lawyer wouldn't make. Feel free to correct me.
I'm going to start with a summary of the purposes and effects of the recording system used by most counties in the US. Then I'll pause briefly to discuss the rules surrounding transfers of the documents at issue. After that, it should be clear what problems the banks are currently experiencing, and what they should have done to avoid them.

So. Land records. It is obvious that it is possible to have possession and apparent ownership of land without actually owning it, or having the right to sell it. You can't just walk up to the guy living in the house and buy it from him and expect that to go well. Renters obviously can't sell the house, and there are many sorts of "estates" in land which permit possession without the power of sale. The way one transfers (or proves) ownership is by deed: here is a document showing that the previous owner transferred it to me, and showing what sort of transfer it was.

But of course the question arises: how did the previous owner get the right to do that? Who sold it to him? This gives rise to the concept of a "chain of title," stretching back to time immemorial, which at common law was 1189. So now suppose your seller shows you a nice chain of title going back to the coronation of Richard I, and you buy the house. The next day, some other dude shows up and says HE bought the house from the same owner, too. Or worse, somebody shows up with a chain of title dating back to the coronation of Richard II, claiming to have bought the place from an entirely different "previous owner." Which one of you gets it?

At common law, first in time was first in right; whoever had gotten their deed from the previous owner first won, and the other guy had no recourse but to sue the seller for fraud. Or maybe sue the guy who has been dead for a few hundred years for fraud. This system was obviously not fully satisfactory. The solution is to permit buyers to record their deeds in searchable books at the county courthouse, so that you can see if anyone else has bought the place before you.

Additionally, the recording laws operate to cut off the interests of people who fail to record their deeds. This means that if buyer A fails to record his deed, and buyer B comes along later and, after a diligent search of the land records, buys the house from the sneaky seller, buyer B can deprive buyer A of the property by recording. There are different types of recording statutes which operate in slightly different ways, but the basic rule is this: you don't have to record, but if you don't, you risk somebody coming along later, "buying" the place from the same guy who sold it to you, and then kicking you out successfully. On the other hand, if you record promptly, you can be protected from somebody who bought before you, but failed to record, as well as anyone who buys after you.
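The priority logic can be sketched as a toy model. This is a deliberate simplification of a pure "race" statute (real statutes are usually "notice" or "race-notice" and turn on more facts, like whether the later buyer had notice of the earlier sale); the claim format and function are hypothetical, for illustration only.

```python
# Toy model of a pure "race" recording statute: first to record wins,
# regardless of who bought first. If nobody recorded, fall back to the
# common-law rule: first in time is first in right.
# Each claim is a tuple: (buyer, purchase_order, record_order or None).

def race_statute_winner(claims):
    """Return the buyer who prevails under a simplified race statute."""
    recorded = [c for c in claims if c[2] is not None]
    if recorded:
        return min(recorded, key=lambda c: c[2])[0]  # earliest to record
    return min(claims, key=lambda c: c[1])[0]        # common-law fallback

# Buyer A bought first but never recorded; buyer B bought later and recorded.
# Under the race statute, B's recording cuts off A's unrecorded interest.
print(race_statute_winner([("A", 1, None), ("B", 2, 1)]))  # B
```

The same ordering logic is why recording promptly protects you both against earlier unrecorded buyers and later ones.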

Lenders who get mortgages from buyers can (and should!) also record their mortgages, and for the exact same reasons: they don't want their interest in the property (the right to take it over if they aren't paid) to be cut off by a subsequent buyer. Mortgages will have a priority based on the order of their recording: first recorded will generally be the first paid. So recording the mortgage is a pretty good idea.

Note something here, however: recording is generally not mandatory, nor does it affect the validity of a deed or mortgage. It is perfectly possible to, say, bring a trespassing action against someone even if you have no recorded deed, as long as you can produce the actual deed to show the court you are in fact entitled to bring it. Similarly, to foreclose, no recordation is necessary. The county records affect ONLY the rights of prior and subsequent buyers/mortgagees. So the fact that mortgage securitizers didn't record these transfers does not in any way affect the validity of the transfers. They weren't required to record them.

One other thing to note is that county recording offices usually record only documents affecting title to land (although they do vary in both their rules and the rigor with which they are enforced). That means that transferring the note (the evidence of the debt) wouldn't even be recordable in many places; only the mortgage (which is a security interest in real property) would even be accepted by the clerks.
So what happened?

Well, there's a right way and wrong way to transfer the rights to both a stream of payments (the note) and to foreclose (the mortgage or deed of trust). I have been saying that the former calls for "wet ink" on the face of the document, which nk101 pointed out isn't quite right. Wet ink is required to make someone a "holder" of the note, but not to grant the right to enforce it, for which being a mere "assignee" will suffice. So I got that wrong (or exaggerated a bit). But what is clear is that to transfer ownership of either a note or a mortgage, you must do it in writing. A handshake or a call to your broker isn't good enough, even if it's a really really earnest handshake. This doesn't necessarily have to be a literal piece of paper, although I confess I have a personal preference for that sort of thing. You could do it with some kind of electronic document and digital signature, if you like. But either way, the transfer must be evidenced by some kind of document, which you will then have to produce for the court to prove you have the rights you say you have.
Also, of course, somebody is going to have to produce proper evidence of the original note and mortgage; a scan or photocopy should suffice, presuming your bank hasn't managed to get a reputation for forgery, in which case the judge may start wanting to see wet ink on that, too.

It appears that the formality of a writing was neglected in many MERS-related cases. So while the bankers all agree amongst themselves (thanks to MERS), that Servicing Company A has the right to receive payments and sue for foreclosure, SCA is unable to produce documentary evidence sufficient to support that claim in court.

However, SCA was never required to record the transfer of the mortgage, and had the transfer been properly accomplished, the failure to record would have had no effect on their right to foreclose. SCA is in some sense taking a risk that whoever sold them the mortgage might sell it to someone else, too, but since bankers have deep pockets, they don't generally go in for that kind of easily proven fraud. Plus, since all the bankers are using MERS, an erroneous extra transaction would probably be denied by the MERS system (I don't really know how MERS works, but I presume this is why it was created: to give banks some of the confidence that recording laws give them).

The problem here is improper assignments of mortgages (and notes), not failure to do something which was, after all, never legally required. In some sense, a policy of recording transfers would have prevented the problem, because if you're going to go record something, you need to actually have that something in hand, and thus you wouldn't neglect to create that something. However, failure to record is not an attempt to "circumvent the law," and having a highly sophisticated recording apparatus wouldn't have prevented idiots from failing to effect transfers in writing.


