Opinion


Is Uncle Sam ever truly an investor?

Christopher Papagianis
May 2, 2012 15:48 EDT

Last week, a debate erupted about whether the government’s massive Troubled Asset Relief Program (TARP) made or lost taxpayers money. Assistant Secretary for Financial Stability Timothy Massad and his colleagues at the Treasury Department argue that TARP is going to end up costing a lot less than originally expected and may even end up turning a profit for taxpayers. Breakingviews Washington columnist Daniel Indiviglio scoffs at this, arguing that TARP “looks more like a loss of at least $230 billion.”

While the two sides are miles apart on their calculations (and it is important to examine why), their disagreement reflects a broader philosophical dilemma that deserves more attention. It concerns whether the U.S. government should be held to the same standards as private investors. Put another way, should policymakers adopt the same analytical approach that private-market participants use to evaluate or measure the prospective return on new investments? The answer has important consequences for defining the roles for the public sector and private enterprise – and particularly how the U.S. government accounts for all of its trillions in direct loan programs and loan guarantees.

Let’s start by using TARP as a case study. The calculation Treasury uses is simple: If a bank that received a TARP capital injection pays back the original amount, then the taxpayer broke even. If some interest or dividend income (i.e., on the government’s ownership stake from the injection) is generated, then the taxpayer likely made a profit on the investment.

Indiviglio takes a different approach, arguing that Treasury’s “fuzzy math wouldn’t fly with any sensible portfolio manager.” He insists that the government needs to factor in the cost of money and its value over time.
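The gap between the two approaches is easy to see with a toy calculation. The figures below are hypothetical, not TARP’s actual cash flows; the point is only the mechanics: an investment that is repaid in full, plus a modest dividend, shows a nominal profit under money-in-money-out accounting but a loss once the cash flows are discounted.

```python
# A minimal sketch of the time-value point. All figures are hypothetical,
# chosen only to illustrate the two accounting approaches.

def npv(cash_flows, rate):
    """Net present value of (year, amount) cash flows at a given discount rate."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

# Inject 100 at year 0; get the 100 back plus 5 of dividends at year 3.
flows = [(0, -100.0), (3, 105.0)]

print(npv(flows, 0.00))            # money-in-money-out view: 5.0, a "profit"
print(round(npv(flows, 0.05), 1))  # with a 5% cost of money: -9.3, a loss
```

The same cash flows flip from a profit to a loss the moment the cost of money is acknowledged, which is Indiviglio’s core objection.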

The crux of this argument is about whether the government’s investment strategy should be evaluated in the same way that a private investor would evaluate a potential investment. Massad confronted this head on in his rebuttal to Indiviglio (delivered through Politico’s Morning Money):

“The [Indiviglio] piece doesn’t look at the math correctly in light of the purpose of, and need for, the TARP investments. The government isn’t a hedge fund and nor should it have acted like one. We made these investments to help put out the financial fire and prevent a second Great Depression. And it’s certainly good news for taxpayers that we’re going to get most if not all of the money back…”

While the “money-in-money-out” approach has obvious intuitive appeal, there are actually ways of demonstrating its limitations. One stems from the Congressional Oversight Panel (COP) for TARP, which looked at this issue back in 2008 and 2009. The COP commissioned a valuation project to determine whether the Treasury received a fair value price for its investments. The COP found that “additional information about the value of the TARP transactions could be derived by comparing those transactions to three large transactions involving private sector investors that were undertaken in the same time period.”

  • Berkshire Hathaway purchased an interest in Goldman Sachs (September 2008)
  • Mitsubishi UFJ announced an investment in Morgan Stanley (September 2008)
  • Qatar Holding LLC purchased a stake in Barclays (October 2008)

While COP noted these private investments were not perfect analogs, comparing these transactions with the government’s investments did reveal that “unlike Treasury, private investors received securities with a fair market value … of at least as much as they invested, and in some cases, worth substantially more.” The COP valuation report concluded that: “Treasury paid substantially more for the assets it purchased under the TARP than their then-current market value.”

Here is the key table from the COP report:

This table shows that the government injected capital into these institutions at a 28 percent premium (on average) to what other private investors were willing to pay (the table’s 22 percent is the subsidy rate, or percentage of purchase price that went directly to bank management and shareholders). Note, these figures were calculated after taking into account any boost in value the financial firms got from the announcement that they would be receiving support under TARP (and the Capital Purchase sub-Program).
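The 28 percent premium and the 22 percent subsidy rate are two expressions of the same gap, related by simple arithmetic. A minimal sketch (with purchase price P and fair market value V as the only inputs):

```python
# The COP table's two headline figures are the same overpayment measured
# against different bases. If Treasury paid price P for securities worth V:
#   subsidy rate = (P - V) / P   (share of the purchase price given away)
#   premium      = (P - V) / V   (overpayment relative to market value)
# The conversion between them is premium = s / (1 - s).

subsidy_rate = 0.22                          # from the COP report
premium = subsidy_rate / (1 - subsidy_rate)  # algebraic conversion

print(round(premium, 2))  # → 0.28, the 28 percent premium cited above
```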

At its most basic level, Massad’s argument is fairly circular: What Treasury did was rescue the financial system, which was good because it rescued the financial system. That is to say, the capital injections through TARP broke even, on average, largely because the capital injections themselves stabilized the financial system. But this valuation debate is not about whether the government should or should not have injected capital into these institutions. That is taken as a given. The question is whether that capital should have been injected on such concessionary terms. Sure, Warren Buffett didn’t have enough capital to rescue the entire financial system, but why couldn’t the government have driven the same bargain?

Indiviglio concludes his argument on TARP by reinforcing this key point:

Even using a more conservative discount rate of 10 percent would still leave the loss at over $190 billion. The U.S. Treasury isn’t a hedge fund, so was willing to invest poorly for the bigger, unquantifiable return delivered by stability. But rather than try and obscure the painful price tag of its rescue, it should be emphasizing that avoiding a global meltdown was worth the cost.

This section also identifies the key variable – the discount rate – that determines the true cost of the program. Most people don’t know this, but an “only in Washington” law (codified under the Federal Credit Reform Act) requires that, to project the costs of a federal loan program, official scorekeepers must discount the expected loan performance using risk-free U.S. Treasury interest rates. There is no factor for “market risk” – the likelihood that loan defaults will be higher during times of economic stress and that those defaults will be more costly as a result.

Jason Delisle of the New America Foundation has written extensively on this topic, arguing that when the government values risky investments using only risk-free discount rates, lawmakers have a perverse incentive to expand rather than limit the government’s loan programs. This is because a private-sector institution that extends the same loans (say on the exact same terms) would be required to factor in market risk. And, when this difference results in the government program showing that its lending activities would turn a profit, policymakers are inclined to expand the program to capture more of these fictitious profits (which conveniently they can also spend on other programs, even if the returns never materialize).
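Delisle’s “fictitious profits” point can be illustrated with a toy loan. The cash flows and rates below are hypothetical; what matters is the mechanics: discounting the same expected repayments at a risk-free rate rather than a market rate flips a scored loss into a scored profit.

```python
# Sketch of the FCRA discounting problem, with hypothetical numbers.
# The same expected cash flows score as a "profit" at the risk-free rate
# and as a cost at the rate a private lender would demand for the risk.

def present_value(cash_flows, rate):
    """Discount a list of yearly cash flows (years 1, 2, ...) at a flat rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

disbursement = 100.0               # government lends 100 today
expected_repayments = [22.0] * 5   # expected repayments net of defaults

risk_free = 0.02   # Treasury rate, as FCRA prescribes
market = 0.06      # hypothetical rate reflecting market risk

print(round(present_value(expected_repayments, risk_free) - disbursement, 1))  # → 3.7
print(round(present_value(expected_repayments, market) - disbursement, 1))     # → -7.3
```

Under the official scoring rule the program books a gain, which lawmakers can then “spend,” even though a private lender pricing the identical loans would record a loss.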

The confusion about how to view the U.S. government’s role as an investor has led many in Washington to argue that loan programs can subsidize everything from mortgages to student loans – all at no cost to the taxpayer. The principal concern with not evaluating government and private investments in the same manner is that the government’s purported profits are often cited as proof of an inherent advantage the government has over the private sector in delivering credit. The truth is that this result generally comes from a less-than-full accounting of the risks taxpayers have been made to bear. (For more, see the work by Deborah Lucas at MIT, who also consulted on the COP report and other CBO studies.)

Jason Delisle and I wrote a piece at Economics21 last month spotlighting a very revealing comment that Shaun Donovan, secretary of the Housing and Urban Development Department (HUD), made before Congress defending the status quo on government accounting. Donovan argued that the Federal Housing Administration could provide 100 percent guarantees on the credit risk of low-downpayment mortgages, charge less than a private company would for bearing this risk and still make a profit for taxpayers. In his view, FHA “doesn’t have shareholders,” and it doesn’t “have a need for return on equity.”

Here is the bottom line: When the government issues a loan guarantee, it’s the taxpayers who become the equity investors. They are the ones who will be asked to make up any unexpected loss on the loans over time. U.S. Treasury holders certainly won’t be asked to take a haircut.

Just because the government, rather than a private company, extends a loan doesn’t mean that the market risk vanished. Taxpayers would be better off if the government’s accounting rules for its credit programs reflected that there is only one true cost of capital – and it’s the price investors are willing to pay in the market.

PHOTO: Boxing gloves during a training session of heavyweight boxing titleholder Vladimir Klitschko of Ukraine in Duesseldorf, March 17, 2010. REUTERS/Ina Fassbender

Can Silicon Valley fix the mortgage market?

Christopher Papagianis
Apr 25, 2012 12:12 EDT

Without question, the rise of social networks has been the dominant theme in Silicon Valley over the past few years. Platforms like Facebook and Twitter have inspired countless startups looking to latch on to networks to deliver new applications and services for consumers. In many ways, the glue that binds these enterprises is an advanced ability to organize and analyze the reams of user data generated by these networks or systems. Entirely new business models have emerged to try and capitalize on this improved understanding of consumer preferences and behavior.

Over the last couple of years, the analytics experts in Silicon Valley have started to turn their attention to other big data problems. A question that is increasingly attracting their attention is: How can the fallout from the subprime mortgage crisis be better managed for all the players involved, including at-risk homeowners, lenders, mortgage servicers and investors?

We’ve heard a lot about the near-universal frustration that at-risk borrowers have had with their mortgage servicers. The common refrain is that if mortgage servicers could only make smarter and quicker decisions on how to modify the terms of individual mortgages, then there would be fewer foreclosures on the margin and lenders or mortgage investors would actually lose less money in aggregate, since the foreclosure process itself is costly.

For many, the challenges in this area are about asymmetries of information or structural market frictions, since win-win outcomes aren’t realized as often as they should be in an otherwise efficient marketplace. Glenn Hubbard and Chris Mayer at Columbia University have developed plans for addressing some of the frictions that have blocked borrowers from taking advantage of today’s low interest rates by refinancing their mortgages. But now new companies, some with their roots firmly established in Silicon Valley, are eyeing the mortgage servicing market as fertile ground for deploying their creative and analytical firepower.

A prime example is Palantir Technologies (pronounced Pal-an-TEER). At its core, Palantir develops platforms that help other companies integrate and analyze their data. Initially Palantir’s focus was on the intelligence and defense community, helping organizations like the CIA and FBI ferret out terrorist activities. Analogous platforms have since been developed to help financial institutions comb through their networks to identify suspicious or fraudulent transactions. Hedge funds, including one of the world’s largest – Bridgewater Associates LP – have also knocked on Palantir’s doors looking for ways to leverage its open-ended, extensible platform to better process and integrate their investment-related data and research, which often come from multiple sources.

Joe Lonsdale, co-founder of Palantir, and Rosco Hill, one of his colleagues, recently gave a TEDx New Wall Street presentation about how this Silicon Valley-to-Wall Street workstream is evolving and has the potential to improve mortgage servicing. One of the underappreciated problems in the mortgage market today is that most servicers are still playing catch-up from when the housing bubble burst and their systems started to get overloaded with processing non-performing mortgages. A top U.S. Government regulator on housing – the Federal Housing Finance Agency (FHFA) – has even launched an initiative to restructure the way that mortgage servicers are compensated to try to establish a more durable servicing industry that is better prepared for boom-and-bust cycles.

The activities and costs associated with servicing a performing versus a non-performing mortgage differ fairly dramatically. Before the housing crisis, when home prices were rising and foreclosure levels were down, the servicing industry was primarily thought of as a payments processing business (i.e., sending out forms to borrowers, collecting payments and passing cash flows on to lenders or investors). The best servicers were the most efficient processors, looking at each turn for new ways to streamline their systems to reduce costs and achieve economies of scale as a way to maximize returns.

Servicing non-performing mortgages, however, is a labor-intensive business. It can be difficult if not impossible to achieve economies of scale, since many mortgage workouts or modification strategies involve direct or personal interactions with borrowers. The underinvestment in servicing technology heading into the housing crisis was perhaps best summarized by FHFA:

Prior to 2007, servicers were mainly focused on building efficiencies in the servicing of performing loans in order to reduce costs and optimize financial returns. Relatively few chose to invest in the technology, systems, infrastructure and staff needed to service large or rapidly growing volumes of non-performing loans. Consequently, many servicers were ill-prepared to efficiently process the high numbers of delinquencies that occurred after the housing market collapsed. Since then, the servicing industry has increased its investment in the processes and technologies needed to meet the challenge of servicing non-performing loans in today’s environment.

While the five big banks that dominate the servicing industry (Wells Fargo, Bank of America, Citigroup, JPMorgan Chase and Ally Financial) have increased their investments in servicing-related technologies and infrastructure over the past few years, smaller servicers are also now looking to gain market share. Part of this emerging story is about the new data platforms that special servicers are utilizing to distinguish themselves from some of their competitors.

One area where new technologies are starting to make a difference is in helping servicers approve short-sale transactions as an alternative to foreclosure. (A short sale is when the lender accepts less than the full amount owed on the debt in a sales transaction, but still releases its claim on the underlying real estate.) The promise of new technology platforms on this front is that they can connect different data sets on home prices and other variables, whereas many big bank servicing platforms still rely on closed systems that don’t easily integrate all the public and proprietary data sources that are available. Better data integration allows for more comprehensive search and discovery processes, which have the potential to help servicers confirm what exactly is a fair value price for a home in a declining market.

The ultimate goal is to find the “efficient” spot on the spectrum between fully automated and individually personalized mortgage modification solutions. The key is using all the data that’s out there to gain a better understanding of why individual borrowers are at risk of foreclosure, learning how better data can speed up the decision process for servicers evaluating modification options, and identifying common factors that could lead to the development and then deployment of more personalized foreclosure avoidance strategies.

The mortgage market has long been driven by quantitative analytics, but Joe Lonsdale and Rosco Hill framed a key question in their TEDx presentation that suggests a transformation of sorts is playing out at the nexus of Silicon Valley and Wall Street. In describing an exchange that a Palantir team had with a large bank that was evaluating new servicing-related technologies, a bank executive asked both his own IT-servicing department managers and the Palantir data mavens in the room whether the answers to today’s servicing-related challenges were more about finding mechanical or creative solutions. The answer is both, but it’s the underappreciated role of creativity in the development of the data platform itself (i.e., turning an analytical tool into a decision-making platform) that gives Silicon Valley the edge in providing a real breakthrough on mortgage servicing.

PHOTO: Realtor and bank-owned signs displayed near a house for sale in Phoenix, Arizona, January 4, 2011. REUTERS/Joshua Lott

COMMENT

This is not a big data problem. The largest mortgage loan originator and servicer was Countrywide, now Bank of America. The computerized Countrywide Loan Underwriting Expert System (CLUES) processed all the Uniform Residential Loan Applications (URLA), Form 1003, to determine all the variations of loan criteria, summarized with a decision: “RECOMMENDED” or “NOT RECOMMENDED”.

The problem is not that the extensive and detailed database was inadequate, nor the underwriting – aka “Artificial Intelligence” – risks not determined. The problem was that a lot of people decided to ignore these “recommendations” in pursuit of higher returns.

The phony “complex” risk models created to justify high-risk low-doc & no-doc loans (not just subprime) to known unqualified borrowers weren’t questioned as to one basic assumption: “Housing prices will always go up.”

We don’t need any geniuses from Silicon Valley or anywhere else to tell us what went wrong with the casino culture of Countrywide, Fannie Mae, Freddie Mac, IndyMac, AIG, Citigroup, JP Morgan Chase, Wells Fargo and now Bank of America.

I used to work for two of these entities. The fraud was well known, not exactly a secret to thousands of employees including managers and executives. What’s being done about changing this casino culture? Outside some curious pieces on “60 Minutes” and detailed discussions on “Moyers & Company”, essentially nothing beyond talk. The Department of Justice continues to sit on its hands and bemoan the lack of funding to pursue these elusive thousands of potential witnesses.

BTW, this was (and is) not primarily a subprime loan crisis, as the media keeps harping, but a serious problem of obsessive gambling with low-doc and no-doc loans (a large portion of them ARM loans) whose losses were guaranteed 100% by the federal government. This hasn’t changed one whit.

The Tenth Percenters win. Their capital is preserved and they continue to receive better than ten percent returns from the casino “banks”.

Posted by ptiffany | Report as abusive