Kevin Drum

Chutzpah Award of the Week

| Tue Apr. 6, 2010 7:42 PM EDT

In a column today declaring that "global warming is dead," Wall Street Journal editor Bret Stephens marshals this as part of his evidence:

In Britain, environmentalist patron saint James Lovelock now tells the BBC he suspects climate scientists have "[fudged] the data" and that if the planet is going to be saved, "it will save itself, as it always has done."

This takes herculean chutzpah. It's true that Lovelock thinks that climate scientists at East Anglia might have fudged some data, but here's what he has to say to BBC interviewer John Humphrys about climate change more generally:

Humphrys: You say "if" global warming happens. You believe it both is and will get a lot worse?

Lovelock: Yes, I do believe it will get a lot worse. You can't put something like a trillion tons of carbon dioxide into the atmosphere without something nasty happening.

Humphrys: How many of us, in your view, will not survive in the process?

Lovelock: ....If it really does warm up as badly as I've said in [The Vanishing Face of Gaia], as it might well do, then we'll be lucky if there's a billion left.

Humphrys: So in other words, seven out of eight will die?

Lovelock: Well, something like that.

Humphrys: Do you believe the science has been misrepresented to us?

Lovelock: No, I just think there are too many people doing it.

The only reason Lovelock says the earth "will save itself, as it always has done" is that he thinks climate change is likely to be so catastrophic that there's nothing we can do about it anymore. And by "save itself," he means only that the globe will go on spinning with all but about a billion of us dead.

I think Lovelock is wrong about it being too late to affect climate change, but that's neither here nor there. Regardless of whether he's right about that, to quote him in support of the idea that climate change is a gigantic hoax simply takes titanic balls. Did Stephens really think that no one on this side of the Atlantic would bother to actually listen to the interview?


Net Neutrality Returns

| Tue Apr. 6, 2010 4:39 PM EDT

Net neutrality is back in the news. But not in a good way: an appellate court has ruled that the FCC has no authority to force cable companies to treat everyone's web traffic equally:

The decision, by the United States Court of Appeals for the District of Columbia Circuit, specifically concerned the efforts of Comcast, the nation’s largest cable provider, to slow down customers’ access to a service called BitTorrent, which is used to exchange large video files, most often pirated copies of movies.

After Comcast’s blocking was exposed, the F.C.C. told Comcast to stop discriminating against BitTorrent traffic and in 2008 issued broader rules for the industry regarding “net neutrality,” the principle that all Internet content should be treated equally by network providers. Comcast challenged the F.C.C.’s authority to issue such rules and argued that its throttling of BitTorrent was necessary to ensure that a few customers did not unfairly hog the capacity of the network, slowing down Internet access for all of its customers.

The BitTorrent issue is probably not the best way to understand the real problem here. After all, Comcast has a legitimate interest in making sure that traffic runs smoothly on its network, and throttling bandwidth hogs might be a reasonable way to do that. Rather, the problem is that once you lose the general principle, the next likely step is a lot less benign. Matt Steinglass of the Economist explains:

The writers at this blog don't really care about today's appeals court ruling, which concluded that the FCC lacks authority to regulate net neutrality. Why should we? The paper will pay whatever Comcast or any other connectivity provider charges to make sure our bytes get out to the masses at a reasonably high speed. At least, we think it will. Unless the Financial Times or Forbes offers more. Then the magazine will have to ante up, or face discriminatory second-class service. Perhaps Comcast will start demanding "ultra business elite" fares on our packets if we expect them to reach that last mile just as fast as those from the FT. Then, of course, they might offer the FT the Sapphire Express rate on their packets, with an absolute guarantee that packets will arrive faster than the competition.

As much as such services are worth to us, they'd obviously be worth vastly more to Bloomberg or Dow Jones. A guarantee that time-sensitive financial information will arrive milliseconds ahead of the competition can be worth billions when you're trying to move markets. How could a last-mile connectivity provider possibly explain to its shareholders a decision not to take advantage of this opportunity, to offer "priority packet service" to time-sensitive information companies and induce them to engage in a bidding war?

I've long thought that broadband suppliers have at least half a case to make against pure net neutrality. There really are certain services, such as on-demand video streaming, that require lots of bandwidth and extremely reliable delivery. Charging extra for that is pretty defensible.

But where do you draw the line? Historically, when common carriers are allowed to discriminate, the result is pretty disastrous for everyone except the folks who currently dominate their market. So if you have a startup search company that outperforms Google, but only if it's as fast as Google, well, what are the odds that Google won't pay to make sure that its service is always faster than yours? Sure, their motto is "Don't be evil," but who knows if they'll still consider that kind of thing evil when the crunch comes?

Not me. All I know is that a free and open internet has worked pretty well, and we abandon it at our peril. With any luck, then, today's court ruling will actually be good news because it will spur Congress to do something legislatively instead of simply relying on FCC rulemaking. Internet providers ought to have a procedure they can go through to petition for tiered service for specific applications, but equal access should always be the default. They should be allowed to diverge from that only occasionally, only under specific conditions, and only after plenty of public comment. Markey-Eshoo is probably a pretty good place to start.

The American Dream

| Tue Apr. 6, 2010 12:53 PM EDT

Yesterday I linked to Mark Gimein's piece suggesting that recent stability in the housing market was a mirage and prices still have a ways to fall. Today, via Felix Salmon, a survey from Fannie Mae suggests that a surprising number of people don't believe this:

Hey, it's always a good time to buy a house! Now, in fairness, there may be parts of the country where housing prices have retreated to their long-term trend rates and it really is an OK time to buy a house. But I'm pretty sure it's not 64% of the country. Regardless, nearly three-quarters of the people in the survey thought that home prices were unlikely to fall over the next year.

As Felix says, the notion that "housing prices never go down" has a strong hold on the American imagination. If the last few years haven't taught people otherwise, I'm not sure what will.

Anyway, lots of good stuff in the survey, and Felix has loads of good comments too. It looks like owning your own home is still the American dream.

POSTSCRIPT: And speaking of that, it's an excuse to link to a part of my appearance on Bill Moyers' show in January that I'm surprisingly proud of. At the end of the main show, a producer suddenly told us that she wanted to tape us giving an answer to the question, "What's the future of the American dream?" I had about 10 seconds to think about this before there was a camera in my face, and yet I somehow managed to provide an almost sensible answer anyway. You try it! Ten seconds, starting now.

Unfortunately, my answer started with "I think probably the American Dream in the future is going to be maybe a little bit less about owning a home." It might have sounded sensible at the time, but I guess I was dead wrong.

Responding to Recession

| Tue Apr. 6, 2010 12:08 PM EDT

Mark Thoma is really discouraged about our response, or lack thereof, to the Great Recession:

When the crisis hit, we needed fiscal policy right away. Given the lags between changes in policy and actual effects on the economy, which were known to be lengthy, and given that monetary policy was not going to be enough, there was no time to "wait and see" (as many people I respect were calling for). But the reality is that fiscal policy didn't get put into place until much, much later, far too late to stop the worst of the downturn (and it wasn't big enough anyway). The way too slow policy process, and the way too small policy that came out of it, was frustrating to watch.

....This crisis has taught me that policy of that magnitude is nearly impossible to put in place based upon what looks to be happening, i.e. before the recession actually occurs. There must be clear evidence that a severe recession is actually underway before policy will be considered. Unfortunately, by that time it's too late to prevent the worst part of the downturn.

I'd say Mark is being too optimistic here. Sure, massive stimulus programs are impossible to put in place before a recession occurs. That's really not surprising. But this time around we didn't respond forcefully enough even after there was abundantly clear evidence not just that a recession was underway, but that the mother of all recessions was underway. There was just too much political resistance from an implacable cabal of Republican ideologues, misguided deficit hawks, and weak-kneed Democrats. The lesson I've learned from all this is to be a little more sympathetic toward all those folks in the past (or in other countries) who we deride for not responding forcefully enough to an economic downturn on their watch. It's a lot easier now for me to understand what they were up against.

Privacy and Control

| Tue Apr. 6, 2010 11:42 AM EDT

Bruce Schneier writes today about the meaning of privacy:

To the older generation, privacy is about secrecy. And, as the Supreme Court said, once something is no longer secret, it's no longer private. But that's not how privacy works, and it's not how the younger generation thinks about it. Privacy is about control. When your health records are sold to a pharmaceutical company without your permission; when a social networking site changes your privacy settings to make what used to be visible only to your friends visible to everyone; when the NSA eavesdrops on everyone's e-mail conversations — your loss of control over that information is the issue. We may not mind sharing our personal lives and thoughts, but we want to control how, where and with whom. A privacy failure is a control failure.

....You can see these forces in play with Google's launch of Buzz. Buzz is a Twitter-like chatting service, and when Google launched it in February, the defaults were set so people would follow the people they corresponded with frequently in Gmail, with the list publicly available. Yes, users could change these options, but — and Google knew this — changing options is hard and most people accept the defaults, especially when they're trying out something new. People were upset that their previously private e-mail contacts list was suddenly public. A Federal Trade Commission commissioner even threatened penalties. And though Google changed its defaults, resentment remained.

I agree, even though I suppose I qualify as part of the "older generation" these days. I remain cranky about loyalty card operations, for example, because I don't really want my grocery store selling detailed information about my buying habits to anyone willing to cough up a few pennies per name for a database rental. Likewise, it's why I still don't use Gmail, even though it would be pretty handy in a lot of ways. Every time I think about switching over to my Gmail account, I stop to wonder if I really want Google to have access to all that sweet, sweet information. Yes, I know: they'll never, ever use it for anything I don't want them to. Which might be true. Until, maybe, they get a new CEO who decides they've been operating in the dark ages, or they decide that what they really meant was that they'll never let anyone else use it. Or they decide it's OK to share it in aggregate as long as they're pretty sure there are no personally identifying traits in the data. Or something. And then I back off. Basically, I just don't trust them. That storehouse of email data is just too tempting a target, and I'm not 100% convinced that it will be as private tomorrow as it is today.

In other words, I'm a crank. Too late to change that now, though.

Copenhagen Three Months Later

| Mon Apr. 5, 2010 7:48 PM EDT

Is Copenhagen working? Not the city — which is working just fine, I assume — but the Copenhagen Accord reached last December. At the time, it seemed like a bust: the initial hope had been to get a legally binding global agreement to seriously cut greenhouse gases, but as time went by hopes diminished until, finally, in a chaotic last-minute scramble that featured various world leaders wandering from room to room while the Danish hosts desperately tried to get something in writing in the face of opposition from just about every corner, the exhausted conference made only a "decision to note" a short document that was mostly a blank numbered list. Those blanks were intended to be filled in later with pledges for GHG reductions.

At the time, all those blanks seemed emblematic of a historic failure. But now that the dust has settled and some of them have been filled in, how are we doing? There has been a flurry of reports recently about this, but no firm conclusion. First up is Kevin Parker of Deutsche Bank, who released a report last week saying that the Copenhagen conference, far from being a failure, had produced "the highest number of new government initiatives ever recorded [...] in a four-month period."

Fine. But how does that translate into GHG reductions? Trevor Houser of the Peterson Institute takes a crack at an answer. After adding up the various pledges made so far, and then assuming some additional mitigation from "international finance," he figures that Copenhagen has produced pledges totaling something between 4.17 and 7.29 gigatons of CO2e by 2020. That's a reduction of 7-13% compared to business-as-usual (BAU).

Fine again. But how much of that represents new pledges? Andrew Light and Sean Pool of CAP provide the guesstimate shown on the right. Prior to Copenhagen they figure the world had already agreed to reductions in the range of 3.6-9.0 gigatons. After Copenhagen that went up to 5.0-9.2 gigatons. If you optimistically assume that the real number will be halfway between the low and high estimates, it means that pledged reductions went from 6.3 gigatons to 7.1 gigatons. That's 0.8 gigatons better, or about 1.5% of total estimated GHG emissions in 2020.
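The midpoint arithmetic behind that comparison is easy to check. In the sketch below, the pledge ranges are the CAP figures quoted above; the business-as-usual total for 2020 is an assumption of mine, back-solved from the "about 1.5%" claim rather than taken from any of the reports:

```python
# Midpoints of the Light and Pool pledge estimates (gigatons of CO2e by 2020).
pre_copenhagen = (3.6, 9.0)   # pledged reductions before the accord
post_copenhagen = (5.0, 9.2)  # pledged reductions after the accord

pre_mid = sum(pre_copenhagen) / 2    # optimistic midpoint: 6.3 Gt
post_mid = sum(post_copenhagen) / 2  # optimistic midpoint: 7.1 Gt
gain = post_mid - pre_mid            # genuinely new pledges: 0.8 Gt

# Assumed total 2020 emissions under business-as-usual (Gt CO2e);
# roughly 55 Gt is consistent with the 1.5% figure, but is my guess.
bau_2020 = 55.0

print(f"new pledges: {gain:.1f} Gt, or {gain / bau_2020:.1%} of BAU")
```

On those assumptions the accord's net new contribution comes out to 0.8 gigatons, or about 1.5% of projected 2020 emissions — which is why the gain looks so small.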

Obviously this gets us closer to our goal of preventing a catastrophic rise in temperature over the next several decades. But not a lot closer. And even these numbers have to be taken with a grain of salt. The United States, in particular, is a wild card, since our "pledge" is meaningless unless Congress adopts it and then passes legislation designed to enforce it. Pledges from developing countries are slippery too: they aren't for hard reductions from a base year, but for reductions either in the amount of GHG emitted "per unit of economic growth" or "below BAU." Both are necessarily pretty fuzzy. What's more, the reductions attributed to "international finance" are sort of dodgy, and the assumption that countries will actually beat their low-end estimates is pretty optimistic.

In other words, Copenhagen still doesn't look very successful to me. Light and Pool give this the best spin possible:

One good outcome of Copenhagen is that the accord is still a work in progress. Our calculations of what can be achieved by current pledges under the accord are not final. They can still be improved. It doesn’t make sense to worry that the commitments made so far put us on a disastrous pathway to a world 3, 4, or more degrees warmer. That would only be a legitimate worry if the Copenhagen Accord had been finalized last December as a legally binding document at the current level of commitments. Instead, we still have time to use the accord to get us to a safer world.

I don't think it would have occurred to me to think of it this way. But I sure hope they're right.


Coffee Conservatives

| Mon Apr. 5, 2010 4:15 PM EDT

A few days ago I heard the term "coffee conservative" for the first time. I didn't get it until it was explained to me. And now, here's the Raleigh News & Observer to explain it for the rest of us:

To join the Coffee Party in Raleigh, you can't be a screamer, a name-caller, a loud-mouthed zealot or somebody whose idea of politics translates to jabbing a sign in the air, red in the face. All you need are some manners, a good listening ear and a caffeine jones.

Inside a month, this politeness-first political movement has jumped from one meeting at the Hillsborough Street Cup A Joe to five coffee chats scattered across the Triangle. Nationwide, the Coffee Party USA has drawn nearly 200,000 supporters, sipping java and talking turkey in 47 states.

A coffee conservative, then, is a conservative who's not a tea partier. Someone who remains interested in actual policy and doesn't feel the urge to rant tirelessly about the decline of the West and the imminent tyranny that Barack Obama is bringing down on us. It's your phrase of the day.

Breaking Up Is Hard To Do

| Mon Apr. 5, 2010 2:09 PM EDT

Steve Randy Waldman argues that obsessing over leverage and capital requirements is a lost cause:

Bank capital cannot be measured. Think about that until you really get it. “Large complex financial institutions” report leverage ratios and “tier one” capital and all kinds of aromatic stuff. But those numbers are meaningless. For any large complex financial institution levered at the House-proposed limit of 15×, a reasonable confidence interval surrounding its estimate of bank capital would be greater than 100% of the reported value. In English, we cannot distinguish “well capitalized” from insolvent banks, even in good times, and regardless of their formal statements.

....Regulation by formal capital has a proud and reasonably successful history, but has been rendered obsolete by the complexity of modern financial institutions. The assets and liabilities of a traditional commercial bank had straightforward, widely acceptable book values. For the corner bank, discretionary modeling mattered only in setting credit loss reserves, and the range of estimates that bank officers, external auditors, and regulators would produce for those reserves was usually pretty narrow (except when all three colluded to fake and forbear in a general crisis). But model complexity overwhelms and destroys regulatory capital as a useful measure for large complex financial institutions. We need either to resimplify banks to make them amenable to the traditional approach, or come up with other approaches more capable of reining in the brave new world of banking.

I'm way out of my league arguing about this, but I have a good reason — about which more below. First, though, the argument.

Steve is, of course, right that "regulatory capital" is a surprisingly ephemeral concept. It's hard to measure and it's hard to know even what to measure. (Read his whole post to get the entire gruesome story.) Hell, it's not even entirely clear what the real purpose of bank capital is. And yet, I'm not so sure that measuring capital adequacy is quite as doomed an enterprise as Steve thinks. For example: this hasn't been making the rounds lately, but last year there was a little boomlet in discussion about the value of tangible common equity as a measure of bank solvency. Why? For exactly the reason Steve writes about. Regulatory capital, as it's currently defined, failed miserably during the credit crash of 2008. Basel II was a disaster, as was the U.S. near-equivalent that allowed banks to judge risk using their own internal models. And as Steve points out, Lehman Brothers was, technically, very well capitalized the day before it collapsed in a heap.

But TCE is a much simpler concept, and much easier to measure. It is, if you want to be simplistic about it, much closer to a measure of "actual money." And over the decade from 1995 to 2005, TCE ratios dropped dramatically at large banks. I don't know exactly what Lehman's number was before it imploded, but I think it was somewhere in the vicinity of 2.0. That's probably around half of what it was a decade earlier, and quite possibly a third to a quarter what it should have been given the greater risk and complexity of modern banks. If regulators had focused more on the steady drop in TCE during the boom years, the meltdown of 2008 might have been substantially ameliorated.
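For readers who haven't seen the measure before, here's what a TCE calculation looks like. Every balance-sheet number below is invented purely for illustration; only the formula reflects the standard definition (tangible common equity is common equity minus goodwill and other intangibles, divided by tangible assets):

```python
# Hypothetical bank balance sheet, in $ billions. These figures are
# made up to produce a ratio in the ~2% range discussed above; they
# are not Lehman's (or any real bank's) numbers.
total_assets = 600.0    # total reported assets
intangibles = 30.0      # goodwill and other intangible assets
common_equity = 42.0    # common shareholders' equity (excludes preferred)

tce = common_equity - intangibles             # 12.0
tangible_assets = total_assets - intangibles  # 570.0
tce_ratio = tce / tangible_assets             # about 2.1%

print(f"TCE ratio: {tce_ratio:.1%}")
```

The appeal is exactly what the post says: every input is a hard balance-sheet line item, so there's far less room for discretionary modeling than in risk-weighted "tier one" capital.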

Now, that's hardly the whole picture. Capital adequacy is a ratio, and even if regulators focus more heavily on TCE they still have a problem with forcing banks to properly value the asset side of their balance sheets. Steve goes into all the reasons that's hair-raisingly difficult, but there are ways of making that simpler too. What's more, there are other rules that, although they don't directly address capital adequacy, have a big effect on it. Reining in off-balance-sheet vehicles is an obvious one, as are rules that put limits on shadow funding sources. More transparent derivative trading would help too. Put all this stuff together and it would go a long way toward making the entire banking system safer. Not perfect, but better.

So why am I wading into this argument in the first place? Because Steve's take is that since trying to govern leverage is hopeless, the only way to make the financial system safer is to radically simplify banking and break up big banks. But that's a counsel of despair. If you think capital adequacy is tough to regulate, what makes you think that we can radically simplify the entire structure of the modern banking system? And if banks will fight tooth and nail to oppose limits on leverage (and they will), what makes you think they'll be any less tenacious about resisting efforts to break them into little pieces?

My take is that that's hopeless. There are things we can do to make banking simpler, but there's just no way that we're going back to the 70s. Not. Gonna. Happen. And the chances that Congress — which is barely willing to approve even watered-down consumer protections — will break up banks the way Teddy Roosevelt broke up Standard Oil? Forget it.

It's useless to declare a problem unsolvable and then suggest instead that we tackle a problem that's even more unsolvable. I don't have much hope that Congress and the Fed are going to crack down on leverage in a way that's anywhere near as broad or as strict as I'd like them to, but there's at least a chance of making progress on this front. If we throw up our hands and declare it impossible, we're effectively giving up on financial reform entirely.

The Calm Before the Storm?

| Mon Apr. 5, 2010 12:32 PM EDT

So how's the housing market looking? Well, prices seem to have stabilized and foreclosure rates are down. Hooray! But Mark Gimein warns that the news is not actually as happy as the realtors' PR machine would like you to believe:

Consider, for instance, California. In the first quarter of 2009, according to the Mortgage Bankers Association, banks started foreclosures on 2.15 percent of all mortgages (that is, roughly one in 50). In the last quarter — the latest period for which data are available — that was down to 1.34 percent, a sizable drop....

But if you conclude from this that more folks have gotten their arms around their mortgages, think again. The number of new foreclosures may have dropped, but the number of people seriously behind on their mortgages has risen — from 4.75 percent of mortgage holders all the way up to 6.93 percent, an increase of close to 45 percent....Thanks to some combination of government pressure, genuine efforts at loan modifications, and reluctance to seize houses and try to sell them in a dismal market, banks are simply letting more debtors fall behind without foreclosing.

....The Realtors' association happily reports that housing prices are rising because of tightening “inventory” — the trade term for “fewer houses for sale” — but underneath this is the scary reality that there are ever more folks seriously behind on their loans and waiting for lenders to take their houses and condos. This is something that lenders are reluctant to do because they still have no one to sell them to. The housing market looks stable only because lenders are avoiding flooding it with foreclosed properties.
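The percentage jump Gimein cites is worth a quick sanity check. Using the Mortgage Bankers Association figures quoted above:

```python
# MBA figures for California quoted in the excerpt above:
# share of mortgage holders seriously behind on payments.
before = 4.75  # percent, first quarter of 2009
after = 6.93   # percent, latest quarter with data available

pct_increase = (after - before) / before * 100
print(f"increase: {pct_increase:.0f}%")
```

That works out to just under 46%, consistent with Gimein's "close to 45 percent" — so even as new foreclosure starts fell from 2.15% to 1.34%, the pool of borrowers headed toward foreclosure kept growing.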

There's something more fundamental at work too: housing prices may have stabilized recently, but they've done so at a level quite a bit higher than their pre-bubble value. Now, I've long thought that when the housing crash is finally over, prices might actually end up somewhat higher than their historical trend rate, but even if I'm right (never a sure bet) my guess is that prices might end up 10-20% higher, not 30-40% higher, which is where they are now. Bottom line: the housing market still has a ways to fall, and when the foreclosure moratorium finally ends it's likely to spark another 20% fall in housing values. Or maybe more. The fat lady hasn't sung yet.

Strange Bedfellows

| Mon Apr. 5, 2010 12:08 PM EDT

In the LA Times today, Dean Baker has co-authored an op-ed with Kevin "Dow 36,000" Hassett. So a guy who was right about (almost) everything is paired up with a guy who's been wrong about (almost) everything. This has serious implications for the space-time continuum.