Tyler Cowen flags an article today about Young Marmalade, "the UK’s young driver specialist," which offers Britain's youth a discount on their auto insurance if they install a black box in their car that allows their driving habits to be monitored. Apparently you also have to buy your car from Young Marmalade, and it has to be a low-powered car, but it's the black box that's new.

Though not as new as you might think. I've been watching commercials for over a year touting the "Snapshot" program from Progressive Insurance, and it's pretty much the same thing. You get a black box from Progressive, plug it into your car's diagnostic port, and they track your driving habits. If you drive safely, you get a discount. If you don't drive safely — well, they say that nothing will happen at all. You'll just keep paying your usual rate. I have no idea if that's true in real life.

In any case, the whole thing is fairly crude, tracking only enough information to determine how many miles you drive and how often you make sudden stops. No GPS, no information about exceeding the speed limit, or anything like that. Just sudden stops. But how private is this information?
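Detecting a "sudden stop" from speed data is simple in principle: sample the vehicle's speed and flag any interval where deceleration exceeds a threshold. Here's a minimal sketch of the idea — the one-second sampling interval and the 7 mph-per-second threshold are my own illustrative assumptions, not Progressive's actual criteria:

```python
# Hypothetical hard-braking detector. The sampling interval and the
# deceleration threshold are illustrative assumptions, not the values
# any real insurer's black box uses.
def sudden_stops(speeds_mph, threshold_mph_per_s=7.0, interval_s=1.0):
    """Return indices of samples where deceleration exceeds the threshold."""
    events = []
    for i in range(1, len(speeds_mph)):
        decel = (speeds_mph[i - 1] - speeds_mph[i]) / interval_s
        if decel > threshold_mph_per_s:
            events.append(i)
    return events

# A short trip with one hard stop (45 mph down to 20 mph in one second):
trip = [30, 35, 40, 45, 20, 15, 10, 5]
print(sudden_stops(trip))  # → [4]
```

Something this crude, plus an odometer-style mileage count, is all the Snapshot-era boxes would need — which is exactly why the early generation of devices could skip GPS entirely.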

We won’t share Snapshot data with a third party unless it’s required to service your insurance policy, prevent fraud, perform research or comply with the law. We also won’t use Snapshot data to resolve a claim unless you or the registered vehicle owner gives us permission.

That actually seems like a pretty broad exemption, so your driving habits might not be quite as closely held as you'd like. And I wouldn't be surprised if future black boxes get more sophisticated about the amount of information they collect. On the bright side, you can get a 30% discount on your insurance rate! When you consider that most of us are willing to turn over practically our entire private lives to online companies in return for a free song from the iTunes store, that's probably pretty enticing. Welcome to the future.

Suzy Khimm directs our attention today to the latest Gallup Poll about the rich and their taxes. Do they pay too much? Too little? The long-term trend is a pretty spectacular tribute to the power of repetition. After Bill Clinton raised taxes modestly on high earners in 1993, there was a big drop in the number of people who thought the rich paid too little in taxes. That makes sense. But take a look at the next two decades. Capital gains rates on the wealthy were cut in 1997 and the number went down again. In 2001 Bush slashed their taxes and the number went down again. Bush slashed their taxes a second time and the number went down — again. The incomes of the rich skyrocketed during the aughts and the number went down yet more. By 2010, after two decades of skyrocketing incomes and ever-falling taxes, the number of people who think the rich don't pay enough in taxes had dropped by over 20 points!

That's the power of the Republican message machine, and it's pretty impressive. In the latest poll, the number finally went up a bit, and I imagine that shows the power of a countermessage. A combination of the continued recession, a louder and more unified Democratic Party, and the Occupy movement were probably responsible for the blip back up. Fighting back can make a difference after all.

Last year I read an interesting piece by Susan Headden in the Washington Monthly about college placement tests:

Most Americans think of the SAT as the ultimate high-stakes college admissions test, but the Accuplacer has more real claim to the title....When students apply to selective colleges, they’re evaluated based on high school transcripts, extracurricular pursuits, teacher recommendations, and other factors alongside their SAT scores. In open admissions colleges, placement tests typically trump everything else. If you bomb the SAT, the worst thing that can happen is you can’t go to the college of your choice. If you bomb the Accuplacer, you effectively can’t go to college at all.

My college days are long behind me, so I'd never heard of the Accuplacer. But apparently if you apply to a community college, you take it. It doesn't matter if you're a straight-A student. It doesn't matter if you have half a dozen AP tests under your belt. It's off to the Accuplacer lab for you.

This might not be such a terrible thing if the Accuplacer accurately identified kids who needed help to begin college-level work, and if remedial classwork did a good job of providing that help. But Judith Scott-Clayton says today that neither of these things seems to be true:

For students with high school grade-point averages between 3.5 and 4.0, remediation rates have more than doubled (see chart below). This is not a result of high school grade inflation — the percentage of students with G.P.A.’s in this range has not changed — but is consistent with increasingly ubiquitous placement testing.

Screening seemingly prepared students for remediation is questionable for at least two reasons. First, the benefits of remediation are far from obvious: remediation has been referred to as the Bermuda Triangle of postsecondary education, because the majority of those who enter never make it out. Across several rigorous, quasi-experimental studies of the causal impact of remediation, only one found positive effects on college outcomes, while others found null to negative effects.

Second, the tests commonly used to screen for college readiness are only weakly related to college outcomes, as two recent studies by the Community College Research Center show....My own research, using data from a large urban community college system with particularly high remediation rates, estimates that one in four students assigned to math remediation could have passed a college-level math course with a grade of B or better and one in three students assigned to English remediation could have passed freshman composition with a B or better.

If all this is true, it suggests that these placement tests are a waste of time. You can't base high-stakes admissions decisions on them because they're too inaccurate for that, so instead they're used to place students in remedial classes. But that doesn't do much good either. The ones who couldn't handle college-level classes in the first place don't improve much and just end up dropping out. So instead, why not let everyone into freshman classes and just see how they do?

I suppose that might strain class capacity, but at the very least, it sure sounds as if there ought to be a GPA cutoff for these things. If students volunteer for remedial classes, or if they do poorly in their first semester, let 'em take the classes. But if they have a decent GPA, why not let them at least give college a try before sending them off to the Bermuda Triangle?

You all know what a food desert is, right? It's a place where there aren't very many supermarkets. In fact, most often it's a place with no supermarkets, and almost always it's a poor neighborhood. But why do food deserts exist? Perhaps poor neighborhoods just aren't very profitable places to open supermarkets. But poor neighborhoods usually have plenty of bodegas and little corner shops, and if those places can make money why can't a supermarket? What's more, supermarket executives have always been very cagey when they're asked about the phenomenon, which suggests there's more to it than mere profitability. It's a bit of a mystery.

But now there's another question about food deserts: Should we even care about them in the first place? For a long time, they've been a source of concern because supermarkets generally provide better access to fresh fruits and vegetables than little corner stores. Eating well is hard enough for the poor as it is, but it becomes all but impossible if they don't have convenient access to good food in the first place.

Past research has questioned whether access to supermarkets really has a big impact on either obesity or eating habits, but the evidence has remained ambiguous. However, it's getting less so thanks to several recent studies. In one, Helen Lee of the Public Policy Institute of California concluded that children in poor neighborhoods really do have greater access to fast-food chains and convenience stores than children in richer neighborhoods. But it didn't make any difference: Access to different kinds of stores didn't have any impact on weight gain among elementary-school-aged children.

Adam Drewnowski at the University of Washington has come to similar conclusions. For one thing, he says, it turns out that most people, even in poor neighborhoods, don't shop at the stores closest to their homes. What's more, in a study of various supermarkets in the Seattle area, he found that obesity rates were far more closely linked to income than anything else. "Obesity rates among supermarket shoppers closely tracked both food prices and incomes," he found, but not the kinds of food available. Shoppers at Albertson's, a low-cost chain, were far more obese than shoppers at Whole Foods, even though both provided plenty of access to fresh fruits and vegetables.

Finally, there's Roland Sturm of the RAND Corporation, who took a different approach, using self-reported data from 13,000 California children and teenagers. As reported by the New York Times, "Dr. Sturm found no relationship between what type of food students said they ate, what they weighed, and the type of food within a mile and a half of their homes…Living close to supermarkets or grocers did not make students thin and living close to fast food outlets did not make them fat."

Why does this matter? After all, efforts to make sure that poor neighborhoods have access to a wide variety of food are laudable even if they turn out not to have a big impact on obesity. But it matters nonetheless, because it affects where we focus our money and our energy. If obesity and good nutrition are our goals, we need to look elsewhere. Tackling food deserts, it turns out, probably isn't going to have much impact.

UPDATE: So where should we look? Tom Philpott has some ideas here.

Via the Financial Times, here's an interesting bit of color-coded data from HSBC Global showing the level of correlation between various asset classes. The version on the left is from 2005, and it shows about what you'd expect. Deep red means a pair of asset classes are highly correlated — that is, when one goes up or down, the other moves in the same direction — and the red square at the top left represents various U.S. stock market indexes. Unsurprisingly, when the NASDAQ goes up, so do the Dow Jones and the S&P 500. The red square next to it is European stock markets. And the red square at the bottom represents corporate and government bonds. When one kind of bond moves up or down, so do all the rest.

No surprises there. But take a look at the version on the right from April 2012. Practically everything in the top half is moving together. At the bottom right, bonds are moving together even more tightly than before. And stocks and bonds (bottom left) are moving tightly in opposite directions. There's a bit of stuff in the middle that's not correlated with anything else, but that's all.
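The underlying computation is just a pairwise Pearson correlation matrix of asset returns. Here's a toy sketch of how such a chart gets built — the return series below are invented for illustration (HSBC's actual chart covers dozens of real asset classes):

```python
# Toy correlation matrix for a few made-up "asset class" return series.
# The return numbers here are invented purely for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation: +1 means the two series move in lockstep,
    -1 means they move in exactly opposite directions."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

returns = {
    "S&P 500":    [0.02, -0.01, 0.03, -0.02, 0.01],
    "NASDAQ":     [0.03, -0.02, 0.04, -0.03, 0.02],
    "Treasuries": [-0.01, 0.01, -0.02, 0.01, -0.01],
}

# Print the matrix; a heat map just maps these numbers to colors.
for a in returns:
    row = ["%5.2f" % pearson(returns[a], returns[b]) for b in returns]
    print("%-10s" % a, " ".join(row))
```

With these made-up numbers the two stock indexes come out nearly perfectly correlated while both are strongly negatively correlated with Treasuries — the 2005 picture. The 2012 chart is what you get when almost every off-diagonal cell saturates toward +1 or -1.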

What this means is that it's hard to diversify a financial portfolio these days because everything is moving in sync. HSBC's view is that this is because the world economy is in such fragile shape that basically everything now depends on how well governments cope. If they cope well, the global economy will recover smartly and pretty much all asset classes will do well. If they cope poorly, everybody and everything is screwed. In other words, when you buy an asset these days — any asset — you're not really making a bet on the asset itself. You're making a bet on how well the governments and central banks of the world handle the current economic mess. No wonder everyone's so nervous. There's no escape from the Great Recession.

According to the Congressional Budget Office, the number of people receiving SNAP benefits (aka food stamps) will continue rising through 2014. Newt Gingrich will no doubt use this as an occasion to snark yet again about Barack Obama being a "food stamp president," but among non-buffoons the takeaway from this is that the Great Recession is still nowhere near over. Technically it might have ended three years ago, but out in the real world things are still mighty fragile for a lot of people.

Over at the Atlantic, James Warren brings us shocking news from a recent meeting of the Midwest Political Science Association: sometimes people cheat on internet surveys. When they correctly say that John Roberts is Chief Justice of the Supreme Court, it might only be because they googled it or asked their daughter-in-law:

[Brad Gomez of Florida State University] noted the tendency for a relatively small but apparently rising number of survey respondents to cheat on Internet and mail surveys. When it comes to the Internet, "It's pretty clear people are cheating," especially when they at first don't know an answer. Robert Luskin of the University of Texas referred to "cheating on steroids" when it comes to both Internet and mail surveys, with respondents perhaps googling a response or asking a nearby family member for help. "You're not necessarily getting the respondent who was randomly selected," he said.

Via Jonathan Bernstein, Lynn Vavreck calls a foul. She did a study last year that recruited subjects at the CBS Research Facility in the Las Vegas MGM Grand Hotel. Half were interviewed face-to-face, and half answered questions on a computer connected to the internet:

Of the 505 people who completed the survey on a computer, only 2 people cheated by looking the answers up on-line. That’s less than one-half of one percent of the respondents....Plenty of people had a hard time answering our fact-based questions, and they knew they were on the Internet, yet very few of them took the time to look up the answers — in fact, almost none of them.

....In the spirit of the popular television show Myth Busters, consider this myth busted!

Hmmm. What do you think of this? Vavreck is right that if you want to claim there's cheating going on, you need to produce some evidence. At the same time, I don't find her study very convincing. There's a big difference between a formal setting like hers and the more relaxed confines of your own house and your own computer. I'd expect a whole lot less cheating in her study than in real life. Hell, I'd probably cheat more at home.

Still, I'm curious what evidence Gomez and Luskin have that cheating is on the rise. As near as I can tell, Americans have an abysmal knowledge of just about everything, and it seems to be abysmal on nearly every survey ever done. What's more, most of us don't seem to give enough of a damn about this stuff to even bother cheating. Still, I think we could find out for sure. Vavreck could do a study where half the respondents were interviewed in person (or over the phone) and the other half got questionnaires via mail or the internet. As long as all of them were recruited over the internet, and chosen randomly, better scores from the internet half would suggest cheating, wouldn't they?

In an otherwise tedious case about milk regulation, Appellate Judge Janice Rogers Brown decided to give her inner Randian free rein last week. "Cowboy capitalism" is dead, she moaned, and courts are at fault for not slapping down legislatures both local and national that infringe on economic rights. It was a pretty remarkable performance, all the more so since it really had nothing to do with the case at hand. So what gives? I think Dahlia Lithwick gets it right here:

There’s one other point worth making, before we leave Judge Brown to her open-mic libertarian musings. She is, beyond any doubt, apt to appear on any short list for Mitt Romney’s choice to replace any of the four Supreme Court Justices who are currently in their 70s, some of whom will be 80 by the 2016 elections. In that light, this concurrence looks less like a judicial opinion than a job application.

I have written before how ironic it is that a liberal jurist can be disqualified from a judicial confirmation hearing for expressing a single progressive idea in a law review article, whereas when it comes to conservative judicial nominees extreme and full-throated ideological exhortations are usually an added bonus. For Brown, the choice to write an opinion eviscerating New Deal worker and health protections at precisely the moment these issues are burning up cable television and Tea Party rallies is just smart politics. It’s hard to imagine a liberal shortlister attempting the same and surviving a Supreme Court confirmation bid. Or a confirmation bid of any sort, really.

Yep. Both Sonia Sotomayor and Elena Kagan had to practically disavow any settled opinions on anything, and even so got plenty of rabid opposition from gun groups, abortion groups, and other right-wing groups convinced they saw a glimmer of a shadow of a penumbra of liberal thought in some choice of words a dozen years ago or an ambiguous decision handed down that touched on some hot button issue. But Brown? She just lets it rip. Apparently she's not worried that it will hurt her at all with a future President Romney.

I have a feeling this might become a trend. Conservative judges have been feeling less and less restraint over the past few years about expressing their small-government bona fides, and the recent oral arguments over Obamacare were a kind of high court permission to let politics roam freely in judicial proceedings. I suspect a lot more lower court judges are going to take advantage of that.

UPDATE: I've gotten some pushback on this from various quarters, most of it fair. First, everyone in comments is right that Brown is 62, much too old to be a serious contender for a Supreme Court appointment these days. Second, she was kinda sorta under consideration for the Supreme Court in 2005, but was considered too outspoken to get the job. Third, liberal judges have made similar comments in the past — though I think these comments haven't been quite as broad or radical as Brown's.

So, yeah, most likely Brown knows she's too old for a promotion, which means she's free to say whatever she damn well pleases. That's not necessarily praiseworthy, but it's probably not a job application either.

Via Steve Benen, here is Mitt Romney's view on the current impasse over extension of the Violence Against Women Act:

Andrea Saul, a spokeswoman for Mr. Romney, said in an e-mail, “Gov. Romney supports the Violence Against Women Act and hopes it can be reauthorized without turning it into a political football.” But she declined to specify which version he supported.

Neither presidents nor presidential candidates have to weigh in on the minutiae of every single legislative tiff. But Romney is taking this to cartoonish extremes these days. He's desperate not to anger the tea partiers who still don't fully trust him, but he doesn't want to do himself any further damage with independents either. So he's punting on everything.

How long can he keep this up? This summer the Obama campaign is going to try to portray Romney as a guy who doesn't really believe in anything, and he sure seems to be going out of his way to make it easy on them.

In a much-cited blog post, Steve Randy Waldman says that our fiscal and monetary response to the Great Recession was weak because, as it turns out, economic growth isn't really our highest priority. We might say it is, but our actions speak louder than our words:

The preferences of developed, aging polities — first Japan, now the United States and Europe — are obvious to a dispassionate observer. Their overwhelming priority is to protect the purchasing power of incumbent creditors....These preferences are reflected in what the polities do, how they behave. They swoop in with incredible speed and force to bail out the financial sectors in which creditors are invested, trampling over prior norms and laws as necessary....They do not pursue monetary policy with sufficient force to ensure expenditure growth even at risk of inflation.

....This preference is not at all difficult to understand. The ailing developed economies are plutocratic democracies. “The people” do have power, but influence is weighted in a manner correlated with wealth. The median influencer in these economies is not a billionaire, but an older citizen of some affluence who has mostly endowed her own future consumption. She would like to be richer, of course. But she is content with her present wealth, and is panicked by the prospect of becoming poorer. For such a person, the depression status quo is unfortunate but tolerable. The risks associated with expansionary policy, on the other hand, are absolutely terrifying.

I have a hard time buying this. The bailout of the banks was way overdetermined. Everyone agreed that a banking collapse would be catastrophic and had to be avoided at all costs. You can argue that we went about it the wrong way, that maybe temporary receivership would have been a better policy for some of the big banks, but it's hard to argue that the mere decision to rescue the banking system favored one particular constituency.

And Steve's "median influencer" is problematic too. I'm willing to buy the idea that the upper middle class in general is the single biggest influence on our political system, but that's not the same thing as "an older citizen of some affluence." It's in a similar ballpark, but it's not the same thing. And the wealthy and the broad middle class are significant influences too.

But put that aside for the moment. It's not the biggest problem here. Rather, it's Steve's claim that the median influencer — whoever it is — "is panicked by the prospect of becoming poorer," which explains our financial system's rabid opposition to inflation higher than 2%. This claim might have made sense 50 years ago, when many of the affluent elderly were coupon clippers. But today it doesn't make sense even for them, and it certainly doesn't make sense for anyone else. Hardly anybody literally lives on a fixed income these days. The elderly middle class lives on Social Security, which is indexed to inflation. The broad middle class has its retirement savings invested in 401(k) funds, which do better when the economy does better. The wealthy have their money invested in a variety of sophisticated vehicles, all of which are hedged against inflation in one way or another. We simply don't live in a world of fixed returns anymore. Unless you're a hedge fund quant making some specific kind of inflation play, there are very few people today who have any reason to fear higher inflation, especially of the moderate, temporary sort that the Paul Krugmans and Scott Sumners of the world advocate.

So....I'm having trouble with this. There's no question that our financial elites are pretty fiercely anti-inflation. And there are certainly a few constituencies who rationally fear inflation: holders of fixed-rate mortgages, small savers limited to the interest rates at their local credit union, and (possibly) those who are heavily invested in low-yielding corporate bonds or muni bonds. But are those really the people influencing Fed policy? I'm not seeing it.

In the end, I guess this is really a request for Steve to write in more detail about this. It's worth figuring out who exactly is influencing Fed policy, as well as central bank policy everywhere else in the world. But central banks have always been pretty rabidly anti-inflation, so I'm not sure you can pin the blame on something specific to the "developed, aging polities" of today. After all, William McChesney Martin didn't much like inflation 50 years ago, and Chinese central bankers don't much like it now. But why?