Seven Times the Supreme Court Got Its Facts Wrong

A ProPublica review adds fuel to a longstanding worry about the nation’s highest court: The justices can botch the truth, sometimes in cases of great import.

Members of the US Supreme Court pose for a group photograph at the Supreme Court building in Washington, DC. Olivier Douliery/AP


In 2007, a group of California Institute of Technology scientists working at NASA’s Jet Propulsion Laboratory filed suit against the venerated space agency. Many of the scientists had worked on NASA missions and research for years as outside employees. As part of efforts to tighten security measures after 9/11, in 2004 NASA started requiring outside workers to submit to the same kind of background checks used for federal employees, including questions about drug use. The scientists, some of the nation’s best and brightest, protested and resisted for years, and finally went to court to argue that the checks violated their privacy rights.

The case ultimately made it to the US Supreme Court, where, in 2011, the justices unanimously sided with NASA. Justice Samuel Alito, who wrote the opinion, made a central point of noting that such background checks had long been commonplace in the private sector. Alito even cited a very specific statistic: 88 percent of all private companies in the country conduct such checks, he wrote.

It was a powerful claim in a decision with real consequences for American workers. It was also baseless.

Alito, it turns out, had borrowed the statistic from a brief filed in the case by the National Association of Professional Background Screeners. ProPublica asked the association for the source of its statistic. The association offered a variety of explanations, none of which proved true, and ultimately conceded it could not produce evidence that the 88 percent figure was accurate or say where it came from.

The decisions of the Supreme Court are rich with argument, history, some flashes of fine writing, and, of course, legal judgments of great import for all Americans.

They are also supposed to be entirely accurate.

But a ProPublica review of several dozen cases from recent years uncovered a number of false or wholly unsupported factual claims.

In all, ProPublica found seven errors in a modest sampling of Supreme Court opinions written from 2011 through 2015. In some cases, the errors were introduced by individual justices apparently doing their own research. In others, the errors resulted from false or deeply flawed submissions made to the court by people or organizations seeking to persuade the justices to rule one way or the other.

The review found an error in a landmark ruling, Shelby County v. Holder, which struck down part of the Voting Rights Act. Chief Justice John Roberts used erroneous data to make claims about comparable rates of voter registration among blacks and whites in six southern states. In another case, Justice Anthony Kennedy falsely claimed that DNA analysis can be used to identify individual suspects in criminal cases with perfect accuracy.

Some of the mistakes were technical or arguably minor, and it is difficult to determine with certainty if they played a vital part in the court’s reasoning and final judgments.

But the NASA case was not the only one where a mistake involved a core aspect of the court’s ruling on an issue with widespread ramifications.

In 2013, the court issued a unanimous ruling in a case involving Fourth Amendment protections against unreasonable searches by the police. In the case, the court determined that when a drug-sniffing dog signals it smells an illegal drug from outside of a car, police have probable cause to search the entire car without a warrant. Justice Elena Kagan, who wrote the opinion, took on one of the central fears of those worried about innocent people being caught up in such police searches.

Kagan argued that the risk of “false positives”—instances in which a dog might mistakenly identify the presence of drugs—should be based on whether the dogs had been formally certified by police groups as reliable in their performance. She cited material from the Scientific Working Group on Dog and Orthogonal Detector Guidelines to support the court’s position.

However, none of the largest certification groups actually test for the risk of false positives. ProPublica reviewed standards and testing records and interviewed several experts on drug-sniffing dogs, including the head of the working group Kagan cited. He said her confidence in the certification process was misplaced.

“It’s important that it’s not just taken at face value to say just because the dog’s certified with a national organization that means they’re reliable,” said Kenneth Furton, the chairman of the working group, who now is provost at Florida International University.

Karen L. Overall, an applied animal behaviorist who served for a time as the co-chair of the dog working group, said she resigned from the group because she couldn’t endorse guidelines that didn’t insist on statistically measuring the reliability of police dogs.

ProPublica provided its findings on the seven mistakes to the court in early September and asked for interviews with Roberts and the other four justices who wrote majority opinions containing factual errors. The justices declined the requests and did not respond to any of the specific reporting. In an email, a spokeswoman said that, by policy, the court “does not comment on its opinions, which speak for themselves.”

In interviews, former law clerks for Supreme Court justices, including some who argue cases before the high court today, said any errors were surely accidental, produced by talented and devoted people doing complex work under daunting circumstances.

“The court, like any institution, never wants to get it wrong,” said Erin Murphy, a former law clerk for Roberts and now a lawyer with the firm Kirkland & Ellis.

The court’s rulings have been fiercely debated from the start of the republic, but most of the scrutiny has been aimed at justices’ legal reasoning or political bent. The risk of errors—that could cause embarrassment or have lasting legal consequences—has occasionally prompted calls for action. Thirty years ago, Kenneth Culp Davis, a leading legal scholar, went on a speaking tour calling on the court to establish its own research operation.

“When the court lacks the needed information, it usually makes guesses,” Davis told an audience at the University of Minnesota in 1987. “Much of our law is based on wrong assumptions about legislative facts.”

Supreme Court opinions contain two types of facts: “adjudicative” facts, which concern the parties and events in the specific case before the court, and “legislative” facts, which are assertions about the outside world and how real life works.

Former Justice Harry Blackmun conceded in a 1984 opinion that there were limits to the court’s ability to be 100 percent right when it came to real-world facts.

“Like all courts,” Blackmun wrote, “we face institutional limitations on our ability to gather information about ‘legislative facts.’”

Still, this type of information can be important, even decisive, to rulings. In a 2015 opinion, Alito upheld an Arkansas inmate’s right to grow a beard while in prison in adherence to his Islamic faith. Alito accurately wrote that the inmate’s belief that his religion called upon him to wear a beard was common to several schools of Islam, which further justified legal protections for the practice.

At least five previous errors in Supreme Court rulings have become public during the past decade, all involving legislative facts.

In a 2002 opinion, Kennedy wrote that untreated sex offenders commit new sex crimes at a startling rate, “estimated to be as high as 80 percent.” The statistic came from a magazine article, which did not provide a source. The article’s author has admitted to legal scholars the number was a guess. Studies of sex offenders indicate the true rate is a small fraction of the one Kennedy used.

A 2008 decision, also by Kennedy, said that within the US criminal justice system, only six states allowed death sentences for defendants convicted of rape committed against a child. That was true, but incomplete. Such crimes were also punishable by death in military courts, under a law passed by Congress two years earlier. The author of a military law blog exposed the omission days after the opinion came down.

Perhaps the most alarming of the previously exposed inaccuracies came in an immigration case, Nken v. Holder. In 2008, the solicitor general’s office, which represents federal agencies before the Supreme Court, misled the justices about a key fact. The office said in a legal brief that the government routinely brings back immigrants it has deported if they later win their cases to stay in the US. The court’s opinion repeated the claim.

Records obtained by immigrant legal advocates at New York University show the government does not readmit people who’ve been wrongly deported, and the solicitor general’s office knew this.

Since certain parties like the solicitor general’s office have special standing with the court, errors in their arguments are more likely to be repeated in justices’ opinions, said Nancy Morawetz, the New York University law professor who exposed the falsehood in Nken. “It’s a highly imperfect process.”

In 2003, the court ruled in another matter, Demore v. Kim, that the federal government can detain certain immigrants facing deportation without bail for the entire time courts take to consider their cases. Former Chief Justice William Rehnquist described immigrants’ time in cells as “brief,” lasting four months on average and sometimes less as judges worked through the appeals.

The truth was far different. When federal officials correctly analyzed their data, the average time immigrants spent in detention was nearly 13 months, triple what Rehnquist wrote. The solicitor general notified the court of the inaccuracies in August 2016, yet to date the error remains in the official opinion on the court’s website.

ProPublica decided to examine the court’s record for factual accuracy after the erroneous sex offender statistic became news earlier this year.

In the course of our examination, ProPublica vetted 83 majority opinions randomly selected from a five-year period, 2011 through 2015, and focused only on legislative facts. Just 24 of the 83 opinions contained such facts.

The research, of course, was far too limited to calculate an error rate for the court. That said, there were plenty of mistakes—seven in 24 opinions with legislative facts.

Our review showed justices appointed by both Democratic and Republican presidents had inaccuracies in their opinions. Three of the mistakes were made by Kennedy, long considered the court’s swing vote.

The chance that justices might rely on suspect material presented to them as part of cases has exploded in recent years. A virtual industry now exists to funnel information to the court through filings called amicus briefs. These come from people or groups that are not parties in a case before the court, but advocate for justices to rule a certain way, frequently by offering their special expertise on the subject at hand. Amicus briefs can be helpful to justices, who need to master an imposing host of issues. They are also risky because courts do not always scrutinize the briefs for accuracy.

Lawyers can introduce inaccuracies as evidence, especially on complex subjects, said Bryan Gowdy, a Florida appellate attorney who has argued before the Supreme Court. Such errors might be more understandable if the stakes weren’t so great.

“When you’re in a case where Betty Smith is suing the Jones Pharmaceutical company, and there’s a mistake like that, well that affects Betty Smith and the Jones Pharmaceutical company,” Gowdy said. “But when you’re at the US Supreme Court and they make a mistake like that, it affects the entire country.”

Through most of the Supreme Court’s history, justices used statutes and legal precedents in their rulings, leaving out facts about the outside world. That shifted a century ago when Louis Brandeis joined the court, bringing a philosophy that judges needed to consider how life works in addition to what the law says. Brandeis, continuing a practice he had pioneered as a lawyer and scholar, regularly reviewed studies he found himself and included their results in opinions.

But adding these kinds of facts introduces the risk of errors. As a result, Davis, the legal scholar, floated the idea of the Supreme Court creating its own research team, modeled on the Congressional Research Service. The service is part of the Library of Congress and has a staff of trained researchers that pursues answers to lawmakers’ questions.

The justices did not follow Davis’ advice.

The court’s law library is a highly regarded and invaluable resource for justices, said Allison Orr Larsen, law professor at the College of William & Mary and former clerk for Justice David Souter. But it is not built to fact-check briefs or the court’s opinions.

Every proposed solution for factual errors also causes problems, said Gerald Rosenberg, a University of Chicago professor of law and political science. Rosenberg is the author of “The Hollow Hope,” a hotly debated book arguing the courts are ineffective at propelling societal change.

A court research service suggests the justices are writing laws, he said, not rulings based on the law. At the same time, Rosenberg said ProPublica’s reporting indicates that the justices remain vulnerable to mistakes, both large and small.

“What do we do with the fact that they’re either consciously playing fast and loose,” Rosenberg said, “or they’re just not aware?”

Here are summaries of six of the recent Supreme Court opinions in which ProPublica found errors. The seventh will be the subject of a subsequent article.

A Sampling Of Errors

NASA v. Nelson

In 2011, the justices unanimously held that independent contractors working for the federal government could be subjected to background checks that ask open-ended questions about their private lives, including drug use. Federal employees submitted to such checks. It was the norm in the private sector, too.

At least that’s what Alito asserted in writing the opinion for the unified court. “The questions challenged by respondents are part of a standard employment background check of the sort used by millions of private employers,” Alito wrote.

The court received eight amicus briefs in the case, seven of them from privacy, civil rights and labor advocates supporting the contract employees. The one brief backing the federal government came from a collection of private investigation and background check industry groups.

The industry groups argued this kind of scrutiny is routine, and essential to protecting the government and private companies from bad hires.

The filing included a section detailing who submitted the brief. One of the industry groups was the National Association of Professional Background Screeners, which said in the brief that its “clients are among the more than 88% of US companies that perform background checks on their employees.”

The filing doesn’t say where the percentage cited by the background screeners comes from, a deficiency first documented by Larsen, the law professor at the College of William & Mary, in her study of the high court’s use of amicus briefs.

In truth, research into employment screening is scant, and hard numbers nonexistent.

The Society for Human Resource Management has surveyed its members about backgrounding practices multiple times and its reports are the only publicly available information on the subject. The society published survey results in January 2010, shortly before the background check groups filed their brief. The society’s results show that 92 percent of those surveyed check applicants’ references and 74 percent said they perform criminal background checks, but the results don’t include the 88 percent figure.

The society’s survey also reflects practices of a small subsection of American businesses, gathering nearly all its responses from companies that employ 100 or more workers. Less than 2 percent of US companies are that size, according to Census Bureau data.

ProPublica asked the National Association of Professional Background Screeners to provide the basis of its “more than 88%” figure. Melissa Sorenson, the association’s executive director, initially said the statistic was from the human resources society’s survey. After ProPublica informed her the survey did not include that number, Sorenson gave two answers she said were based on her conversations with a lawyer who represented the association on the brief.

First, Sorenson said the association got an advance look at the human resources society’s survey results, “so we were running with preliminary data that wasn’t public.” Preliminary results were slightly different from the published report, she said.

But Michael Aitken, the human resources society’s vice president for government affairs, said he couldn’t find any record his organization had provided anyone with preliminary survey results. And neither the preliminary nor the final results included the 88 percent figure, Aitken said.

Then Sorenson said the background screener association calculated the number by combining data from two separate questions in the human resources society’s survey. One question asked about background checks on job applicants, the other about checks on existing employees, she said.

The survey only asked about checks performed on job applicants, the published results show. Asked again for an underlying source, the background screeners association responded that it had no answer.

“Unfortunately, we have not identified anything in our records to shed further light,” a spokesman said by email.

ProPublica found the background screeners association featured the 88 percent number in its lobbying materials nine months before the human resources society conducted the survey that included the background-check question.

Larsen, the William & Mary law professor, said she’d assumed the statistic had some basis in reality. “This is much worse than I expected,” she said.

Arizona v. US

Arizona’s state Legislature enacted a law in 2010 to enlist local police in immigration enforcement. The law made it a state crime to be in the US illegally and to seek work without legal documentation.

The law required Arizona’s sworn police officers to verify the citizenship status of all people they detained or arrested. And it allowed officers to arrest without warrants people they believed were undocumented immigrants. The federal government sued to block the law, arguing the state law infringed on its powers to manage the nation’s immigration system.

In 2012, the Supreme Court largely sided with federal officials, and struck down most of Arizona’s law. However, it left intact the provision requiring police to check the citizenship of people arrested or detained.

Kennedy wrote the majority opinion. In addition to his legal reasoning, Kennedy argued that the citizenship checks were defensible because of the threat undocumented immigrants presented to Arizona. Specifically, he wrote, “in the State’s most populous county, these aliens are reported to be responsible for a disproportionate share of serious crime.”

Kennedy sourced that statement to a report from the Center for Immigration Studies, a nonprofit that advocates for reduced immigration and strict enforcement. The justice described the report as “estimating that unauthorized aliens comprise 8.9 percent of the population and are responsible for 21.8 percent of the felonies in Maricopa County, which includes Phoenix.”

The center gathered those figures from other sources. The percentage of felony crimes committed by undocumented immigrants originated in a 2008 study published by former Maricopa County Attorney Andrew Thomas, the elected prosecutor. Thomas vehemently opposed illegal immigration, and made local enforcement a top priority. He did not respond to several calls and emails seeking comment.

The study, titled “Illegal Immigration,” determined undocumented immigrants made up 18.7 percent of the individuals sentenced for felony convictions in Maricopa County court in 2007, not 21.8 percent. The 18.7 percent figure, to be sure, is still disproportionately high compared to undocumented immigrants’ share of the population.

The higher number, it turns out, was an estimate, but it is unclear exactly how Thomas produced it. Data tables in the study show 18.7 percent. (Thomas’ study included only 2007 because his office began collecting data on defendants’ immigration status that year.)

Even the lower percentage overstates what portion of the county’s convictions for serious crimes were attributable to undocumented immigrants.

Underlying data published with the study breaks out convictions by the most serious offense involved. Thomas took an aggressive approach to three offenses: smuggling, impersonation and forgery. He prosecuted these cases involving undocumented immigrants as felonies, though they could have been reduced to misdemeanors.

In the cases of smuggling, Thomas went further and prosecuted undocumented immigrants being smuggled into the US as “co-conspirators” in their own smuggling. This turned a group previously treated as crime victims, or at worst as human cargo, into smugglers. No other county attorney in Arizona used the statute that way. Criminal impersonation is the use of another person’s identification information, such as a Social Security number on a job application. Arizona’s forgery statute makes it a felony to possess an identification card with false information.

In 2007, more than 1,500 undocumented immigrants in Maricopa County were convicted of the immigration offenses that Thomas targeted, including 339 for smuggling.

Excluding convictions for smuggling, impersonation and forgery, undocumented immigrants made up 13.8 percent of defendants convicted of felonies that year.

Thomas commissioned the study to dispute local media reports that said undocumented immigrants were not an outsized share of criminal defendants. “This landmark research belies the claim that illegal immigration and crime are not related,” Thomas said in the press release announcing his results. “To the contrary, our border crisis is directly fueling Arizona’s crime rates.”

Five weeks before Kennedy published Thomas’ statistic in a Supreme Court opinion, Arizona disbarred Thomas. The state bar’s probable cause report details an array of misconduct during his tenure as county attorney, much of it involving “dishonesty, fraud and deceit.”

US v. Windsor

US v. Windsor was one of several landmark Supreme Court decisions recognizing constitutional protections for same-sex marriages. Kennedy wrote the 2013 majority opinion ruling that the Defense of Marriage Act, which limited marriage to unions between one man and one woman, violated same-sex couples’ right to equal protection under the law. As a result, same-sex couples cannot be deprived of federal benefits.

Part of Kennedy’s argument was that the federal government had long treated all marriages authorized by the states as legitimate. This has been the practice even though marriage laws differ from state to state. For example, he wrote, “most States permit first cousins to marry, but a handful—such as Iowa and Washington … prohibit the practice.” Kennedy listed only the two states’ marriage statutes as sources.

The primary elements of his statement are false. Half the states prohibit marriages between first cousins, Iowa and Washington among them.

Five states (Arizona, Illinois, Indiana, Utah and Wisconsin) severely limit marriages between first cousins, requiring the couple to be infertile or both individuals to meet minimum age requirements that range from 50 to 65 years old. Maine requires that first cousins obtain a physician’s certificate of genetic counseling about health risks to children before receiving a marriage license.

A minority of states, 19, permit the practice without limits. At the time Kennedy wrote the opinion, the National Conference of State Legislatures had a webpage tracking marriage between first cousins by state, which any internet search engine could have found.

The error was not significant to the ruling, as Kennedy focused on how the Defense of Marriage Act discriminated against same-sex marriages, which was the law’s intent.

Florida v. Harris

The Fourth Amendment protects people from unreasonable searches by the government, but it’s always been ticklish to define “unreasonable.” In 2013, the Supreme Court considered the use of dogs to detect drugs and justify searches.

A Seminole County, Florida, sheriff’s deputy had pulled over Clayton Harris’ pickup truck one day in June 2006 for an expired registration tag. The deputy saw an open beer can in the cup holder and noted Harris was shaking and breathing rapidly. The deputy asked for consent to search the truck; Harris said no. So the deputy brought out his drug-sniffing dog, Aldo, to smell the outside of Harris’ truck. Aldo signaled that drugs might be somewhere in or near the driver-side door.

The deputy found ample ingredients to make methamphetamine inside the truck and arrested Harris for possession of illegal amounts of pseudoephedrine. But there were no drugs that Aldo was trained to detect. Harris’ defense lawyer asked a state court in Florida to toss out physical evidence from the truck search, because it was warrantless and without probable cause. The dog’s “alert” was unreliable, the lawyer argued. The state court disagreed.

Several years later, the Supreme Court took up the appeal of the Florida case. It upheld the state court’s finding unanimously.

Kagan, in her opinion, discussed how to assess whether alerts from dogs to their police handlers should be trusted. Her concern about mistakes was evident. She expressed skepticism of records detailing how dogs perform while on duty in the field.

“Errors may abound in such records,” she wrote.

Performance records from the field can’t track times a dog failed to smell drugs present in a car, she noted. And instances when a dog alerted handlers to the smell of drugs but the search found none might wrongly suggest the dog was at fault. What if, Kagan wondered, hours earlier there had been a bag of marijuana or heroin in that spot? The dog might have performed perfectly and yet no arrest resulted.

“The better measure of a dog’s reliability thus comes away from the field, in controlled testing environments,” Kagan wrote.

In such a controlled setting, tests could be done to see if a dog was vulnerable to false positives, she wrote. In a footnote, she attributed this confidence to the Scientific Working Group on Dog and Orthogonal Detector Guidelines. The working group is one of 19 assembled by federal agencies to establish best practices and support research to improve specific areas of forensic science.

Quoting from a set of guidelines published by the group in 2010, Kagan wrote that “a dog’s reliability should be assessed based on ‘the results of certification and proficiency assessments.’” That way, she concluded, “you should know whether you have a false positive.”

Controlled tests, to be sure, provide advantages. But the nation’s largest police dog certification organizations do not measure the risk of false positives in their tests. Some altogether exclude inaccurate alerts from their evaluations, instead scoring dog teams only on how well they find drugs hidden in cars and rooms.

Earlier this year, Overall, the animal behaviorist who resigned from the working group, published a report detailing how to assess whether dogs can reliably detect a specific scent. “The minimally acceptable test design is 40 boxes (20 empty and 20 with target),” she wrote. In the context of drug detection, dog teams need to interact with 20 items or areas that contain drugs and 20 that contain none, which are randomly spread throughout a test. That is the minimum to determine a dog’s potential error rate.

Police dog certifiers generally include a small number of “blank searches” in their tests, instances where a dog might indicate the presence of drugs where none are present.

The National Police Canine Association, for instance, has dog teams search seven areas, four with drug “finds” and three without. However, it does not count false positives in its scoring. “K-9 Team must locate at least three (3) out of the four (4) finds to certify,” the association standards state.

False positives are barely included in the United States Police Canine Association’s evaluation, and by themselves can’t keep a dog team from being certified, scoring forms show. Inaccurate alerts are considered along with the dog’s perceived attentiveness and whether the dog peed during the test. Only false negatives—missing a hidden drug sample—can cause a team to fail.

The National Narcotic Detector Dog Association hides four drug samples in four rooms, its standards state, without any blank search areas. A dog team fails certification if it twice alerts handlers to drugs in the wrong spots before it finishes locating the four real samples.

Tests like these prove nothing, Overall, the behavioral scientist, said, and their results are just “random chance.” Representatives from law enforcement, she said, successfully opposed adding tougher testing requirements to the guidelines.

To many working group members, Overall said, “having a conviction reversed or having to use a more stringent evidence collection standard was a problem.” ProPublica provided Overall’s comments to the working group’s chairman, but he did not give a response.

Maryland v. King

Kennedy wrote the 2013 majority opinion in Maryland v. King, which ruled it was constitutional for police to take biological samples from arrestees by swabbing the inside of their cheeks.

Molecular biology was central to the case, and justices couldn’t avoid dealing in scientific facts. Kennedy wrote a brief description of DNA analysis for human identification, just seven sentences, and referenced his source, “Fundamentals of Forensic DNA Typing,” a textbook by John Butler, a top official at the National Institute of Standards and Technology.

But Kennedy made multiple mistakes in those few sentences, inaccurately defining scientific terms and asserting that DNA analysis is so accurate that it can literally match a single person with no chance of error.

In fact, while DNA profile matches can identify people with great precision, they cannot do so with absolute certainty. The limits of DNA in that sense were described amply in the textbook Kennedy referenced.

Butler, the textbook’s author, told ProPublica that Kennedy’s sentence overstated the reliability of DNA analysis.

“To be able to say, ‘It is that individual,’ you’d have to sequence the entire genome,” Butler said. “And even then, you could have an identical twin.”

The error was not critical to Kennedy’s argument, which emphasized that cheek swabs are not invasive, and that the DNA material taken primarily contains identifying code rather than genetic traits.

Shelby County v. Holder

Congress passed the Voting Rights Act in 1965 to stop several southern states from denying African Americans their constitutional right to vote. The law initially required six states—Alabama, Georgia, Louisiana, Mississippi, South Carolina and Virginia—to get federal approval for their election laws and any contemplated changes. The original act expired after five years, but Congress renewed the landmark civil rights law repeatedly and, in 2006, extended it another 25 years.

In a 2013 case called Shelby County v. Holder, the Supreme Court, in a 5-4 decision, determined that it was no longer necessary to keep the six states under federal oversight. America had changed, the court concluded. Chief Justice John Roberts, writing for the majority, called the “extraordinary and unprecedented” requirements of the Voting Rights Act outdated and unfair.

To illustrate his point, Roberts constructed a chart and published it in the body of the opinion. It compared voter registration rates for whites and blacks in 1965 and 2004 in the six southern states subject to special oversight. Roberts assembled his chart from data in congressional reports produced when lawmakers last renewed the act. The data showed clearly that registration gaps between blacks and whites had shrunk dramatically.

But some of the numbers Roberts included in his chart were wrong.

The chart suggested that rates of registration for blacks in 2004 had matched or even outstripped those for whites. But Roberts used numbers that counted Hispanics as white, including many Hispanics who weren’t US citizens and could not register to vote, which artificially lowered the white registration rate.

There is no question great strides had been made in black voter registration in Georgia, which reached 64.2 percent in 2004. However, white registration was 68 percent, not 63.5 percent, as Roberts’ chart claimed. The rate of registration for whites exceeded that of blacks by nearly 4 percentage points, rather than trailing it.

Similarly, the chief justice’s chart asserted that in Virginia, the rate of registration for whites was just 10 percentage points higher than the rate for blacks, a narrowing that would have reflected enormous progress. But the actual gap, removing erroneously counted Hispanics, was 14.2 percentage points.

The argument Roberts was making—that the progress in southern states had been so substantial that there was no longer a need for the US Department of Justice’s exacting oversight—might have remained persuasive. But the data he used as evidence was not true.

How did Roberts arrive at his numbers?

Roberts had relied on a 2006 report from the Senate Judiciary Committee. The committee’s staffers went to the right source: the US Census Bureau’s post-election survey from 2004. The survey provides estimates of voter registration and turnout by state, gender, race and ethnicity, and citizenship.

But the staffers went to the wrong set of numbers for white voters. They pulled voter registration rates for “white alone” to represent white voters, perhaps unaware of how the Census Bureau handles race and ethnicity.

“White alone” means all people identified as being part of the white racial group. The census considers ethnicity separately from race. If a person identifies as Hispanic, they are also counted as part of at least one racial group (e.g., white, black, Asian, Native American, other).

Most Hispanics are counted as “white alone” under race, which is why the Census Bureau provides separate numbers for the category “white non-Hispanic alone,” usually right next to the “white alone” figures.
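The effect of conflating the two categories can be sketched with a few lines of code. The population below is entirely hypothetical and deliberately tiny; it is not Census Bureau data, and serves only to show how including Hispanic respondents (some of them noncitizens who cannot register) in the “white” denominator pulls the computed registration rate down.

```python
# Each record: (race, is_hispanic, is_citizen, is_registered)
# Hypothetical, illustrative records only -- not Census Bureau figures.
people = [
    ("white", False, True, True),   # non-Hispanic white, registered
    ("white", False, True, True),   # non-Hispanic white, registered
    ("white", False, True, False),  # non-Hispanic white, not registered
    ("white", True,  True, False),  # Hispanic, counted as "white alone"
    ("white", True,  False, False), # Hispanic noncitizen: cannot register
]

def registration_rate(rows):
    """Share of the group that is registered to vote."""
    return sum(r[3] for r in rows) / len(rows)

white_alone = [p for p in people if p[0] == "white"]
white_non_hispanic = [p for p in people if p[0] == "white" and not p[1]]

print(round(registration_rate(white_alone), 2))         # 0.4
print(round(registration_rate(white_non_hispanic), 2))  # 0.67
```

In this toy example, the “white alone” rate (40 percent) sits well below the “white non-Hispanic alone” rate (67 percent), which is the direction of the distortion in Roberts’ chart: the wrong denominator made the white registration rate look lower, and the black-white gap look smaller, than it actually was.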

To those familiar with Census Bureau data, the difference is well understood. Researchers analyzing disparities frequently treat Hispanic origin as its own category alongside the racial groups, and the Census Bureau itself does so in reports using its election survey data.

Roberts’ chart, however, did not use generally accepted definitions of race.



