Facing mounting bipartisan pressure, the Internal Revenue Service announced yesterday that it’s walking back plans to deploy facial recognition software to identify taxpayers.
In November, the IRS signed an $86 million contract with identity verification startup ID.me, announcing that it would require taxpayers to provide personal identifying materials, including selfies, to access their tax records. Privacy and civil rights advocates responded immediately, forming a coalition of close to 20 groups—from the National Lawyers Guild to the Council on American-Islamic Relations—that criticized the “destructive results of facial recognition technology…from police using it to track Black Lives Matter protesters, to wrongful arrests, to manipulative marketing.” The IRS plan, those groups said, “would have expanded the scope of these harms and impact the lives of millions more people.”
Opponents have found support on both sides of the aisle. Fifteen Republican and five Democratic senators have demanded an accounting from IRS Commissioner Charles Rettig, and members of the Congressional Progressive Caucus followed suit with a letter of their own. Democratic Sen. Ron Wyden, of Oregon, raised facial recognition’s history of bias in a separate letter, calling it “simply unacceptable to force Americans to submit to scans using facial recognition technology as a condition of interacting with the government.”
The 15 Republican senators opposing the plan called ID.me’s verification process “intrusive,” arguing that, as a government clearinghouse of “personal information on a reported 70 million individuals, including biometric data,” the company “could be a top target for cyber-criminals, rogue employees, and espionage.”
ID.me attempts to verify users of digital services by collecting a variety of personal documents, from government-issued IDs, passports, and birth certificates to “video selfies” and interviews with ID.me employees. In addition, the company would have compelled taxpayers to sign three separate, binding contracts, including a “Biometric Data Consent and Policy” that would allow the company access to users’ “fingerprints, voiceprints, scans of a hand, facial geometry recognition and iris or retina recognition.”
Companies using biometric data and facial recognition technology to verify identities share a history of discriminating against—and sometimes excluding—Black people, other people of color, trans and gender-non-conforming individuals, and women generally. Black computer scientists Joy Buolamwini and Timnit Gebru found that commercial facial analysis algorithms misclassified darker-skinned women up to 35 percent of the time, despite a near-perfect match rate for white men. The same technology, deployed by police departments, has led to wrongful arrests of Black people.
“We understand the concerns that have been raised,” Rettig, the IRS head, said in a public statement. “Everyone should feel comfortable with how their personal information is secured, and we are quickly pursuing short-term options that do not involve facial recognition.”
While the IRS has halted plans to use ID.me’s facial recognition technology, the company still has contracts in effect with nine other federal agencies, including the Social Security Administration and Department of Veterans Affairs, and works with 30 state unemployment offices. Digital rights group Fight for the Future, which led the anti–ID.me coalition, now plans to focus on opposing those contracts as well.