"We and others in the field have predicted for a long time that there would be misidentifications. We predicted there would be abuse. We predicted there would be state surveillance, not just after-the-fact forensic face identification," says Alvaro Bedoya, the founding director of Georgetown Law’s Center for Privacy & Technology. "And all those things are coming true. Anyone who says this technology is nascent has not done their homework."

At Wednesday’s House hearing, witnesses similarly emphasized that facial recognition technology isn’t just a static database, but is increasingly used in sweeping, real-time, nonspecific dragnets—a use of the technology sometimes called "face surveillance." And given the major shortcomings of facial recognition, especially in accurately identifying people of color, women, and gender nonconforming people, the witnesses argued that the technology should not currently be eligible for use by law enforcement. Joy Buolamwini, a Massachusetts Institute of Technology researcher and founder of the Algorithmic Justice League, says she calls the data sets used to train most facial recognition systems "pale male" sets, because the majority of the photos used are of white men.

"Just this week a man sued Uber after having his driver’s account deactivated due to [alleged] facial recognition failures," Buolamwini told the Committee on Oversight and Reform on Wednesday. "Tenants in Brooklyn are protesting the installation of an unnecessary face-recognition entry system. New research is showing bias in the use of facial analysis technology for health care purposes, and facial recognition is being sold to schools. Our faces may well be the final frontier of privacy."

Representatives across the political spectrum said on Wednesday that the committee is ready to develop bipartisan legislation limiting, and establishing oversight of, facial recognition’s use by law enforcement and other US entities. But tangible results at the federal level have been scarce for years, and advocacy in the private sphere has faced major hurdles as well. On Wednesday, for example, Amazon shareholders rejected two proposals aimed at reining in use of the company’s controversial Rekognition facial identification software and allowing for research into privacy and civil rights safeguards.

Still, with facial recognition’s ubiquity becoming increasingly apparent, privacy advocates see 2019 as a potential turning point.

"I think it’s too late to stop the proliferation of facial recognition tech. Both government and corporate actors are using it in new ways every day," says Tiffany Li, a privacy attorney at Yale Law School’s Information Society Project. "Hopefully we reach a critical point where we start really working on those problems in earnest. Perhaps that moment is now."
