Everybody’s afraid of facial recognition tech.
Civil liberties activists warn that the powerful technology, which identifies people by matching a picture or video of a person’s face to databases of photos, can be used to passively spy on people without any reasonable suspicion or their consent. Many of these leaders don’t just want to regulate facial recognition tech — they want to ban or pause its use completely.
Republican and Democratic lawmakers, who so rarely agree on anything, have recently joined forces to attempt to limit law enforcement agencies’ ability to surveil Americans with this technology, citing concerns that the unchecked use of facial recognition could lead to the creation of an Orwellian surveillance state.
Several cities, such as San Francisco, Oakland, and Somerville, Massachusetts, have banned police use of the technology in the past year. A new federal bill introduced earlier this month would severely restrict its use by federal law enforcement, requiring a court order to track people for longer than three days. And some senators have discussed a far-reaching bill that would completely halt government use of the technology.
But the reality is that this technology already exists — it’s used to unlock people’s iPhones, scan flight passengers’ faces instead of their tickets, screen people attending Taylor Swift concerts, and monitor crowds at events like Brazil’s famous Carnival festival in Rio de Janeiro. Its prevalence has created a delicate situation: proponents of the tech, such as law enforcement and technology manufacturers, downplay facial recognition’s power while playing up its potential to crack open cold criminal cases or reunite missing children with their families.
Meanwhile, opponents warn of how quickly the powerful tech’s use could spiral out of control. As an example, they point to China, where the technology is regularly used to surveil and oppress an ethnic minority. The solution may be somewhere in between — there are cases in which this tech can do good, especially if it’s carefully regulated and the communities affected by it control how it’s used. But right now, that looks like an ideal scenario we’re still far from achieving.
“What we really need to do as a society is sort through what are the beneficial uses of this technology and what are the accompanying harms — and see if there are any roles for its use right now,” Barry Friedman, faculty director of NYU Law’s Policing Project, a research institute that studies policing practices, told Recode.
Rolling out government use of facial recognition the right way, tech policy leaders and civil liberties advocates say, will involve a sweeping set of regulations that democratize input on how these technologies are used. Here are some of the leading ways the US government is using facial recognition today, and where experts say there’s a need for more transparency and stronger regulation.
Everyday police use
The most famous examples of law enforcement’s use of facial recognition in the US are the extreme ones — such as when police in Maryland used it to identify the suspected shooter at the Capital Gazette newspaper offices.
But the reality is, as many as one in four police departments across the US can access facial recognition, according to the Center on Privacy and Technology at Georgetown Law. And at least for now, it’s often used in more routine criminal investigations.
“We haven’t solved a murder because of this — but there’s been lots of little things,” said Daniel DiPietro, a public information officer at the Washington County, Oregon, police department. Washington County was one of the first law enforcement agencies in the country to use Amazon’s facial recognition product, called Rekognition, in its regular operations, beginning in 2017.
DiPietro referenced a case where the police department used a screenshot from security video footage to search for someone who was accused of stealing from a local hardware store.
Last year, the county says it ran around 1,000 searches using the tool — which it says it only uses in cases where there is reasonable suspicion that someone has committed a crime. The department doesn’t measure how many of those searches led to a correct or incorrect match, according to DiPietro.
Here’s how it works in Washington County: If officers have a photo, often from security camera footage, of someone who has committed a crime, they can run it against the jail booking database and turn up potential matches in a matter of seconds. The department says this process used to take days, weeks, or longer, as police would manually search a database of 300,000 booking photos, pick the brains of hundreds of colleagues, or send out media notices to try to identify suspects.
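Under the hood, systems like this typically compare numerical “embeddings” of faces rather than raw pixels, ranking booking photos by how similar they are to the probe image. The sketch below is purely illustrative — the function names, threshold, and tiny vectors are invented, not Rekognition’s actual API — but it shows the basic shape of a search like the one the department describes:

```python
import numpy as np

def cosine_similarity(a, b):
    # Scores how alike two face embeddings are (1.0 = pointing the same way).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_booking_db(probe, database, threshold=0.9):
    """Return (booking_id, score) pairs that clear the threshold, best first.
    `probe` is the suspect photo's embedding; `database` maps hypothetical
    booking IDs to stored embeddings."""
    scored = [(bid, cosine_similarity(probe, emb)) for bid, emb in database.items()]
    hits = [(bid, s) for bid, s in scored if s >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

# Toy example with 3-dimensional "embeddings"; real systems use hundreds of dimensions.
db = {
    "booking_001": np.array([0.9, 0.1, 0.0]),
    "booking_002": np.array([0.1, 0.9, 0.2]),
}
probe = np.array([0.88, 0.12, 0.01])
print(search_booking_db(probe, db))  # only booking_001 clears the threshold
```

The speed the department cites comes from exactly this kind of comparison: scoring one probe against every stored embedding is fast arithmetic, where a human paging through 300,000 photos is not.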
DiPietro told Recode that officers only use the tool when there’s probable cause that someone has committed a crime, and that they match photos only against jail booking photos, not DMV databases. (This sets Washington County apart — several other police departments in the US do use DMV databases for facial recognition searches.) He also said the department doesn’t use Rekognition to police large crowds, something police in Orlando, Florida, tried — and failed to do effectively, after running into technical difficulties and sustained public criticism.
The Washington County police department adopted these rules voluntarily, in part, it says, because of conversations it had with members of the community. The rules are a step toward transparency for the department, but they exist in a broader landscape of piecemeal, self-imposed regulation. And as with most other police departments that use facial recognition, critics say there’s often little oversight to make sure officers are using the tool correctly. A report from Gizmodo last January suggested that Washington County police were using the tool differently than Amazon recommended and had lowered the confidence threshold for a match to below 99 percent.
In the absence of facial recognition regulation, it’s easy to see the potential for overreach. In a 2017 interview with the tech media company SiliconANGLE, Chris Adzima, a senior information systems analyst for the department, spoke about how video footage could enhance the tool’s capabilities — even though the department says it has no plans to use video in its surveillance for now.
Washington County is just one of hundreds of law enforcement agencies at the local, state, and federal level that use facial recognition. And because it uses Rekognition — a product made by Amazon, perhaps the biggest and most scrutinized tech giant — police there have been more public about its use than other agencies using similar but lesser-known tools.
Some law enforcement agencies are simply worried that sharing more information about their use of facial recognition will spark backlash, Daniel Castro, vice president of the DC-based tech policy think tank the Information Technology and Innovation Foundation (ITIF), told Recode.
“I’ve heard from at least one law enforcement agency saying, ‘We’re doing some of this work, but it’s so contentious that it’s difficult for us to be transparent, because the more transparent we are, the more questions are raised,’” Castro said.
Much of the fear about facial recognition technology stems from how little the public knows about how it’s used, or whether it’s been effective in reducing crime. In the absence of any kind of systematic federal regulation or permitting process, the little we do know comes from news stories, interviews, public records, and investigative reporting about its prevalence.
And even police departments that are forthright about how they use the technology, like Washington County’s, often don’t collect or share tangible metrics about its effectiveness.
“Too often we are relying on anecdotes without knowing how many times it isn’t successful — what’s missing from this debate is any kind of empirical rigor,” Friedman told Recode.
Friedman said that with better data, the public might have a clearer understanding of the true value of facial recognition technology, and whether it’s worth the risks.
The bias problem
Facial recognition systems have proven markedly less accurate for racial minorities and women. In a widely cited 2018 study, MIT Media Lab researcher Joy Buolamwini found that three leading facial recognition tools — from Microsoft, IBM, and the Chinese firm Megvii — were incorrect as much as a third of the time in identifying the gender of darker-skinned women, compared with an error rate of only about 1 percent for white men.
Amazon’s Rekognition tool in particular has been criticized for bias after the ACLU ran a test in which the software misidentified 28 members of Congress as criminals, disproportionately returning false matches for black and Latino lawmakers. Amazon said the ACLU didn’t use the correct settings because the organization set the acceptable confidence threshold to 80 percent — though it was later reported that this is the software’s default setting, and one that some police departments appear to use in their training materials.
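The dispute over settings comes down to a single parameter: the minimum confidence score a candidate must reach before the system reports it as a match. This hypothetical sketch (the names and scores are invented for illustration, not real Rekognition output) shows why dropping that cutoff from 99 percent to the 80 percent default surfaces many more — and much weaker — matches:

```python
# Hypothetical candidate matches with confidence scores, as a search tool
# might return them (names and scores invented for illustration).
candidates = [
    ("person_a", 0.995),  # very strong match
    ("person_b", 0.91),
    ("person_c", 0.86),
    ("person_d", 0.81),   # weak matches: likelier to be false positives
]

def filter_matches(candidates, confidence_threshold):
    # Keep only candidates at or above the minimum confidence score.
    return [name for name, score in candidates if score >= confidence_threshold]

print(filter_matches(candidates, 0.99))  # ['person_a']
print(filter_matches(candidates, 0.80))  # all four names, weak matches included
```

Nothing about the underlying model changes between the two calls; the threshold alone decides how many uncertain matches get presented to an officer as hits.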
Presumably, bias issues in facial recognition will improve over time, as the technology learns and data sets improve. Meanwhile, proponents argue that while facial recognition technology in its current state isn’t completely bias-free, neither are human beings.
“[People] want to compare what we’re doing with some perfect status quo, which doesn’t exist,” said Eddie Reyes, the director of public safety communications for 911 in Prince William County, Virginia, who spoke at a recent ITIF panel. “Human beings can be biased, human beings make mistakes, human beings get tired … facial recognition can do things much better.”
But that’s not necessarily true, critics argue: When human beings with innate, even unconscious, biases build algorithms and feed those algorithms data sets, they amplify their existing biases in the tech they build.
And facial recognition can be harder to hold accountable than a human being when it makes a mistake.
“If an individual officer is discriminating against a person, there’s a through line or a causal effect you can see there, and try to mitigate or address that harm,” said Rashida Richardson, director of policy research at the AI Now Institute. “But if it’s a machine learning system, then who’s responsible?”
The technology that determines a match in facial recognition is essentially a black box — the average person doesn’t know how it works, and often the untrained law enforcers using it don’t either. So unwinding the biases built into this tech is no easy task.
Just trust us
Another hurdle facial recognition tech will have to clear: Convincing communities they can trust their police departments to wield the powerful tool responsibly.
Part of the challenge is that in many cases, public trust in police officers is divided, especially along racial lines.
“It’s easy to say yes, ‘we should trust police departments,’” said Richardson, “but I don’t know of any other circumstances in government or private sector where ‘just trust us’ is a fair model. If an investor would say, ‘Just trust me with your money, trust me’ — no one would think that’s reasonable, but for some reason under law enforcement conditions it is.”
Some tech companies, such as Microsoft and IBM, have called for government regulation on the technology. Amazon said earlier this year that it’s writing its own set of rules for facial recognition that it hopes federal lawmakers will adopt. But that raises the question: Should people trust companies any more than police to self-regulate this tech?
Other groups, such as the ACLU, have created a model for local communities to exert oversight and control over police use of surveillance technology, including facial recognition. The Community Control Over Police Surveillance laws, which the ACLU developed as a template for local regulation, empower city councils to decide what surveillance technologies are used in their area and mandate community input. More than a dozen cities and local jurisdictions have passed such laws, and the ACLU says efforts are underway in several others.
Overall, there may be benefits to law enforcement’s use of facial recognition technology — but so far, Americans are relying on police department anecdotes with few data points and little accountability. As long as police departments continue to use facial recognition in this information vacuum, the backlash against the technology will likely grow stronger, no matter the potential upside.
Passing robust federal legislation to regulate the tech, working to eradicate its biases, and giving the public more insight into how it functions would be good first steps toward a future in which this technology inspires less fear and controversy.