Innovations in technology are constantly shaping the way societies operate, with profound implications for our quality and way of life. Of course, the development and use of technology is itself shaped by society’s norms, values and priorities. In democratic societies, we generally expect our public institutions to act in the interest of the citizenry by weighing the benefits of technology against the potential for error or abuse. This extends to the criminal justice system, which employs a range of technology to investigate, prosecute and prevent criminal behavior.

Today, new technologies are revolutionizing the criminal justice system, just as DNA analysis did when it first emerged over three decades ago. The gathering of DNA evidence is now routine procedure for police departments and has resulted in millions of criminal convictions, as well as some exonerations. Similarly, the use of advanced biometrics, such as digital fingerprints, palm prints, iris recognition, and facial recognition, has now become standard practice within many law enforcement agencies and departments. As of 2016, at least one-quarter of the law enforcement agencies across the United States had access to a facial recognition system, and that number is growing.

Facial recognition technology allows police to identify suspects linked to criminal activity, as well as murder victims or Alzheimer’s patients who cannot identify themselves. The technology is also widely used outside of law enforcement, for example, when checking in for a flight, unlocking a cellphone, or tagging photos on social media. Some private-sector employers, including Intel, use facial scanning to monitor and control access to their properties.

While these tools can provide critical protection to law enforcement and the public, their use also raises important questions around privacy, consent and racial bias.

“People have a tendency to believe technology is infallible, that it doesn’t make errors,” says Corey White, a senior vice president at Future Point of View. “Yet if an algorithm has been built on a biased, uneven, or incomplete data set, it can make mistakes that can be very harmful. Imagine if a facial recognition technology is unable to recognize someone or misidentifies a person. Imagine if that technology is being used by law enforcement. It is not hard to picture this leading to a confrontation between a police officer and a citizen. As we are all well aware, these confrontations can turn violent. In that way, a simple misunderstanding could escalate into catastrophe.”

In an effort to harness advances in biometric technologies, the FBI has created a biometric identification database program called Next Generation Identification, which stores the fingerprints, iris scans, DNA profiles, voice identification profiles, palm prints, and photographs of millions of Americans, many of whom have never been convicted of, or even suspected of, a crime.

The system employs facial recognition technology to analyze collected images. The Washington Post reported last year that the FBI’s facial-recognition searches have access to local, state and federal databases containing more than 641 million face photos. This means that at least half of all American adults are in a law enforcement facial recognition database, and their images can be analyzed without their knowledge.

The growing reach of facial recognition technology has led to concern among civil liberties and privacy rights advocates about potential violations of due process and the use of this technology for mass surveillance. Authoritarian countries, like China, are enthusiastic consumers of facial recognition technology, which they use in conjunction with vast networks of cameras to track people’s movements and activities.

In the U.S., activists have expressed fears that facial scanning could be used to identify and detain protestors as well as undocumented immigrants. U.S. Immigration and Customs Enforcement (ICE) officials are known to have mined driver’s license photographs from DMV databases in states that grant licenses to undocumented immigrants.

Public scrutiny of facial recognition technology has increasingly zeroed in on the risk of wrongful arrest and conviction based on its less-than-stellar track record in identifying people of different races. Research shows that the accuracy of the technology largely depends on the quality of the image and the color of a person’s skin. People with darker skin are more likely to be falsely identified, with black women experiencing “high one-to-one false match rates,” according to a report by the U.S. Government’s National Institute of Standards and Technology.

The same report found that in images used by domestic law enforcement, Native Americans produced the highest rate of false positives, followed by African Americans and Asian populations. Across all races, women are more likely to produce false positives than men.

Following nationwide protests against police brutality sparked by the killing of George Floyd, tech companies involved in the development and sale of facial recognition systems have come under pressure to limit their use. In June, Microsoft announced that it would halt the sale of the technology to police departments until Congress regulates its use. For its part, Amazon has instituted a one-year ban on the use of its tool by police departments, although it is unclear whether it will continue sales to federal law enforcement agencies like ICE.

IBM has gone even further, saying it will get out of the facial-recognition business altogether. In the absence of federal legislation, some cities, including San Francisco, have stepped up to bar law enforcement from using facial recognition systems within their jurisdictions.

Not everyone is in favor of keeping facial recognition systems out of the hands of police. Proponents of the technology argue that withholding it risks undermining public safety by limiting the detection and prevention of crime, as well as our ability to find missing children and adults. As the technology improves and corrects for racial and gender bias, false identifications should decrease across all demographics.

“Technology companies are recognizing the dangers of bias in facial recognition technologies, and this is heartening,” says White.

“The fact that these companies are acknowledging this problem and their role in it is great. However, a ban of facial recognition in law enforcement is a short term solution. Facial recognition technology will be used by law enforcement. It is too powerful a tool, the databases are already available, and the technology is rapidly developing. The genie is out of the bottle, and it won’t be put back in.”

Algorithms are, nevertheless, imperfect, and the potential for wrongful arrest and conviction on the basis of facial recognition technology remains a real threat to Americans’ constitutional rights. As the use of these systems continues to expand in scope, we must look to our public institutions, namely Congress, to legislate a balance between protecting public safety and respecting civil liberties as the technology develops.

White concludes, “In the end, our desire for security is in conflict with our right to privacy. As these technologies evolve, from the facial recognition cameras that surround us to the health data coming off our wearables to the location tracking in our smartphones, we will constantly be in a battle between public safety and personal privacy.”

Written By Shirin Wertime, Research Associate