
Facing the algorithms

As police turn to facial recognition technology to identify suspects and solve cases, critics worry about privacy and false accusations


Robert Julian-Borchak Williams Illustration by Krieg Barrie


Robert Julian-Borchak Williams was at his job at an automotive supply company on Jan. 9, 2020, when Detroit police called and told him to report to Detroit’s 3rd Precinct: The caller said police intended to arrest him—at work if necessary—but wouldn’t say why. Taking it as a prank call, Williams replied, “Well, I’m leaving work in 15 minutes, so unless you can get here in 15 minutes you can come to my home.”

As Williams pulled into his driveway in Farmington Hills, Mich., an hour later, a police cruiser immediately parked behind him. Officers jumped out and handcuffed him as his wife and two children watched. Authorities charged him with grand larceny for a $3,800 theft committed six months prior at a high-end watch store.

Police held Williams in jail overnight. The next day, during questioning, police showed him two grainy photos from the store’s surveillance cameras as evidence of his guilt. But the photos showed another man, not Williams. Williams held one of the photos next to his face and said, “No, that’s not me. I hope y’all don’t think all black people look alike.”

At that point, detectives revealed that a computer program had matched the store’s surveillance images with Williams’ 10-year-old driver’s license photo. It seemed the computer had made a mistake. Authorities held Williams for 30 hours and released him on $1,000 bond. At a court hearing two weeks later, the prosecutor dropped the case.

Police have long used a combination of witness testimony and shoe-leather detective work to track down criminals and solve crimes—scouting for fingerprints and DNA evidence, placing suspects in lineups, staking out homes and businesses, and reviewing hours of surveillance footage. In the 21st century, new technology has entered the mix: State and federal law enforcement agencies, including the FBI, have begun using facial recognition technology (FRT) to identify suspects and victims. Proponents say it is an essential and powerful crime-fighting tool. Most recently, police used facial recognition to identify individual participants in the Jan. 6 riot at the U.S. Capitol.

But privacy experts and defense attorneys cite problems with the technology. National Institute of Standards and Technology (NIST) testing of FRT software revealed inaccuracies in identifying women and minorities. Police department policies governing its use are either nonexistent or ill-defined. And, as Williams’ case demonstrates, a false identity match can result in a wrongful arrest.

In a similar case in November, New Jersey resident Nijeer Parks sued officials and police in the city of Woodbridge, N.J., claiming police wrongly arrested and imprisoned him in January 2019 based on a mistaken facial recognition match.

Within the last two years, the state of Washington and the cities of Boston; San Francisco; Minneapolis; Madison, Wis.; Portland, Maine; and Portland, Ore., have banned governmental use of facial recognition technology or restricted its use to specific situations, such as identifying and locating “victims of human trafficking, child sexual exploitation or missing children.” In restricting the technology, officials cited concerns about accuracy and citizen privacy.

Still, government and commercial entities are implementing FRT at an increasing pace, and no national law or policy regulates its use. The technology’s growth raises an important question: Do its crime-fighting benefits outweigh concerns about privacy and accuracy?

A video surveillance camera hangs from the side of a building in San Francisco, Calif. Justin Sullivan/Getty Images

TO LEARN MORE, I spoke with Ryan Gable, a police constable in Montgomery County, Texas. We met in Gable’s Precinct 3 office, where dark wood shelves behind his desk hold personal mementos, including Texan football souvenirs, family photos, and a humorous quote: “I love you more than bacon.”

First elected to head Precinct 3 in 2012, Gable has served in law enforcement since 1993. He began his career investigating narcotics as an undercover officer in neighboring Harris County (home of Houston). With a staff of 80, including 69 officers, Precinct 3 serves a growing population of more than 250,000 residents.

In 2019, Gable’s precinct purchased a facial recognition software app from Clearview AI, a private tech startup that provides a facial recognition search system for police. He’d heard about the software while attending a meeting of private investors evaluating various technologies for use by law enforcement. After a free two- to three-month trial of the app, the precinct bought a discounted license.

Government and commercial entities are implementing FRT at an increasing pace, and no national law or policy regulates its use.

Here’s how the software works: A police official uploads a person’s image into the app, which returns a series of possible identity matches. The software compares the image against other images in Clearview’s database—3 billion total, according to the company. Clearview assembled its massive database by scraping images posted online, including those in public social media accounts on Instagram, LinkedIn, and Facebook. It claims 2,400 law enforcement agencies worldwide use its system.
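
In rough outline, a search like that can be pictured with a short sketch: each photo in the database has already been reduced to a numerical “embedding” by an encoder model, and the uploaded probe is compared against all of them. This is a minimal illustration in Python, not Clearview’s actual code; the URLs, numbers, and tiny gallery are invented for the example.

```python
import numpy as np

# Toy "gallery": face embeddings already computed by some encoder model,
# keyed by the web page each photo was scraped from. All data is invented.
gallery = {
    "https://example.com/profile/a": np.array([0.10, 0.90, 0.30]),
    "https://example.com/profile/b": np.array([0.80, 0.20, 0.50]),
    "https://example.com/profile/c": np.array([0.20, 0.85, 0.35]),
}

def top_matches(probe: np.ndarray, gallery: dict, k: int = 2) -> list:
    """Rank gallery faces by cosine similarity to the probe embedding."""
    p = probe / np.linalg.norm(probe)
    scored = [(float(p @ (v / np.linalg.norm(v))), url) for url, v in gallery.items()]
    scored.sort(reverse=True)
    return scored[:k]   # candidate leads for an investigator, not proof of identity

probe = np.array([0.15, 0.88, 0.32])   # embedding of the uploaded image
print(top_matches(probe, gallery))
```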

According to Gable, the app has proven valuable in multiple cases. He describes one fraud investigation in particular: Using counterfeit driver’s licenses, someone stole the identities of 17 individuals and withdrew money from bank accounts throughout Texas.

One bank’s surveillance video provided a particularly clear image, which the Precinct 3 criminal investigative division ran through the Clearview app. The software turned up mugshots, found via Google, of a woman who, after further police investigation, became the suspect in the case. Police haven’t yet found her, but an open warrant for her arrest is visible to any officer who might pull her over for a traffic violation.

Facial recognition doesn’t necessarily provide police with an automatic suspect. But Gable says the Clearview app has provided his investigators with leads in situations where none existed, including cases of robbery, shoplifting, online solicitation of a minor, and home invasions in Houston. The technology can also help track down a known victim or suspect.

Losing access to facial recognition would be a significant setback for Gable’s investigators. “Particularly for technology crimes,” Gable says, “with video and pictures of young people being abused, the biggest impact is not being able to identify victims in horrible situations.”

Although his department also uses other means of investigation—including social media, internet searches, and in-person inquiries by boots-on-the-ground officers—Gable claims the app gives police a distinct ability to put names to anonymous people in videos.

Others agree. Former New York City Police Commissioner James O’Neill called facial recognition technology a “uniquely powerful tool in our most challenging investigations.” Sheriff Bob Gualtieri of Pinellas County, Fla., told NBC News, “The technology has changed policing almost entirely for the better.”

Attendees interact with a facial recognition demonstration during the Consumer Electronics Show in Las Vegas. Joe Buglewicz/The New York Times/Redux

MANY PRIVACY advocates are alarmed by the growing reach of facial recognition.

Early last year, The New York Times ran a series of articles about Clearview AI, a small company that until then had remained largely unknown. Founder Hoan Ton-That marketed the software by offering trial versions in “cold call” emails to law enforcement personnel.

Reaction to the Times coverage was swift. In New Jersey, Attorney General Gurbir S. Grewal subsequently instructed all of the state’s prosecutors to ban police use of the Clearview AI app.

Twitter, Google, YouTube, Venmo, and LinkedIn sent cease-and-desist letters to Clearview, attempting to stop the app from mining their platforms for pictures. Responding to criticisms of privacy infringement, Ton-That argued Clearview does nothing more than run a “Google-type” search for images already posted online.

In Illinois, the American Civil Liberties Union sued Clearview under the state’s Biometric Information Privacy Act, challenging the company’s collecting of online images without explicit consent. Additional lawsuits in California, Virginia, and New York alleged similar privacy law violations.

Amid last summer’s protests against police brutality, IBM, Microsoft, and Amazon.com halted or suspended sales of their respective FRT applications to police agencies.

In 2016, Georgetown Law’s Center on Privacy and Technology published “The Perpetual Line-Up,” the first of three reports based on a yearlong investigation. Georgetown sent over 100 records requests to law enforcement agencies, surveying their use of “face recognition and the risks it poses to privacy, civil liberties and civil rights.” The study concluded that the faces of 1 in 2 U.S. adults (117 million people) could be found in government databases, compiled from driver’s license photos and booking mugshots.

If the training set is not diverse enough or large enough, it will be biased.

Clare Garvie, a lead author of the Georgetown study, said police have a cognitive bias toward assuming “the machine got it right”—as the arrest of Robert Williams demonstrated.

The machines don’t always get it right.

Faces, like fingerprints, include biometric features unique to each person. Facial recognition software uses specialized algorithms to map features such as the shape of the chin, size of the forehead, or the distance between the eyes. In this process, the image is converted into a numerical code. The computer can then compare that code with other encoded images, searching for a match.
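
A hedged way to picture that “numerical code”: treat each face as a short list of measurements and call two faces a possible match when the distance between their lists falls under a threshold. Real systems learn hundreds of abstract measurements rather than a few named ones, so the feature names, values, and cutoff below are stand-ins, not any vendor’s method.

```python
import math

# Illustrative stand-ins: a real system learns its own abstract measurements
# rather than a handful of named ones, but the idea of a numerical code is the same.
face_a = {"chin_width": 0.42, "forehead_height": 0.31, "eye_distance": 0.27}
face_b = {"chin_width": 0.44, "forehead_height": 0.30, "eye_distance": 0.28}

def distance(f1: dict, f2: dict) -> float:
    """Euclidean distance between two face codes."""
    return math.sqrt(sum((f1[k] - f2[k]) ** 2 for k in f1))

THRESHOLD = 0.05   # an assumed cutoff; set it too loosely and false matches follow
d = distance(face_a, face_b)
print(f"distance = {d:.3f}:", "possible match" if d < THRESHOLD else "no match")
```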

However, researchers like Garvie question the accuracy of the technology, especially in the context of police investigations: For example, a low-quality or altered “probe” image—the image submitted for matching against the database—can result in misidentification. DataWorks, the FRT software used by the New York Police Department, allows the user to edit an imperfect probe image before running the match. This editing, known as “face normalization,” includes actions such as rotating the image from a side view to a front view, inserting features such as eyes, or mirroring a partial face to fill in the missing half. Garvie compares these actions to adding your own DNA to a DNA sample.
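
For illustration only, here is what the mirroring step might look like in a few lines of Python using the Pillow imaging library—not DataWorks’ actual tooling, and the file names are hypothetical. The critics’ point is visible in the code: half of the edited probe is synthetic.

```python
from PIL import Image, ImageOps

# Sketch of "mirroring" a partial probe image: the visible left half of a face
# is flipped and pasted over the missing right half.
probe = Image.open("partial_face.jpg")               # hypothetical low-quality probe
w, h = probe.size
left_half = probe.crop((0, 0, w // 2, h))            # the half the camera captured
fabricated_right = ImageOps.mirror(left_half)        # synthetic, not evidence
edited = probe.copy()
edited.paste(fabricated_right, (w // 2, 0))
edited.save("normalized_probe.jpg")                  # the image actually searched
```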

Another problem is the “inherent bias” in FRT software. Arun Ross, a Michigan State University computer science professor, said that bias stems from the training sets—large databases of facial images—used to develop the software. Instead of telling the computer which features it should use, programmers ask the computer to decide: The computer scans hundreds, thousands, or millions of faces in an image database and, from those examples, builds the matching algorithm itself.

But if that training set is not diverse enough or large enough, it will be biased. If the training set involves subjects mainly from a certain demographic group (Caucasian Americans, for example), then the resulting software won’t perform well with other demographic groups (African Americans, Asians), resulting in less accurate matches. Reliable FRT software needs a training set sufficiently diverse in terms of gender, race, age, lighting of images, and facial expressions.
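
One way researchers surface that kind of bias is to break a system’s error rate out by demographic group instead of reporting a single average. The records, group labels, and outcomes in the sketch below are invented; only the bookkeeping is real.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, predicted identity, true identity).
results = [
    ("group_a", "person_1", "person_1"),
    ("group_a", "person_2", "person_2"),
    ("group_b", "person_3", "person_9"),   # a false match
    ("group_b", "person_4", "person_4"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, predicted, actual in results:
    total[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
# A single overall accuracy figure would hide the gap between the two groups.
```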

How many faces would constitute a sufficient training set? “That is the million-dollar question,” said Ross.

Joy Buolamwini, an MIT researcher, has shown that widely used FRT systems are much more likely to make errors detecting dark female faces than light male faces. A 2019 conference paper Buolamwini wrote with co-researcher Deborah Raji asserted that Amazon’s facial recognition software, Rekognition, mislabeled darker-skinned women as men 31 percent of the time.

Amid these concerns about software accuracy is the overarching problem that police can use the systems without forensics training. Jerome Greco, a public defender in the Digital Forensics Unit of the Legal Aid Society in New York City, notes that although developers offer courses in biometric forensic technology, there’s no standardized training requirement for law enforcement personnel running the FRT matches.

MIT researcher Joy Buolamwini Steven Senne/AP

IN SEPTEMBER, THE DETROIT CITY Council approved by a 6-3 vote the continued use of facial recognition technology by its police department. Councilman James Tate, who is black and voted with the majority, received criticism from his black constituents. While acknowledging the regrettable case of Robert Williams, Tate told The New York Times, “What I don’t want to do is hamper any effort to get justice for people who have lost loved ones.” Tate believes “with oversight, law enforcement would be better off using facial recognition software than not.”

In Massachusetts in December, lawmakers passed a police reform bill curtailing the use of FRT except with a warrant or in cases of potentially imminent death or injury. Gov. Charlie Baker, citing examples where FRT helped to convict a child sex offender and a double murderer, only agreed to sign the bill after the state Senate relaxed restrictions that were in an earlier draft. He had told The Boston Globe he was “not going to sign something that is going to ban facial recognition.”

Ross believes legislation alone cannot address the tensions: It will also require advancements in privacy-enhancing technology embedded in FRT systems.

The trade-off between liberty and safety remains a struggle in every society. Facial recognition technology, like the internet, is a neutral tool—not inherently good or evil. But how police use it will determine whether it becomes a boon or bane to society.

Michael King, a professor of computer science and cybersecurity at Florida Institute of Technology, participated in a recorded panel discussion last August on IEEE.tv titled “Facing the Truth: Benefits and Challenges of Facial Recognition.” King described the conflict society must grapple with:

“If, for some reason, I happen to be wrongfully arrested as a result of some false match that occurred with face recognition, I’ll be probably the first one marching to some city hall somewhere saying ‘Ban the technology completely,’” he said. “But if the following week somebody hits me across the head as I’m pumping gas at some convenience store and the only thing you have is an image, I’ll be right back down to city hall saying ‘Give me that technology, I need it to try and identify this person.’”

To ban or not to ban? That is the question—with no simple answer.

—Maryrose Delahunty is a graduate of the World Journalism Institute mid-career course


Virtually unmasked

Your COVID-19 face mask may cover your face, but it won’t necessarily hide your identity.

Managers of commercial businesses, residential buildings, sporting venues, and other groups that use facial recognition technology for security purposes found they needed an upgrade during the pandemic: Could software identify the partially concealed faces of mask wearers?

Some firms say yes. Researchers at Switzerland-based Tech5 and California-based Trueface are exploring ways to identify a face using only the features above the nose.

Software providers are making progress. In the United States, the National Institute of Standards and Technology published a study in November that found significant improvements in facial recognition technology used on masked faces.

The Department of Homeland Security (DHS) also noted improved performance at its annual biometric testing event. Median test results showed the technology could identify masked faces with a 77 percent success rate. The best-performing software identified masked faces with 96 percent accuracy. Officials at DHS hope the improving technology will allow air travelers to keep their masks on, instead of removing them at airport security checkpoints.

Japanese companies may be doing even better. Reuters reported Japan’s NEC Corp. began selling a facial identity verification system in October that operates on masked faces with an advertised accuracy rate of 99.9 percent. —M.D.


Maryrose Delahunty

Maryrose is a WORLD correspondent, a graduate of World Journalism Institute, and a practicing attorney.
