Clearview AI Clients Leaked

People on the street with blurred faces; photo source: Marsh Viral


Do You Know Where Your Face Is?

Pictures of you exist, right now, in all kinds of places. When you obtain a driver’s license, your mug is entered into a government database. Same goes for some gym memberships, or the ID card you use to enter your midtown office building. Most of all, though, we willingly send pictures of ourselves to friends and family, and post them to social media accounts.

You accept this level of exposure because it’s controlled. Your face can be on the internet, and so long as it’s where you intend for it to be, you’ll probably be fine. It’s only when people or entities you don’t know begin using your face for reasons you can’t control that privacy becomes a concern.

In recent years, a Vietnamese-Australian former model and current tech entrepreneur named Hoan Ton-That has challenged the notion that you have the right to control where your pictures are. He did so in collaboration with Richard Schwartz, a New York City politician, and with funding from such notable sources as Peter Thiel and the CEO of AngelList, by developing an app called Clearview AI.


Hoan Ton-That; photo source: The New York Times


Clearview AI

Clearview AI, according to its founder, is a little like Google Search, but for faces. The concept is simple. Clearview culls the internet for images of people’s faces, collects them in a database, and uses facial recognition software to match new images to those it already has.
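Clearview hasn't published how its matching works, but face search systems of this kind generally convert each face image into a numeric "embedding" vector, then compare a new face's embedding against every enrolled one and return the closest match above a similarity threshold. The sketch below illustrates that idea only; the four-dimensional vectors and names are made up, standing in for the high-dimensional embeddings a real system would compute from photos.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(query, database, threshold=0.9):
    """Return the closest enrolled identity, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy 4-dimensional "embeddings" standing in for real face vectors.
database = {
    "person_a": [0.9, 0.1, 0.0, 0.4],
    "person_b": [0.1, 0.8, 0.5, 0.2],
}

print(best_match([0.88, 0.12, 0.05, 0.41], database))  # → person_a
```

A production system would replace the toy vectors with embeddings from a trained neural network and the linear scan with an approximate nearest-neighbor index, but the search itself reduces to exactly this comparison.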

For a law enforcement officer, such a tool is invaluable. Consider, as one minor example, the story of Heather Reynolds, who was caught on camera stealing two grills and a vacuum from an Ace Hardware store in Clermont, Florida, last November. Typically, in a case like this, police can try cross-referencing the perpetrator’s image with known criminals in the area, or publicize the image to try to crowdsource the search. Reynolds’ case took far less effort, because Clearview AI was able not only to analyze her facial features, but also to match one of her tattoos visible in the security cam footage with how it appeared in one of her Facebook photos. She was locked up within a month.

Clearview and its proprietors cite cases like this, as well as the software’s use in much more serious criminal investigations, as explanations for its rapid rise in popularity. In January, after only two years on the market, the company told The New York Times that it had racked up over 600 clients in law enforcement. It’s worth noting that in interviews with Fox and BuzzFeed, Ton-That claimed that his company only did business with government and law enforcement, and was focused on North American clients.

February Leak

Late last month, those statements were revealed to be lies. An anonymous individual leaked the company’s client list to BuzzFeed. It included the FBI, the DOJ, and plenty of law enforcement agencies. It also included clients from repressive regimes such as the United Arab Emirates. And plenty of private companies made the list: Walmart, Wells Fargo, Equinox, Macy’s, the NBA, and more. Over 2,200 organizations were named in total.

Many of the organizations listed had only free trials, but every one of them used Clearview for at least one search. Some of the companies named in the leaks have since denied any ties with Clearview. Others claimed that certain employees used the software without authorization from management.

What makes this story newsworthy?

What Makes Clearview Different

Google has reverse image search, and Facebook can recognize you in a photo before you’re tagged. Staple tech brands use facial recognition AI throughout their most popular products.

So Clearview’s facial recognition isn’t particularly unique, and it’s not necessarily more powerful than other software out in the open market. What separates Clearview is its means of gathering input data.

In order to build an effective facial recognition machine, you need lots of data: lots of faces, in this case, captured at different angles, with different hairstyles, under different lighting, which can be referenced when a new image of someone like Heather Reynolds is uploaded for analysis.

Clearview gets all its images (over three billion, according to the company) by scraping social media platforms like Facebook, YouTube and Instagram. And it doesn’t ask nicely before doing so. Facebook, Google and Twitter have all sent Clearview cease-and-desist letters. Just this month, Apple suspended the company’s app from the App Store for violating its terms of use. But Clearview claims a First Amendment right to what it deems public information. It can use that argument because today’s legal framework surrounding facial recognition technologies is slim to none. And because Clearview has support from law enforcement around the country, it has skirted any major legal recourse.

Hacker, or Hacked?

At this point, it’s unclear whether Clearview’s business model is legal or not. Until companies like Clearview are brought to court, the U.S. legal system will remain inadequate in addressing such threats. The company will likely continue to operate, profitably, until that time. And citizens, who know far less than they think about where their data is and who can use it, will be the product sold to corporations and law enforcement, who may do with it what they wish.

Clearview’s client list was stolen just a few days back. But long before that, Clearview stole pictures of millions of citizens, citizens who to this day largely don’t even know those copies exist.

If you’re reading this, there’s a good chance your face is in their database right now.


About the author: 
Nathaniel Nelson writes the internationally top-ranked “Malicious Life” podcast on iTunes, hosts programs on blockchain and SCADA security, and contributes to AI and emerging tech blogs.