“I’ve been covering privacy for ten years, and I know that technology like this in public hands is a nightmare scenario.”

Kashmir Hill, tech reporter for The New York Times, on The Daily

In late January 2020, Clearview AI made headlines across the country as Silicon Valley’s newest shady scheme. The startup has built a tool that extends beyond anything even the tech giants have attempted: a facial recognition database, marketed to U.S. law enforcement, containing more than three billion photos scraped from Facebook, YouTube, Venmo, and other social media websites.

Initially, Clearview AI maintained that its service would be used exclusively as a tool for law enforcement agencies. The technology can help track down dangerous individuals such as child molesters, murderers, and suspected terrorists. The New York Times reported in February 2020 that Clearview AI had helped identify numerous victims of child abuse in exploitative videos posted on the internet.

However, Clearview AI’s client list extends beyond law enforcement, including major names such as Best Buy, the NBA, and Macy’s. Despite a promise to terminate its contracts with private corporations, the startup has continued to search for ways to expand its reach beyond law enforcement agencies.

In the wrong hands

For many reasons, Clearview AI is an unprecedentedly dangerous technology, especially when applied negligently. But the concern extends beyond its capacity to erode user privacy. Women from a wide range of backgrounds have reacted strongly to the idea of Clearview AI and similar facial recognition systems.

The major concern? Stalking.

“So imagine this technology in public hands. It would mean that if you were at a bar and someone saw you and was interested in you, they could take your photo, run your face through the app, and then it pulls up all these photos of you from the internet. It probably takes them back to your Facebook page. So now they know your name, they know who you’re friends with, they can Google your name, they can see where you live, where you work, maybe how much money you make…”

Kashmir Hill

In October 2019, The Guardian published a story about a Japanese pop star who was assaulted by a man who had used a social media post, along with Google Street View, to pinpoint her location and stalk her. Clearview AI could potentially provide far easier access to the same kind of information.

The idea of a database easily accessible to the general public is immediately terrifying, especially to women who need to move discreetly for their safety. “Once again, women’s safety both online and in real life has come second place to the desire of tech startups to create — and monetize — ever more invasive technology,” says Jo O’Reilly, a privacy advocate for UK-based ProPrivacy.

Online stalking is increasingly harmful to both women and men. Digital abuse complaints made up nearly 15% of calls to the National Domestic Violence Hotline in 2018.

A takeaway

The rapid spread of startup tech services, especially AI services, is both astonishing and exciting. However, that spread comes at a cost: laws and regulations have failed to keep pace with the expanding technology. Until services such as Clearview are adequately regulated, how much of a threat do they pose to society? And an even better question: how do we account for the groups that are disproportionately affected by these new technologies?
