Commentary

Over the weekend, another dystopian-sounding facial recognition application hit the headlines. This time it was Clearview AI, a little-known startup providing identity-matching software to law enforcement agencies in the United States.

Stories about how facial recognition is being used by law enforcement aren't that surprising these days. But the Clearview AI revelations, published by the New York Times, made the tech industry sit up. Here was a company that, even in a world of increasingly invasive facial recognition applications, had crossed a line. It had scraped the open Web, collected billions of photos of people, and built an app that lets users match their own picture of a person against that vast database, complete with links to the Web pages where those photos appeared.

This kind of application — breathtaking in scale, deeply invasive in implementation — has long been technically possible; it just wasn’t something technology companies were keen to do (or at least, to be seen as doing).
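To see why it has long been possible, consider that modern face recognition reduces a face to a fixed-length numeric vector, so matching a query photo against a database of scraped images amounts to a nearest-neighbour search over those vectors. The sketch below is a toy illustration only, built on the open-source face_recognition library; the photo paths, URLs, and tolerance value are hypothetical placeholders, not details of Clearview AI's system.

```python
# Toy illustration: encode faces as 128-dimensional vectors, then find
# which stored photos are close to a query face. All paths and URLs
# below are hypothetical placeholders.
import face_recognition
import numpy as np

# Hypothetical corpus of photos scraped from the Web, each paired with
# the URL of the page where it appeared.
corpus = [
    ("photos/0001.jpg", "https://example.com/profile/alice"),
    ("photos/0002.jpg", "https://example.com/event/gala-2019"),
]

# Build the database: one encoding per face detected in each photo.
encodings, sources = [], []
for path, url in corpus:
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        encodings.append(encoding)
        sources.append(url)

def match(query_path, tolerance=0.6):
    """Return source URLs whose faces fall within `tolerance` of the query."""
    query_image = face_recognition.load_image_file(query_path)
    query_encodings = face_recognition.face_encodings(query_image)
    if not query_encodings:
        return []  # no face found in the query photo
    # Euclidean distance between the query encoding and every stored one.
    distances = face_recognition.face_distance(np.array(encodings), query_encodings[0])
    return [sources[i] for i, d in enumerate(distances) if d <= tolerance]

print(match("query.jpg"))
```

At the billions-of-photos scale reported for Clearview AI, a real system would replace this linear scan with an approximate nearest-neighbour index, but the underlying idea is the same, which is why the barrier was never technical.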

Until recently, conversations about facial recognition technology haven't usually gone much further than whether we should or shouldn't ban it. There has been no middle ground: supporters are on the side of law and order, whatever that takes; opponents are radical leftists with a disregard for public safety, or Luddites opposed to technological progress. The many different choices made in designing and deploying the various tools and methods that fall under the umbrella of "facial recognition" (some of them sensible, others careless, some downright ugly) tend to get lost along the way.

Read the full article on Inside Story.

Publication Details
Publication Year: 2020