On Tuesday, San Francisco barred local authorities from using facial recognition technology, arguing its risk to “civil rights and liberties substantially outweighs its purported benefits.”

San Francisco’s airport, private businesses, and individuals can still use the technology.

As Somerville, Massachusetts, and Oakland, California, weigh similar bans, proponents of the technology argue that facial recognition helps police swiftly identify and arrest suspects.

Microsoft developed facial recognition technology for US Immigration and Customs Enforcement; Amazon sells its own ‘Rekognition’ software to police, saying it improves “quality of life.”

But the technology, trained largely on images of white men, is far less accurate at identifying African American and female faces, and could deepen existing gender and racial biases.

MIT researchers found Amazon’s ‘Rekognition’ mistook nearly one in five women for men, and almost one in three darker-skinned women for men. Earlier studies found that facial recognition software from IBM, Microsoft, and Megvii identified white men’s faces far more accurately than those of women and people with darker skin.

Data and software research groups, activists, lawmakers, and even industry leaders agree that AI requires regulation, especially before it is integrated into public services like healthcare and law enforcement.

Yet the US has no federal laws regulating AI’s use in citizen surveillance.

How should governments regulate artificial intelligence? By enforcing individual privacy rights, mandating corporate transparency, or taking public ownership of key projects?

Header image credit: Getty.