San Francisco has become the first U.S. city to ban the use of facial recognition technology (FRT).
The technology identifies or verifies the identity of an individual from their face, and can be used to pick people out of photos, video footage or real-time camera feeds.
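To make that concrete, here is a minimal sketch of how such a system can work under the hood, using the open-source Python library face_recognition (not the software used by any agency discussed here); the image filenames are placeholders:

import face_recognition

# Enrolment: the library maps each face to a 128-dimensional
# numerical encoding (a "faceprint").
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Identification: encode the face(s) found in a new photo.
unknown_image = face_recognition.load_image_file("unknown_person.jpg")
unknown_encodings = face_recognition.face_encodings(unknown_image)

# Faces whose encodings are close together (Euclidean distance below
# a tolerance, 0.6 by default) are treated as the same person.
for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match={match}, distance={distance:.2f}")

The same distance comparison underpins both one-to-one verification (is this the claimed person?) and one-to-many identification (who, if anyone, is this?).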
Campaigners say the ban is a victory for civil rights, privacy and the rule of law, and that the authorities have taken a stand affirming a basic right to anonymity and privacy while out in public. City leaders ruled that the technology was “incompatible with a healthy democracy”.
Those in favour of the move said the technology as it exists today is unreliable and represents an unnecessary infringement on people’s privacy and liberty.
Local agencies, such as the city’s transport authority and law enforcement, will not be allowed to use facial recognition.
Additionally, any plans to buy new surveillance technology must now be approved by city administrators.
“In the mad dash towards AI and analytics, we often turn a blind eye to their long-range societal implications which can lead to startling conclusions,” said Kon Leong, CEO of ZL Technologies.
Police forces in the UK have now been urged to abandon their pursuit of facial recognition software and follow the lead of San Francisco.
What is clear is that a new battle between privacy and surveillance is brewing. The technology is advancing at a rapid rate, but the software is often deployed and used before its full consequences are understood.
The Home Office has previously said facial recognition can be an ‘invaluable tool’ in fighting crime.
But campaigners say that there is no legislation covering it in the UK, and that it is being used without regulation.
Several UK police forces have been trialling controversial new facial recognition technology, including automated systems which attempt to identify the faces of people in real time as they pass a camera.
Metropolitan Police
The Metropolitan Police have been trialling the use of facial recognition in different parts of London, using cameras to scan passers-by to find matches on watch lists.
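In principle, a live watch-list scan of this kind can be built from the same building blocks: detect faces in each camera frame, encode them, and compare the encodings against a pre-computed watch list. The sketch below uses the open-source face_recognition and OpenCV libraries, not the Met’s actual system, and the watch-list entry is a placeholder:

import cv2
import face_recognition

# Placeholder watch list: one pre-computed encoding and a name.
watchlist_image = face_recognition.load_image_file("watchlist_person.jpg")
watchlist_encodings = [face_recognition.face_encodings(watchlist_image)[0]]
watchlist_names = ["Person of interest"]

video = cv2.VideoCapture(0)  # default camera stands in for a street camera
while True:
    ok, frame = video.read()
    if not ok:
        break
    # OpenCV delivers BGR frames; the library expects RGB.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    locations = face_recognition.face_locations(rgb)
    for encoding in face_recognition.face_encodings(rgb, locations):
        distances = face_recognition.face_distance(watchlist_encodings, encoding)
        best = distances.argmin()
        if distances[best] < 0.6:  # default matching tolerance
            print(f"Possible match: {watchlist_names[best]} "
                  f"(distance {distances[best]:.2f})")
video.release()

The threshold is a tunable trade-off: lowering it reduces false matches but makes genuine matches easier to miss.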
Earlier this year, a man was fined £90 for refusing to show his face to police trialling the new facial recognition software.
The man pulled his jumper up above his chin as he walked past Met Police officers trialling Live Facial Recognition software in east London.
A TV crew filmed the moment as officers swooped on the man, told him to ‘wind his neck in’, then handed him the hefty fine.
Those against the move argue the cameras are a breach of privacy. Privacy rights group Big Brother Watch – which was protesting against the use of the cameras on the day – was also filmed telling an officer: ‘I would have done the same.’
Police said they made three arrests thanks to the cameras on the day in question, and a total of eight people, including two 14-year-old boys, were arrested over the course of the trial.
Ethnicity
Critics of facial recognition technology say that black and minority ethnic people could be falsely identified and face questioning, because police have failed to test how well their systems deal with non-white faces. Campaigners say the tech has too many problems to be used widely.
Documents from the police, Home Office and university researchers show that police are aware that ethnicity can have an impact on such systems but have failed on several occasions to test this.
The ability of facial recognition software to cope with black and ethnic minority faces has proved a key concern for those worried about the technology, who claim the software is often trained on predominantly white faces.
Minutes from a police working group reveal that the UK police’s former head of facial recognition knew that skin colour was an issue. At an April 2014 meeting, Durham Police Chief Constable Mike Barton noted “that ethnicity can have an impact on search accuracy”.
He asked CGI, the Canadian company managing the police’s facial image database, to investigate the issue, but subsequent minutes from the working group do not mention a follow-up.
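The kind of check critics say has been missing is straightforward to describe: run the system against faces where the ground truth is known, and compare error rates across demographic groups. A hedged sketch, using invented placeholder records rather than any real police data:

from collections import defaultdict

# Each record: (demographic group, system said "match", truly a match).
# These rows are illustrative placeholders only.
results = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, True), ("group_b", False, False), ("group_b", False, False),
]

false_matches = defaultdict(int)  # system flagged a match that wasn't real
non_matches = defaultdict(int)    # all cases where no true match existed

for group, predicted, actual in results:
    if not actual:
        non_matches[group] += 1
        if predicted:
            false_matches[group] += 1

# A well-tested system should show similar false match rates across
# groups; a large gap is exactly the bias campaigners warn about.
for group in sorted(non_matches):
    print(f"{group}: false match rate = {false_matches[group] / non_matches[group]:.0%}")

With the placeholder data above, group_a shows a 50% false match rate against 0% for group_b, the sort of disparity such a test is designed to surface.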
Apple
Apple introduced its new facial recognition technology in 2017. Known as Face ID, Apple’s facial recognition system unlocks devices with a glance.
According to Forbes, in 2018 the FBI used Face ID to gain access to the data on a suspect’s phone. Unlike with a passcode, active consent is not required to access information protected by facial recognition; the ‘key’ is in our facial features.
China
Facial recognition technology is already part of daily life in China.
About 200 million surveillance cameras are already scattered around the country, used to track big spenders in luxury retail stores, catch identity thieves, prevent violent crime and find criminals. In fact, nearly every one of its 1.4 billion citizens is in China’s facial recognition database.
What is clear is that more questions need to be asked about the use of facial recognition.
We need to be aware that facial recognition is advancing rapidly, that privacy laws still apply, and that we should know our rights. Facial recognition is another area where privacy and expectations are difficult to reconcile. Are our faces personal property, or public data? The answer depends on how that data is acquired and used, as well as on the consent of, and impact on, the individual.