London’s criminals would be best advised to avoid the outer borough of Croydon this summer as police roll out the UK’s first permanently installed live facial recognition cameras on its suburban streets.
The enthusiasm of the British police for adopting advanced biometric techniques to tackle crime has fuelled concern that the UK could unwittingly slide into becoming a surveillance state.
The UK was one of the most prolific adopters of CCTV, the video surveillance systems that have now spread from the public sphere to private homes.
With ever more sophisticated monitoring and tracking techniques coming down the line, there is a threat of a dystopian future in which individuals would have to assume that someone is watching them all the time.
Expanding surveillance: a legal grey area
A recent report said that use of existing surveillance methods was rapidly expanding beyond the police, while a new generation of biometric technologies was claimed to be capable of remotely inferring people’s emotions, attention levels or even truthfulness.
The report by the UK’s Ada Lovelace Institute called for urgent, risk-based legislation to establish clear rules to govern the use of new technologies which were rapidly expanding within what it called a legal grey area.
The institute, set up in 2018 to ensure that data and AI work for the benefit of individuals and society, said the current patchwork of governance was creating legal uncertainty and putting fundamental rights at risk.
A joint investigation by Liberty Investigates and the Guardian newspaper, published in the same week, meanwhile found that accelerating police use of live facial recognition was becoming the norm in England and Wales.
It quoted internal documents showing the number of faces scanned doubled to nearly five million in the last year. Although there was no specific legislation governing facial recognition use, the state was nevertheless investing in expanding police access to the full range of its image stores, including passport and immigration databases.
Are new technologies in fighting crime worth the risk?
There is nothing inherently sinister in the desire of Britain’s under-funded and under-staffed police forces to use new technologies to fight crime. Opinion is mixed, however, on whether the perceived benefits are worth the risk to civil liberties.
Campaigners fear mass surveillance undermines the presumption of innocence that underlies common law. Facial recognition works by capturing live footage which software scans in real time and compares with individuals on police watchlists.
The British have traditionally been under no obligation to disclose their identity to the authorities unless there was a reasonable suspicion they were involved in a crime. With facial recognition, everyone identified is a suspect until proved otherwise.
A Green party member of the London Assembly, Zoe Garbett, commenting on a trial in Croydon in which 128,518 facial scans led to just 133 arrests, said: “This means that over 120,000 people in Croydon were tracked by the police for no reason at all.”
She also echoed widely expressed fears that a focus on poorer, ethnic minority areas would lead to the further over-policing of black citizens. Research has shown that recognition software is less accurate in identifying women and people of colour.
Rebecca Vincent of the civil rights group Big Brother Watch was even more blunt, commenting on the Croydon experiment: “It’s time to stop this steady slide into a dystopian nightmare and halt all use of LFR [live facial recognition] technology until legislative safeguards are introduced.”
Whatever the pros and cons of the police’s move to high-tech crime fighting, it may turn out to be just the tip of the surveillance iceberg.
Voluntary and involuntary monitoring
In other ways, both voluntary and involuntary monitoring are becoming part of daily life. Growing numbers of absent householders now fretfully check their front doors from their phones, as suppliers tap new markets for CCTV.
The IPPR, a UK charity that campaigns for a fairer society, reported last month that new technologies were radically transforming the surveillance of workers. Facial recognition, biometric tracking and keystroke monitoring had been widely adopted without employees’ consent. Those in lower-paid and lower-skilled jobs were most likely to be watched.
Anxious parents are meanwhile busy geo-tracking their offspring on the increasingly rare occasions they allow them out on their own. Polling shows half of Britons think that’s a good thing.
In 2018, the digital technology commentator Tim Unwin wrote a spoof article in which the microchipping of all children from the age of eight weeks becomes a legal requirement in April 2025.
In real life, the proposed measure applied to pet dogs. Unwin explained he wanted “to make us all think about what the future will hold, whether we want a future like this, and what we should do about the seemingly inevitable path towards all humans becoming cyborgs.”
Mass surveillance of the citizenry used to be thought of as the preserve of autocracies such as China. It seems, however, that democracies like Britain are just as keen on monitoring the public, while insisting it’s for their own good.
Somewhat ironically, China’s cyberspace regulator has ruled that individuals should not be forced to verify their identity using facial recognition, an increasingly common practice at hotel check-ins and private gated communities.
China is a pioneer and supplier of surveillance technology and employs it to identify not only criminals but also dissidents and ethnic minorities. As developers come up with ever more sophisticated means to track us, let’s not go down that route.