Met police deploy facial-recognition technology in Oxford Circus

London police have revealed the results of their latest deployment of live facial-recognition (LFR) technology in Oxford Circus, which resulted in three arrests and roughly 15,600 people’s biometric information being scanned.
The Metropolitan Police Service (MPS) said its LFR deployment on Thursday 7 July outside Oxford Circus was part of a long-term operation to tackle serious and violent crime in the borough of Westminster.
Those arrested included a 28-year-old man wanted on a warrant for assault of an emergency worker; a 23-year-old woman wanted for possession with intent to supply Class A drugs; and a 29-year-old man wanted for possession with intent to supply Class A drugs and for failing to appear in court.
All three were engaged and detained by officers following alerts from the vehicle-mounted LFR system, which enables police to identify people in real time by scanning their faces as they walk by and matching them against a database of facial images, or “watchlist”.
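To illustrate the general technique, the short Python sketch below shows how such a system might compare a face captured from the video feed against pre-computed numerical representations (“embeddings”) of watchlist images. It is a minimal illustration, not the MPS’s implementation: the similarity threshold, function names and use of cosine similarity are all assumptions made for clarity.

import numpy as np

SIMILARITY_THRESHOLD = 0.6  # assumed operating point; real systems tune this

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # How closely two face embeddings point in the same direction (1.0 = identical)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(face_embedding, watchlist):
    # `watchlist` maps a record ID to the embedding of an enrolled watchlist image.
    # Returns the best-matching ID above the threshold, or None (no alert raised).
    best_id, best_score = None, SIMILARITY_THRESHOLD
    for record_id, enrolled_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, enrolled_embedding)
        if score > best_score:
            best_id, best_score = record_id, score
    return best_id

In deployed systems, an alert from this matching step is meant to be reviewed by a human operator before officers engage, a point that becomes relevant later in this article.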
According to the post-deployment review document shared by the MPS, the deployment outside Oxford Circus – one of London’s busiest Tube stations – generated four match alerts, all of which it said were “true alerts”. It also estimates that the system processed the biometric information of around 15,600 people.
However, only three of the alerts led to police engaging, and subsequently arresting, people. When Computer Weekly contacted the MPS for clarification about the fourth alert, the force said the LFR operators and engagement officers were unable to locate the individual within the crowd.
The last time police deployed LFR in Oxford Circus, on 28 January 2022 – the day after the UK government relaxed mask-wearing requirements – the system generated 11 match alerts, one of which it said was false, and scanned the biometric information of 12,120 people. This led to seven people being stopped by officers, and four subsequent arrests.
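Taking the force’s published figures at face value, a quick back-of-the-envelope calculation puts the two deployments side by side (the scan counts are MPS estimates, not audited numbers):

# Alert rates per person scanned, from the MPS's published figures
deployments = {
    "7 July 2022": {"scanned": 15_600, "alerts": 4, "false_alerts": 0},
    "28 January 2022": {"scanned": 12_120, "alerts": 11, "false_alerts": 1},
}
for name, d in deployments.items():
    print(f"{name}: {d['alerts'] / d['scanned']:.4%} of people scanned "
          f"triggered an alert ({d['false_alerts']} false)")
# 7 July 2022: 0.0256% of people scanned triggered an alert (0 false)
# 28 January 2022: 0.0908% of people scanned triggered an alert (1 false)

In other words, fewer than one in a thousand passers-by triggered an alert at either deployment, while each deployment still scanned five-figure numbers of people.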
Commenting on the most recent deployment, Griff Ferris, a senior legal and policy officer at non-governmental organisation Fair Trials, who was present on the day, said: “The police’s operational use of facial-recognition surveillance at deployments across London over the past six years has resulted in countless people being misidentified, wrongfully stopped and searched, and even fingerprinted. It has also clearly been discriminatory, with black people often the subject of these misidentifications and stops.
“Despite this, the Metropolitan police, currently without a commissioner, in special measures, and perpetrators of repeated incidents evidencing institutional sexism and racism, are still trying to pretend this is a ‘trial’. Facial recognition is an authoritarian surveillance tool that perpetuates racist policing. It should never be used.”
In response to Computer Weekly’s questions about whether the MPS has recreated operational conditions in a controlled environment without the use of real-life custody images, it said: “The MPS has undertaken significant diligence in relation to the performance of its algorithm.” It added that part of this diligence is in continuing to test the technology in operational conditions.
“Alongside the operational deployment, the Met tested its facial-recognition algorithms with the National Physical Laboratory [NPL]. Volunteers of all ages and backgrounds walk past the facial recognition system… After this, scientific and technology experts at the NPL will review the data and produce a report on how the system works. We will make these findings public once the report has been completed,” it said.
In the “Understanding accuracy and bias” document on the MPS website, it added that algorithmic testing in controlled settings can only take the technology so far, and that “further controlled testing would not accurately reflect operational conditions, particularly the numbers of people who need to pass the LFR system in a way that is necessary to provide the Met with further assurance”.
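For context on what such testing typically measures, the sketch below computes a false-match rate per demographic group from hypothetical trial data. It illustrates the kind of metric accuracy-and-bias reports commonly use; it is not the NPL’s methodology, which had not been published at the time of writing.

from collections import defaultdict

def false_match_rate_by_group(trials):
    # `trials` is an iterable of (group, on_watchlist, alerted) tuples from a
    # controlled test. A false match is an alert for someone not on the watchlist.
    false_matches = defaultdict(int)
    non_watchlist_passes = defaultdict(int)
    for group, on_watchlist, alerted in trials:
        if not on_watchlist:
            non_watchlist_passes[group] += 1
            if alerted:
                false_matches[group] += 1
    return {g: false_matches[g] / n for g, n in non_watchlist_passes.items()}

Materially different rates between groups in such a test would be evidence of the demographic bias that critics of the technology allege.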
In June 2022, the Ryder Review – an independent legal review on the use of biometric data and technologies, which primarily looked at their deployment by public authorities – found that the current legal framework governing these technologies is not fit for purpose, has not kept pace with technological advances, and does not make clear when and how biometrics can be used, or the processes that should be followed.
It also found that the current oversight arrangements are fragmented and confusing, and that the current legal position does not adequately protect individual rights or confront the very substantial invasions of personal privacy that the use of biometrics can cause.
“My independent legal review clearly shows that the current legal regime is fragmented, confused and failing to keep pace with technological advances. We urgently need an ambitious new legislative framework specific to biometrics,” said Matthew Ryder QC of Matrix Chambers, who conducted the review. “We must not allow the use of biometric data to proliferate under inadequate laws and insufficient regulation.”
Fraser Sampson, the UK’s current biometrics and surveillance camera commissioner, said in response to the Ryder Review: “If people are to have trust and confidence in the legitimate use of biometric technologies, the accountability framework needs to be comprehensive, consistent and coherent. And if we’re going to rely on the public’s implied consent, that framework will have to be much clearer.”

The lack of legislation surrounding facial recognition in particular has been a concern for a number of years. In July 2019, for example, the UK Parliament’s Science and Technology Committee published a report identifying the lack of a legal framework governing the technology, and called for a moratorium on its use until one was in place.
More recently, in March 2022, the House of Lords Justice and Home Affairs Committee (JHAC) concluded an inquiry into the use of advanced algorithmic technologies by UK police, noting that new legislation would be needed to govern the police force’s general use of these technologies (including facial recognition), which it described as “a new Wild West”.
The government, however, has largely rejected the findings and recommendations of the inquiry, claiming there is already “a comprehensive network of checks and balances” in place.
While both the Ryder Review and JHAC suggested implementing moratoria on the use of LFR – at least until a new statutory framework and code of practice are in place – the government said in its response to the committee that it was “not persuaded by the suggestion”, adding: “Moratoriums are a resource heavy process which can create significant delays in the roll-out of new equipment.”
Asked by Computer Weekly whether the MPS would consider suspending its use of the technology, it cited this government response, adding: “The Met’s use of facial recognition has seen numerous individuals arrested now for violent and other serious offences. It is an operational tactic which helps keep Londoners safe, and reflects our obligations to Londoners to prevent and detect crime.”
Before it can deploy facial-recognition technology, the MPS must meet a number of requirements related to necessity, proportionality and legality.
For example, the MPS’s legal mandate document – which sets out the complex patchwork of legislation the force claims allows it to deploy the technology – says the “authorising officers need to decide the use of LFR is necessary and not just desirable to enable the MPS to achieve its legitimate aim”.
In response to questions about how the force decided the 7 July deployment was necessary, the MPS claimed: “The deployment was authorised on the basis of an intelligence case and operational necessity to deploy, in line with the Met’s LFR documents.”
In terms of the basis on which the deployment was deemed proportionate, it added: “The proportionality of this deployment was assessed giving due regard to the intelligence case and operational necessity to deploy, whilst weighing up the impact on those added to the watchlist and those who could be expected to pass the LFR system.”
According to the MPS review document, the watchlists for the 7 July deployment contained 6,699 images; the system scanned around 15,600 people’s biometric information and generated four alerts, leading to three arrests.
The justifications outlined to Computer Weekly by the MPS regarding necessity and proportionality are exactly the same as those provided after its last Oxford Circus LFR deployment in late January 2022.
The MPS’s Data Protection Impact Assessment (DPIA) also says that “all images submitted for inclusion on a watchlist must be lawfully held by the MPS”.
In 2012, a High Court ruling found the Metropolitan Police’s retention of custody images – which are used as the primary source of watchlists – to be unlawful, as unconvicted people’s information was kept in the same way as that of people who were ultimately convicted. It also deemed the minimum six-year retention period to be disproportionate.
Addressing the Parliamentary Science and Technology Committee on 19 March 2019, then-biometrics commissioner Paul Wiles said there was “very poor understanding” of the retention period surrounding custody images across police forces in England and Wales.
He further noted that, while both convicted and unconvicted people could apply to have their images removed – with the presumption being that the police would do this if there was no good reason not to – there was “little evidence it was being carried out”.
“I’m not sure that the legal case [for retention] is strong enough, and I’m not sure that it would withstand a further court challenge,” he said.
Asked how it had resolved this issue of lawful retention, and whether it could guarantee every one of the 6,699 images in the 7 July watchlists were held lawfully, the MPS cited section 64A of the Police and Criminal Evidence Act 1984, which gives police the power to photograph people detained in custody and to retain that image.
It added that the custody images are also held in accordance with the Management of Police Information (MoPI) Authorised Professional Practice (APP) guidelines.
In July 2019, a report from the Human Rights, Big Data & Technology Project, based at the University of Essex Human Rights Centre, marked the first independent review into trials of LFR technology by the Metropolitan Police. It highlighted a discernible “presumption to intervene” among police officers using the technology, meaning they tended to trust the outcomes of the system and engage individuals it said matched the watchlist, even when they did not.
On how it has resolved this issue, the MPS said it had implemented additional training for officers involved in facial-recognition operations.
“This input is given prior to every LFR deployment to ensure officers are aware of the current system’s capabilities. LFR is a tool that is used to help achieve the wider objectives of the policing operation; it does not replace human decision-making,” it said. “Officers are reminded during the training of the importance of making their own decisions on whether to engage with a member of the public or not.”