THE DELHI Police, which has used facial recognition technology to identify those accused in major clashes in the capital over the past two years, considered a match to be “positive” if it had an accuracy rate of 80 per cent, according to records obtained by digital rights group Internet Freedom Foundation (IFF) under the Right to Information (RTI) Act.
The records, shared under two RTI requests and reviewed by The Indian Express, throw light for the first time on how the Delhi Police uses facial recognition matches during investigations.
Facial recognition technology essentially maps, analyses and confirms the identity of a face in a photograph or video, typically using computer-generated filters to transform images into numerical expressions that can be compared. The key parameters include the distance between the eyes and the distance from forehead to chin.
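The comparison step described above can be sketched in a few lines: once a face has been reduced to a numerical expression (an “embedding”), two faces are compared by measuring how close their vectors are. The four-dimensional vectors below are purely illustrative; real systems use embeddings with hundreds of dimensions produced by trained neural networks.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors.

    Returns a value close to 1.0 when the vectors (and hence, in a
    real system, the faces) are very similar.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for a probe photo and a gallery photo.
probe = [0.61, 0.20, 0.43, 0.66]
gallery = [0.60, 0.22, 0.41, 0.65]

score = cosine_similarity(probe, gallery)
```

This is only a conceptual sketch of vector comparison, not the algorithm any particular police system uses; deployed systems differ in how they extract features and score matches.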
So far, the Delhi Police have used the technology to identify people suspected to have been involved in the 2020 riots, the clashes that broke out at Red Fort in 2021 during the farmers’ protest and the Jahangirpuri riots earlier this year.
The RTI records show that the police carry out “empirical investigation” for facial matches that have an accuracy of over 80 per cent before initiating any legal action. Where the accuracy is less than 80 per cent, the police treat the match as a “false positive result”, which is nonetheless subject to “due verification with other corroborative evidence”, the records show.
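The decision rule the records describe amounts to a simple threshold on the match score. The sketch below illustrates that rule as reported; the function name and score scale are assumptions for illustration, not part of the police's disclosed system.

```python
THRESHOLD = 0.80  # the 80 per cent cut-off described in the RTI records

def classify_match(confidence):
    """Label a match score against the reported 80% threshold.

    Per the records: scores above the threshold are treated as
    "positive" and followed by "empirical investigation"; lower
    scores are labelled "false positive" but may still be checked
    against other corroborative evidence.
    """
    return "positive" if confidence > THRESHOLD else "false positive"

print(classify_match(0.85))  # positive
print(classify_match(0.42))  # false positive
```

Note that, as the activists quoted below point out, both branches of this rule can keep a person under investigation; the threshold changes the label, not necessarily the outcome.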
According to experts, this suggests that even people with lower match accuracy remain on the police radar. “Even if the technology does not give a sufficient enough result, the Delhi Police could continue to investigate anyone who may have gotten a very low score. Thus, any person who looks even slightly similar could end up being targeted, which could result in targeting communities who have been historically targeted,” said Anushka Jain, associate counsel, IFF.
To put the 80 per cent threshold in context, the American Civil Liberties Union (ACLU) ran a test in 2018 on Amazon’s facial recognition system, Rekognition, keeping the threshold at 80 per cent. The system ended up falsely associating 28 images of US Congress members with mugshots in a criminal database.
In response, Amazon had said that “while 80 per cent confidence is an acceptable threshold for photos of hot dogs, chairs, animals or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty”.
Yet the 80 per cent figure also reflects how much the facial recognition systems deployed by the Delhi Police have improved.
In 2018, the Delhi High Court had reportedly suggested that the police use facial recognition software to find some women who went missing from an illegal placement agency. However, the police informed the court that the software returned only a two per cent match, which was “not good”.
The IFF received RTI responses from the Delhi Police on the issue in June after filing second appeals with the Central Information Commission (CIC). The police had earlier declined to reveal any information in response to RTI requests filed in 2020 and 2021, according to the IFF.
Privacy activists say it is unclear why the Delhi Police have chosen 80 per cent as the threshold between positive and false positive matches. The Delhi Police did not respond to repeated queries emailed by The Indian Express.
The activists contend that the usage of technologies like facial recognition in the absence of a basic data protection framework in the country could potentially pose risks for communities that have been historically targeted.
In the US, at least three individuals have been wrongfully arrested since 2019 after being misidentified by facial recognition technology. In all three cases, the individuals were men from the African-American community, which has a history of being wrongfully targeted by law enforcement agencies in the country.
As such, at least 13 cities in the US, including San Francisco and several cities in Massachusetts, have banned their police from using facial recognition technology. In 2020, Microsoft and Amazon paused sales of their facial recognition systems to police after questions were raised about the technology’s reliability.
In their RTI response, the Delhi Police declined to reveal whether they have made any arrests of alleged perpetrators using the technology, citing a section of the RTI Act that exempts them from disclosing information that could impede investigations.
On August 11, The Indian Express reported that the Delhi Police used the technology to “confirm” the presence of several accused at the site of the Jahangirpuri riots.
In one of their RTI responses, the Delhi Police said they consider citizens’ privacy to be “sacrosanct” but conceded that they are yet to carry out any analysis of how the usage of technologies like facial recognition could potentially impact privacy.
Carrying out privacy impact assessments is a requirement in some other parts of the world, including under the European Union’s General Data Protection Regulation (GDPR) — but it is currently not necessary under Indian laws.
“The need for these impact assessments is necessary because these are extremely new and untested technologies that are being used by law enforcement agencies, which could lead to irreversible effects and damage to a person,” IFF’s Jain said.