Adrian Timberlake examines bias in facial recognition technology, public trust in the police and whether widespread cameras may help to reduce discrimination.
Britain is not at war with new technologies, but with both real and perceived power imbalances between authorities and the public. Those in society who already feel discriminated against fear that facial recognition will hand even more power to authorities, but what if these systems could be used to ensure that the scales of power are balanced?
The apparent ‘bias’ of facial recognition in mismatching people with darker skin and women more often than those with lighter skin and males has been widely reported. Although experts have yet to agree on an explanation, one reason could be that the technology has been trained and tested on people with lighter skin more often and, therefore, has ‘learnt’ to recognise lighter-skinned faces better than darker-skinned faces.
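To make that explanation concrete, below is a deliberately toy sketch in Python using NumPy and scikit-learn. The two synthetic ‘groups’, the feature counts and the signal strengths are all invented for illustration and merely stand in for real face data; it is a sketch of the imbalance effect, not anyone’s actual face-recognition pipeline. A model trained on an 86/14 split between two groups whose distinguishing signal differs tends to make more errors on the under-represented group.

```python
# Toy illustration of training-data imbalance, not real face recognition.
# Each synthetic "group" carries its class signal in a different feature,
# standing in for the way facial characteristics vary between groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, signal_dim):
    """n samples of 8 noisy features; the class label only shifts `signal_dim`."""
    y = rng.integers(0, 2, n)               # binary match / non-match label
    X = rng.normal(0.0, 1.0, (n, 8))        # background noise features
    X[:, signal_dim] += y * 1.5             # group-specific signal feature
    return X, y

# Train on an 86/14 split, mirroring the population figures quoted below.
Xa, ya = make_group(8600, signal_dim=0)     # over-represented group
Xb, yb = make_group(1400, signal_dim=1)     # under-represented group
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate each group on its own balanced held-out set: the model has
# mostly learnt the majority group's signal, so the minority fares worse.
for name, dim in [("majority group", 0), ("minority group", 1)]:
    Xt, yt = make_group(5000, signal_dim=dim)
    print(f"{name}: error rate = {1 - model.score(Xt, yt):.3f}")
```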
In the UK, this explanation would make sense. A Gov.uk report published in August 2018 states that, according to the most recent Census, “the total population of England and Wales was 56.1 million, and 86 per cent of the population was white”. This means that in facial recognition trials surveying the general public, if we were to generalise, roughly 86 per cent of the people the camera ‘saw’ would have been white.
The debate over whether facial recognition could worsen racism ultimately points to a societal problem – Britain is less diverse than most people would like to believe, and ethnic minority groups and women remain under-represented in positions of authority, such as in government and the police.
The House of Commons Library reported that, as of September 2019, 52 members of the House of Commons, or just over eight per cent, were from non-white ethnic backgrounds (https://researchbriefings.parliament.uk/ResearchBriefing/Summary/SN01156). However, “if the ethnic make-up of the House of Commons reflected that of the UK population, there would be about 90 non-white members.”
Additionally, in September 2019 the number of women members of the House of Commons was reported to be at “an all-time high”, but that still amounts to only 211 members, or 32 per cent.
There is a similar lack of representation in UK policing, with Gov.uk reporting that, at the end of March 2019, “93.1 per cent of police officers were from the white ethnic group and 6.9 per cent were from other ethnic groups” (https://www.ethnicity-facts-figures.service.gov.uk/workforce-and-business/workforce-diversity/police-workforce/latest). The representation of women in policing, at 30.4 per cent, is slightly lower than in government: as of 31 March 2019, there were 37,428 female police officers across the 43 police forces in England and Wales.
While developers must strive to improve the accuracy of facial recognition in correctly identifying ethnic minority groups and women, too much focus has been placed on the inaccuracies of a still-developing technology, masking a greater issue.
The demographics that facial recognition is reportedly biased against also have the least representation in the two authorities most likely to use and control the technology. The conscious debate focuses on the failings of the technology but, beneath the surface, this is a conversation about unfavourable power dynamics. The obstacle to public trust in the technology is perhaps not that it may make a mistake, but that certain demographics lack trust in how the authorities would handle such a mistake.
As developers, we know that the systems we build today could vastly improve security and public safety in the future, if used ethically. When we look at the statistics, it is easier to understand why under-represented groups of people may be wary of facial recognition.
However, advanced and accurate facial recognition may help to pave the way towards ending discrimination against ethnic minorities and women. Victims of crime face being judged by a predominantly white, male system, in which a lack of evidence leaves room for biases and prejudices to skew judgment. This has been seen many times in court cases where violence against women has not been taken seriously enough.
Facial recognition can potentially identify perpetrators of crime, as well as provide strong evidence that a crime occurred and exactly how it played out, making it harder for people in authority to bring individual biases into a case.
In police use of automatic facial recognition in body-worn cameras, the scrutiny will go both ways: it may help to deter horrific attacks on police officers, and it may also encourage officers to conduct themselves correctly. Facial recognition trials have not benefited from being kept secret, as at London’s King’s Cross. The lack of transparency around trials appears to have increased fear of facial recognition, but while headlines tend to focus on mistrust of the technology itself, what is actually suffering is trust in the authorities – the technology is only the catalyst for discussion.
It is not necessarily a problem for facial recognition to misidentify someone, provided the person is treated fairly and respectfully and suffers no consequences once the mismatch is discovered.
A potential problem with a mismatch lies in watch list data. The data behind facial recognition watch lists could include information from arrest records, criminal records or any police involvement. An innocent person could be arrested on the strength of a mismatch, and an arrest record would then exist for them. They could then appear on police watch lists, leading to unjustified suspicion and perhaps further arrests, each of which would raise their apparent risk level.
This is why we need ethical guidelines and regulations on how data is stored and used. The above scenario would look far less ugly under a regulation requiring that all data connected to a misidentified person is deleted as soon as their innocence is ascertained.
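As a purely hypothetical sketch of what such a rule could look like in practice – the `Record` and `RecordStore` data model below is invented for illustration and does not represent any real police system – the deletion obligation can be expressed as a single purge operation triggered the moment a mismatch is confirmed:

```python
# Hypothetical sketch of the retention rule described above: once a match is
# confirmed to be a misidentification, every record it generated is purged,
# so the person cannot resurface on future watch lists.
from dataclasses import dataclass, field

@dataclass
class Record:
    person_id: str
    source: str   # e.g. "frt_match", "arrest_record", "watchlist_entry"

@dataclass
class RecordStore:
    records: list = field(default_factory=list)

    def purge_person(self, person_id):
        """Delete every record tied to a person cleared of a mismatch."""
        self.records = [r for r in self.records if r.person_id != person_id]

# A mismatch generates linked records...
store = RecordStore()
store.records += [Record("p42", "frt_match"), Record("p42", "arrest_record")]
# ...all of which are deleted as soon as innocence is ascertained.
store.purge_person("p42")
assert not any(r.person_id == "p42" for r in store.records)
```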
Much of the discussion around facial recognition has centred on privacy and the workings of the technology itself. We should really be talking about the existing power imbalances in society, how we can ensure they are not perpetuated or worsened by advances in technology, and what ethical guidelines are needed to protect data and govern its use.
Adrian Timberlake is chief technical director at security and surveillance specialists Seven Technologies Group and an expert in security and surveillance solutions for the police.