AI is creeping into many different areas of our lives. Algorithms recommend Netflix shows, control which adverts we see online, decide who should be approved for loans, and even determine when the dishes in our dishwasher are dry. Technology has been praised for automating mundane tasks, freeing up our time and providing convenient services. However, there is a fundamental danger: technology programmed by humans can reproduce the biased behaviour and thought patterns those humans have absorbed through decades of social conditioning and norms.

In 2017, a Black man named T.J. Fitzpatrick attempted to wash his hands in a Marriott hotel bathroom in Atlanta, Georgia. The soap machine would not dispense soap into his hands. Fitzpatrick assumed the machine was broken until his White friend used the same dispenser, which expelled soap into his hand. This was not a coincidence. They tried the machine repeatedly until they realised that the dispenser discriminated on the basis of skin tone. The dispenser used near-infrared technology, which only works if the light emitted by the sensor is reflected back. Hands with darker skin generally reflect less of that light, so the dispenser did not work for most Black people. The soap dispenser was almost certainly not designed to be discriminatory, but the lack of testing across a range of skin tones inadvertently created a biased product.[1]
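The dispenser’s firmware has never been published, but the failure is easy to illustrate: if the sensor only triggers when enough infrared light is reflected back, and that cut-off was calibrated on lighter skin, darker hands may simply never cross it. The sketch below uses invented values purely for illustration:

```python
# Hypothetical sketch of a dispenser's trigger logic -- the threshold is an assumption.
REFLECTANCE_THRESHOLD = 0.4  # sensor only fires above this fraction of emitted IR light

def should_dispense(reflected_fraction: float) -> bool:
    """Dispense soap only if enough infrared light bounces back to the sensor."""
    return reflected_fraction >= REFLECTANCE_THRESHOLD

# Darker skin tends to reflect less near-infrared light, so it falls below the cut-off.
print(should_dispense(0.7))  # lighter skin: True
print(should_dispense(0.3))  # darker skin: False -- no soap
```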

Not being able to wash your hands is an inconvenience, but it is unlikely to have a long-term impact on someone’s life – at least prior to the coronavirus pandemic. There are, however, systems utilising AI in which bias is prevalent and likely to produce dangerous outcomes for the civil liberties, and even the lives, of the people they discriminate against.

AI and the Police

In the US, automated systems are being used to predict where crimes are most likely to occur. Police departments then use these predictions when deciding where to send their officers, and how often.

PredPol is a program created by UCLA scientists working with the Los Angeles Police Department, with the aim of using scientific analysis of crime data to identify patterns of criminal behaviour. PredPol is now used by more than 60 police departments in the US to identify neighbourhoods at high risk of serious crime.

Automated programs have been used increasingly over the past few years for two main reasons. Firstly, budget cuts have led to predictive tools being used as a replacement for police officers, in an attempt to ‘do more with less’. Secondly, there is a widespread belief that algorithms are more objective than humans; indeed, they were initially introduced to make decision-making in the criminal justice system fairer.[2]

However, researchers have questioned whether PredPol is biased. Historical data drawn from police practices can create a feedback loop through which algorithms become biased towards certain neighbourhoods.[3] PredPol says that it does not base its algorithm on arrest data, which carries a higher risk of bias because it reflects human decisions to arrest rather than whether a conviction was secured.[4] This appears to be a sensible decision: according to US Department of Justice figures, you are more than twice as likely to be arrested if you are Black than if you are White.[5]

However, critics argue that it is next to impossible to remove bias from predictive policing. If data derives from over-enforcement in African-American or Hispanic communities, or from a lack of enforcement in wealthier areas, these biases become ingrained in the algorithms.[6] This is likely to lead to more recorded crime, and more convictions, in the areas where police spend the most time looking for crime.
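The feedback loop the researchers describe can be made concrete with a toy simulation. The numbers below are invented and the model is deliberately crude, but it captures the dynamic: patrols follow recorded crime, crime is only recorded where patrols go, and a small initial disparity compounds while the under-policed area’s record never catches up.

```python
import random

random.seed(0)

# Toy simulation of the feedback loop described above; all numbers are invented.
# Both neighbourhoods have the SAME underlying crime rate, but neighbourhood A
# starts with more recorded incidents because of historical over-enforcement.
true_crime_rate = {"A": 0.1, "B": 0.1}   # identical underlying behaviour
recorded = {"A": 20, "B": 10}            # biased historical record

for day in range(365):
    # The patrol goes wherever the recorded data says crime is highest.
    patrolled = max(recorded, key=recorded.get)
    # Crime is only *recorded* where officers are present to observe it.
    if random.random() < true_crime_rate[patrolled]:
        recorded[patrolled] += 1

print(recorded)  # A's record keeps growing; B's never changes despite identical behaviour
```

After a year of simulated patrols, neighbourhood A looks far more ‘criminal’ in the data, even though both areas behaved identically.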

AI and the Courts

Researchers are attempting to establish how AI can be utilised in court rulings. In the UK, scientists at University College London have devised a tool that weighs up legal evidence, and moral questions of right and wrong, to predict the result in hundreds of real-life cases. The AI “judge” reached the same verdicts as judges at the European Court of Human Rights in almost four in five cases involving torture, degrading treatment and privacy. Although AI is unlikely to replace judges in the near future, such a tool could be created to assist judges in their rulings.[7]

The National Bureau of Economic Research in the US has developed software to measure the likelihood of defendants fleeing or committing new crimes whilst awaiting trial, which is used to decide whether they should remain in detention in the meantime. The algorithm assigns a risk score based on information such as where and when the person was detained, the rap sheet of the accused, and their age. The tool has been fed data from hundreds of thousands of New York criminal cases and is reported to be more effective at assessing risk than judges.[8]
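The bureau’s actual model is not public, so the sketch below is purely illustrative: a hypothetical linear score combining the kinds of inputs described above. It shows how, if a defendant’s rap sheet already reflects over-policing rather than behaviour, that bias passes straight into the number a judge may rely on.

```python
# A hypothetical linear risk score -- the real tool's features and weights are not
# public, so these inputs and numbers are assumptions chosen purely for illustration.
def pretrial_risk_score(prior_arrests: int, age: int, arrested_at_night: bool) -> float:
    score = 0.0
    score += 0.5 * prior_arrests                # a longer rap sheet raises the score
    score += 0.3 if arrested_at_night else 0.0  # when/where of the arrest also counts
    score -= 0.05 * max(age - 18, 0)            # older defendants score slightly lower
    return score

# If prior arrests reflect over-policing rather than behaviour, that bias flows
# straight into the number used to decide whether someone stays in jail before trial.
print(pretrial_risk_score(prior_arrests=4, age=22, arrested_at_night=True))  # 2.1
```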

If the data being used is inherently biased, programmes designed to increase efficiency, accuracy and consistency in court rulings are likely to further perpetuate biased outcomes and deny people their liberty.

AI and the Military

Professor Noel Sharkey’s work on autonomous weapons, also known as “killer robots”, highlights that military weapons designed to select and kill a target using facial recognition technology are incredibly dangerous. Research has shown that the darker a person’s skin, the harder it is for these machines to recognise their face properly.[9]

“In the laboratory you get a 98% recognition rate for white males without beards. It’s not very good with women and it’s even worse with darker-skinned people. The laboratory results have shown it comes to the point where the machine cannot even recognise that you have a face.”[10]

Professor Sharkey argues that algorithms are so “infected with biases” that their decision-making process could not be fair or trusted. He is advocating for a moratorium on all “life-changing decision-making algorithms in Britain because they are not working and have been shown to be biased across the board.”[11] The risk of innocent people being killed by “killer robots” because of a flawed facial recognition algorithm is enormous.

What can be done?

Professor Sharkey believes that AI decision-making machines should undergo the same level of scrutiny and testing that pharmaceutical drugs face before they are released onto the market. His suggestion is that the “systems should be tested on millions of people, or at least hundreds of thousands of people, in order to reach a point that shows no major inbuilt bias.”[12]

If humans are testing these machines for bias, and humans are themselves biased, how can we guarantee that a testing regime will work? If you test pharmaceuticals on humans or animals, you know what you are looking for: an adverse reaction, some irregularity – it is a more objective test. But when testing programmes for bias, how do we know what to look for, and how can we be sure that humans will recognise these biases – especially if the same biases exist in the human brain?

In some cases, bias will be more obvious, such as if the “killer robots” repeatedly fail to recognise darker faces during testing. However, it will be more difficult to decide objectively whether a system like PredPol is being biased when directing police officers to certain neighbourhoods if the data being fed into the algorithm is inherently biased.
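One concrete answer to “what to look for” is a comparison of outcome rates across demographic groups. The sketch below uses invented data and the so-called four-fifths rule, a benchmark borrowed from US employment-discrimination guidance rather than from any of the systems discussed here, purely to show what a quantitative bias test might look like:

```python
from collections import defaultdict

# A minimal bias audit on invented decision data: compare favourable-outcome rates
# across two groups and apply the "four-fifths rule" benchmark (flag a disparity if
# one group's rate falls below 80% of the other's). This is one possible test, not
# the test any of the systems discussed above actually use.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

totals, favourable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome

rates = {g: favourable[g] / totals[g] for g in totals}
print(rates)                                    # {'A': 0.75, 'B': 0.25}
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")   # 0.33 -- well below the 0.8 benchmark
```

A check like this can catch a crude disparity in outcomes, but, as the PredPol example shows, it says nothing about whether the data feeding the system was collected fairly in the first place.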

The Algorithmic Accountability Act of 2019 in the US would require companies to assess their automated decision systems for risks of “inaccurate, unfair, biased, or discriminatory decisions.” It is currently at the bill stage, but if passed it would apply to the AI systems that platforms increasingly deploy to detect and counter hate speech, terrorist material and disinformation campaigns, and would require those platforms to conduct fairness assessments of these systems and fix any issues of bias those assessments uncover.[13]

Similarly, the Innovative and Ethical Data Use Act of 2018, a draft bill proposed by Intel, asks companies to “determine, through objective means, that such processing, and the results of such processing, are reasonably free from bias and error, and that [
] data quality obligations [
] are met”.[14] In practice, it is unclear what “reasonably free from bias” means and how it will be measured.

Neither of these bills has passed Congress on its own, but similar provisions might become part of a new US privacy law currently under consideration.[15] It remains to be seen whether legislation in the US or UK can effectively combat the issues raised in this article.


Camilla studied LLB Law at the University of Kent and graduated in 2012. Since graduating, she has spent almost eight years resolving financial disputes at the Financial Ombudsman Service. Camilla is also general manager of thestudentlawyer.com, and is currently applying for training contracts as she plans to train as a commercial solicitor.


[1]Will Douglas Heaven, ‘Predictive policing algorithms are racist. They need to be dismantled’ (MIT Technology Review, 17 July 2020) <https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/> accessed 18 August 2020

[2]ibid

[3]Danielle Ensign, Sorelle Friedler, Scott Neville, Carlos Scheidegger, Suresh Venkatasubramanian, ‘Runaway Feedback Loops in Predictive Policing’ (Proceedings of Machine Learning Research, 22 December 2017) <https://arxiv.org/pdf/1706.09847.pdf> accessed 18 August 2020

[4]Karen Hao, ‘AI is sending people to jail – and getting it wrong’ (MIT Technology Review, 21 January 2019) <https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/> accessed 18 August 2020

[5]OJJDP Statistical Briefing Book, ‘Estimated number of arrests by offense and race, 2018’ (Office of Juvenile Justice and Delinquency Prevention, 31 October 2019) <https://www.ojjdp.gov/ojstatbb/crime/ucr.asp?table_in=2> accessed 18 August 2020

[6]Ethan Baron, ‘Predictive Policing Used AI Tested by Bay Area Cops’ (Government Technology, 11 March 2019) <https://www.govtech.com/public-safety/Predictive-Policing-Using-AI-Tested-by-Bay-Area-Cops.html> accessed 18 August 2020

[7]OdĂ©lio Porto Junior, ‘How can artificial intelligence affect courts?’ (Institute for Research on Internet and Society, 12 March 2017) <https://irisbh.com.br/en/how-can-artificial-intelligence-affect-courts/> accessed 18 August 2020

[8]ibid

[9]Henry McDonald, ‘AI expert calls for end of UK use of ‘racially biased’ algorithms’ (The Guardian, 12 December 2019) <https://www.theguardian.com/technology/2019/dec/12/ai-end-uk-use-racially-biased-algorithms-noel-sharkey> accessed 18 August 2020

[10]ibid

[11]ibid

[12]ibid

[13]Mark MacCarthy, ‘An Examination of the Algorithmic Accountability Act of 2019’ (Institute for Information Law, 24 October 2019) <https://www.ivir.nl/publicaties/download/Algorithmic_Accountability_Oct_2019.pdf> accessed 18 August 2020

[14]Innovative and Ethical Data Use Act of 2018, USA Senate, 4(a)

[15]Mark MacCarthy, ‘An Examination of the Algorithmic Accountability Act of 2019’ (Institute for Information Law, 24 October 2019) <https://www.ivir.nl/publicaties/download/Algorithmic_Accountability_Oct_2019.pdf> accessed 18 August 2020