Overview
Technology has a significant impact on the enjoyment of human rights. The algorithms employed by governments and companies to make judgments regarding the justice system, employment, social assistance, and credit access are rife with racial and gender bias.[1] States are systematically surveilling and intimidating human rights advocates, journalists, judges, and lawyers with new techniques; the Israeli NSO Group's Pegasus spyware is a prominent example.[2] Leveraging technology, government agencies are offering services in ways that are purportedly intended to increase efficiency but instead serve to deepen and entrench economic inequality. The power of advanced technologies such as artificial intelligence (AI), spyware, and the digital state necessitates a fundamental transformation in how we think about technology and personal activity. This work begins by defining the terms artificial intelligence and algorithm, together with their applications in areas such as human resource management, transportation, and supermarket shopping, as well as the problems associated with data collection, biases in hiring processes, and score grading. It concludes that regulation in this area is crucial, because tools designed to be helpful are becoming discriminatory and are turning into weapons of control. It also argues that, given the complexity involved, a single regulation from the government would not resolve the issue; rather, co-regulation through a government agency would.
Definition of AI and Algorithm
Software applications[1] in the digital age are essential components of our society and economy. Their algorithmic foundations[2] shape our reconstruction of reality as prioritisation machines and oracles of knowledge; by providing personalised offerings, they determine what we buy, read and learn.[3] There is no precise definition of the terms ‘AI’ and ‘algorithm’ in law or amongst experts in the field, owing to the complexity of classification (narrow versus general AI) and the variety of functions and uses of these systems. However, in 2021 the European Commission moved to regulate AI by proposing a new legal framework, the Artificial Intelligence Act; a definition is therefore necessary to inform readers and legislators about the scope of the law and what other areas need to be strengthened.[4] The Commission provided the first definition of AI in its Communication on AI for Europe,[5] which was subject to scrutiny. The High-Level Expert Group revised it further:[6] “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.”[7] Several points follow from this definition. AI can be either hardware or software, meaning that a physical device like Alexa and a coding programme both qualify as AI. The definition also suggests that human design is a requirement for AI; the problem with this is that an AI can make another one without human interference (these are known as autonomous AI). Finally, AI systems need to be able to perceive their surroundings and to process both structured and unstructured data.
Bamberger[8] says AI systems can be divided into three categories: Artificial Narrow Intelligence (ANI), which has a limited set of capabilities; Artificial General Intelligence (AGI), which is on par with human capabilities; and Artificial Superintelligence (ASI), which is superior to humans.[9] Currently we operate at the level of ANI, which runs on machine learning, as discussed below. Such systems depend on data, from which knowledge is built. Intelligence is based on knowledge: it helps us communicate, make judgments, recognise objects, comprehend circumstances, and plan strategies.[10] We store millions of pieces of information in our memory that we utilise every day to make sense of the environment and our interactions with it.[11]
We use AI and algorithms daily;[12] your coffee machine or refrigerator, for example, embodies a simple algorithm, because it works on the principle of ‘if this is turned on, then this should happen.’[13] However, when we feed such software with a dataset[14] to help automate decisions, things get complicated. This has proven effective while also causing some harm to society: the problem with this type of upgrading is that it might lead to a variety of unethical outcomes such as data breaches and discrimination.[15]
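To make the ‘if this, then this should happen’ principle concrete, the following minimal Python sketch shows the kind of fixed, human-written rule such appliances run on (the appliance, function name, and threshold values here are invented for illustration):

def thermostat_rule(current_temp_c: float, target_temp_c: float) -> str:
    # A fixed, human-written rule: no data, no learning, fully predictable.
    if current_temp_c < target_temp_c:
        return "heater on"
    return "heater off"

print(thermostat_rule(18.0, 21.0))  # -> heater on
print(thermostat_rule(23.0, 21.0))  # -> heater off

Unlike the data-fed systems discussed next, a rule of this kind behaves identically every time it runs, which is precisely why it raises few regulatory concerns.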
I. Data, Algorithm, Machine Learning versus AI
For this article and any upcoming policy discussions, it is essential to define the two main AI components: data and the algorithm. Doing so will help users of these systems better understand how AI is developed and used, expose any flaws in the system, and demonstrate that AI does not simply appear out of thin air. AI can also be built into hardware.[16] Machine learning,[17] a branch of AI, teaches computers to infer patterns from a collection of data in order to determine the steps required to attain a goal, which then leads to the output generated by the AI. The input dataset is fed into the machine-learning system and processed by the computer to give an output, which we later see as the result on the screen. Although AI-based products can work autonomously by perceiving their environment rather than following a set of pre-determined instructions, their behaviour is mostly limited and constrained by their designers.[18] The goals that an AI system should optimise are chosen and programmed by humans. For example, in autonomous driving,[19] the algorithm analyses real-time data from “sensors scanning”[20] the complete environment (pedestrians, the road, signage, other vehicles, etc.). To establish the car's direction, acceleration, and speed so as to arrive at a specific location, the algorithm adjusts to the road scenario and outside conditions, including other drivers' behaviour, to deliver the most comfortable and safe driving possible.[21]
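As a rough illustration of this data-in, output-out pipeline, the sketch below trains a small classifier on labelled examples and then produces an output for a new input (it assumes the scikit-learn library is available, and the toy dataset is entirely invented):

from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [hours of daylight, temperature in °C],
# with each example labelled by a human.
X = [[14, 25], [13, 22], [9, 5], [8, 2]]
y = ["summer", "summer", "winter", "winter"]

model = DecisionTreeClassifier().fit(X, y)  # the machine learns from the data
print(model.predict([[10, 7]]))             # the output we later see on screen

The designer still chooses the features, the labels, and the goal; the machine only infers the rule that connects them, which is the sense in which its behaviour remains constrained by its designers.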
Algorithms, on the other hand, are sets of automated instructions used in software to perform a given task. Algorithms, according to Gillespie, are “encoded techniques for solving a problem by changing input data into the required output.”[22] Algorithms can be as simple as "if-then"[23] conditions or as complicated as a mathematical series. Whereas AI relies on machine learning, which is software, an algorithm is the set of programming instructions from which machine learning is built.[24] There is, however, a distinction between machine learning and AI: the former can only work with structured data, whilst the latter can work with both structured and unstructured data.[25]
Furthermore, you might wonder how AI works so well. The answer is that it employs a building block known as the algorithm. As previously said, AI is an algorithm that follows a set of instructions. The system is fed a lot of data for this purpose, and, just like humans, the more data the AI is given, the more effective it becomes. The ability to learn is one of AI's unique characteristics: the more labelled images of cows and zebras an image-recognition algorithm is fed, the more likely it is to recognise a cow or a zebra. Constant learning, however, has some drawbacks. Although accuracy improves over time, the same inputs that generated one result yesterday may produce a different result tomorrow, because the algorithm has been changed by the data it has received in the meantime.[26]
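The following sketch illustrates that drift in simplified form (again assuming scikit-learn, with invented data): a classifier that learns incrementally can answer the very same query differently once it has been updated with new data:

import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
query = np.array([[0.5]])

# Day 1: inputs near 0.5 are labelled 1 in the training data.
model.partial_fit([[0.4], [0.6], [-1.0], [-2.0]], [1, 1, 0, 0], classes=[0, 1])
print("day 1:", model.predict(query))

# Day 2: newly arrived data labels similar inputs 0; the update can shift
# the decision boundary, so the same query may now be answered differently.
for _ in range(5):
    model.partial_fit([[0.4], [0.5], [0.6]], [0, 0, 0])
print("day 2:", model.predict(query))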
Likewise, certain functions that were once performed only by humans can now be performed by AI.[27] The result is that citizens and institutions will become more vulnerable to AI-assisted acts and decisions, which may be difficult to comprehend and to contest effectively when necessary.[28]
II. Problems Arising from the Use of AI & Algorithms
The big question concerning the use of AI and algorithms is: ‘how do we regulate the development and use of these systems?’ Algorithms that regulate information on social media sites risk stifling free expression and swaying public discourse, and biometric mass surveillance (such as facial recognition and fingerprinting) risks violating our right to privacy while also discouraging democratic engagement.[29] Algorithms rely on large amounts of personal data, which is regularly collected, processed, and stored in ways that violate our data protection rights. How do we regulate the use of the information gathered by these machines? The AI system is trained on the information it gathers through these media; who should be blamed if this information leaks? Although artificial intelligence has the potential to benefit society by making commodities and systems safer, it can also cause harm. For instance, during the recruitment process, algorithmic bias has the potential to perpetuate existing inequalities in our societies, resulting in discrimination against and alienation of minorities.[30]
Hiring algorithms, for example, are more likely to favour men over women and white people over black people, since the data they are provided indicates that ‘successful candidates’ have often been white men.[31] Moreover, such concerning patterns are often hidden. Constant camera surveillance is used in airports, business environments, and homes, where large amounts of biometric data are collected and stored in different places; how and by whom this data is used is rarely disclosed. Content moderation has likewise required the application of legislation designed to protect fundamental rights (including personal data and privacy protection and non-discrimination) and to address safety and liability, which are the key hazards associated with the use of AI. Yet such regulation risks undermining freedom of expression, as the system may be used to remove content that it deems not to be ‘right’.
These harms can be both physical (loss of life and property damage) and intangible (loss of privacy, constraints on the right to freedom of expression, affronts to human dignity, and workplace discrimination). Moreover, the harms and issues arising from the use of these systems are often the result of underlying issues in the dataset fed to them.[32] Allegedly impartial or unbiased AI systems simply repeat flaws in the data they are trained on, or transfer particular ways of thinking into code. Cathy O'Neil has described such systems as ‘weapons of math destruction’ that increase inequality and threaten democracy.[33]
For instance, Amazon[34] had to abandon its recruiting algorithm because it was biased against women: the information fed to the algorithm suggested that men were more successful than women working at Amazon. AI systems that produce skewed findings have made the news repeatedly. Apple's credit card algorithm, for example, has been accused of discriminating against women, prompting a probe by the New York Department of Financial Services.[35] In most cases, the issue arises from the data used to train the AI; if the data is skewed, the AI will pick up on it and may even exacerbate it. AI systems also have a significant impact on individuals who identify as disabled or have been diagnosed with disabilities.[36]
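A minimal, synthetic sketch of this failure mode (assuming scikit-learn; every number below is invented and deliberately extreme) shows how a model trained on one-sided historical outcomes reproduces them:

from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, is_male]; label: 1 = hired in the past.
# The candidates are equally experienced, but only the men were hired.
X = [[5, 1], [3, 1], [4, 1], [5, 0], [4, 0], [3, 0]]
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X, y)
print(model.predict([[4, 1], [4, 0]]))  # same experience, different verdict
print(model.coef_)  # the weight on is_male carries the historical bias

On this data the model's verdict tracks the protected attribute rather than the qualification, which is what the Amazon and Apple episodes above describe at scale.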
Furthermore, AI improves the ability to track and analyse people's daily habits. There is a concern, for example, that AI might be used by state authorities or other entities for mass unlawful surveillance, or by employers to observe how their employees act by processing data without informing the data subjects, in violation of EU data protection and other standards. By analysing massive amounts of data and detecting relationships between them, AI may also be used to retrace and de-anonymise data about people, posing new personal data protection hazards even for datasets that do not themselves contain personal data.[37] The developers of these systems can learn the behavioural, psychological, and physical tendencies of the individuals living under their control, and can use this knowledge as a weapon to make decisions and subtly affect those people's actions.
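The linkage mechanism behind such de-anonymisation can be sketched in a few lines of plain Python (all records below are invented): a dataset stripped of names is rejoined to a public register through shared quasi-identifiers such as postcode and birth year:

anonymised_health = [
    {"postcode": "1010", "birth_year": 1980, "diagnosis": "asthma"},
    {"postcode": "2020", "birth_year": 1965, "diagnosis": "diabetes"},
]
public_register = [
    {"name": "A. Jansen", "postcode": "1010", "birth_year": 1980},
    {"name": "B. Smit", "postcode": "2020", "birth_year": 1965},
]

# Joining on the shared attributes re-attaches names to 'anonymous' records.
for record in anonymised_health:
    for person in public_register:
        if (person["postcode"], person["birth_year"]) == (
                record["postcode"], record["birth_year"]):
            print(person["name"], "->", record["diagnosis"])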
As AI systems are increasingly used to assess whether someone is sufficiently qualified, who is suitable for social care, how spaces are built, and who is eligible for citizenship benefits, disabled persons will be disproportionately affected by how these algorithms interpret normalcy and difference.[38] AI-driven biometric scanner systems have also been shown to be biased in a variety of ways, particularly towards women and people with darker skin tones. Beyond general concerns about accuracy, Google's early facial recognition, incorporated within the Google Photos app, misidentified two Black people as gorillas. The error caused Google to disable the gorilla and chimpanzee classifiers in the system, and this hasty fix was still in place nearly three years later.[39]
III. Conclusion
In conclusion, these concerns arise from the use and application of AI and algorithms in our daily lives; we are now at a stage where everything we do touches on them. Although the applications differ across employment, privacy, content moderation, and surveillance, as discussed above, the challenges are distinct and each requires its own regulatory model. Their effects are wide-ranging and, because they occur on the internet, cross territorial boundaries. It is necessary to regulate these systems sooner rather than later. We may consider the issues discussed in this article minor at this time, but the truth is that, with the use of AI and algorithms, decisions touching on someone's interests are made every day, and that is the basis of the problem we face.
[1] The term "software application" is herein understood as a code-based overall system which has an external relationship to users.
[2] Ralf Hartmut Güting and Stefan Dieker, Datenstrukturen und Algorithmen (2018) p 33.
[3] Cary Coglianese and David Lehr, ‘Regulating by Robot: Administrative Decision Making in the Machine-Learning Era’ (2017) 105 Georgetown Law Journal 1147–1223 <https://georgetownlawjournal.org/articles/232/regulating-by-robot/pdf>.
[4] COM/2021/206 final ‘Proposal for A Regulation of The European Parliament And Of The Council Laying Down Harmonised Rules On Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts’ (Hereinafter called the AIA)
[5] COM/2018/237 final ‘The European Economic and Social Committee and The Committee of The Regions Artificial Intelligence for Europe.’
[6] High-Level Expert Group on Artificial Intelligence, ‘Ethics Guidelines for Trustworthy AI’ (European Commission, 2019) p 8.
[7] ibid.
[8] Kenneth A Bamberger, ‘Technologies of Compliance: Risk and Regulation in a Digital Age’ (2010) 88(4) Texas Law Review 669, 690–93, 701–2.
[9] Eban Escott, ‘What are the 3 types of AI? A guide to narrow, general, and super artificial intelligence’ (Codebots, 24 October 2017) <https://codebots.com/artificial-intelligence/the-3-types-of-ai-is-the-third-even-possible> (accessed 9th May 2022).
[10] C Reed, E Kennedy and SN Silva, ‘Responsibility, Autonomy and Accountability: Legal Liability for Machine Learning’ (Queen Mary University of London, School of Law Legal Studies Research Paper No 243/2016) <https://ssrn.com/abstract=2853462>.
[11] Kenneth A Bamberger, ‘Technologies of Compliance: Risk and Regulation in a Digital Age’ (2010) 88(4) Texas Law Review 669, 690–93, 701–2.
[12] Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18–84.
[13] ‘Artificial Intelligence: Rise of the Machines’ The Economist (9 May 2015) <http://www.economist.com/news/briefing/21650526-artificial-intelligence-scares-people-excessively-so-rise-machines> (accessed 4th May 2022).
[14] Thomas C Redman, ‘If Your Data Is Bad, Your Machine Learning Tools Are Useless’, Harvard Business Review (online, 2 April 2018) <https://hbr.org/2018/04/if-your-data-is-bad-your-machine-learning-tools-are-useless> (accessed 4th May 2022).
[15] Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 1–12.
[16] S Levin, ‘Face-reading AI will be able to detect your politics and IQ, professor says’ (The Guardian, 12th September 2017) <https://www.theguardian.com/technology/2017/sep/12/artificial-intelligence-face-recognition-michal-kosinski> (accessed 4th May 2022).
[17] J Angwin and J Larson, ‘Machine Bias’ (ProPublica, 23rd May 2016) <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing> (accessed 4th May 2022).
[18] M U Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ (2017) 29(2) Harvard Journal of Law & Technology.
[19] ‘Why Self-Driving Cars Must Be Programmed to Kill’ (MIT Technology Review, 22 October 2015) <https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/> (accessed 4th May 2022).
[20] ibid.
[21] Joshua Ellul, Stephen McCarthy, Trevor Sammut, Juanita Brockdorff, et al, ‘A Pragmatic Approach to Regulating Artificial Intelligence: A Technology Regulator’s Perspective’ (2021)
[22] Tarleton Gillespie, ‘The Relevance of Algorithms’ in Tarleton Gillespie, Pablo Boczkowski and Kirsten Foot (eds), Media Technologies: Essays on Communication, Materiality and Society (MIT Press, Cambridge MA 2014).
[23] Jiri Panyr, ‘Information Retrieval Techniques in Rule-based Expert Systems’ in Hans-Hermann Bock and Peter Ihm (eds), Classification, Data Analysis, and Knowledge Organization (Springer, 1991) 196.
[24] Dirk Helbing, ‘Societal, Economic, Ethical and Legal Challenges of the Digital Revolution: From Big Data to Deep Learning, Artificial Intelligence, and Manipulative Technologies’ in Dirk Helbing (ed), Towards Digital Enlightenment — Essays on the Dark and Light Sides of the Digital Revolution (Springer, 2018) 47.
[25] Shagufta Praveen and Umesh Chandra, ‘Influence of Structured, Semi-Structured, Unstructured Data on Various Data Models’ (2017) 8(12) International Journal of Scientific & Engineering Research 8.
[26] François Candelon, Rodolphe Charme di Carlo, Midas De Bondt and Theodoros Evgeniou, ‘AI Regulation Is Coming’ Harvard Business Review (October 2021).
[27] Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31 Harvard Journal of Law & Technology 841.
[28] COM(2020) 65 ‘White Paper on Artificial Intelligence - A European approach to excellence and trust’
[29] Nick Bostrom and Eliezer Yudkowsky, ‘The Ethics of Artificial Intelligence’ in The Cambridge Handbook of Artificial Intelligence (Cambridge University Press 2014) 316–334.
[30] Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76–99.
[31] Jascha Galaski ‘AI Regulation: Present Situation and Future Possibilities’ (Liberties, September 08, 2021) <https://www.liberties.eu/en/stories/ai-regulation/43740> (accessed 4th May 2022).
[32] ibid.
[33] Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York, 2016); Cathy O’Neil, ‘How Algorithms Rule Our Working Lives’ The Guardian (1 September 2016) <https://www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives> (accessed 4th May 2022).
[34] Isobel Asher Hamilton, ‘Why It’s Totally Unsurprising That Amazon’s Recruitment AI Was Biased Against Women’ (Insider, 13th October 2018) <https://www.businessinsider.com/amazon-ai-biased-against-women-no-surprise-sandra-wachter-2018-10?r=US&IR=T> (accessed 29th June 2022).
[35] Can Yavuz, ‘Machine Bias: Artificial Intelligence and Discrimination’ (2019) DOI 10.13140/RG.2.2.10591.61607.
[36] SM West, M Whittaker and K Crawford, ‘Discriminating Systems: Gender, Race and Power in AI’ (AI Now Institute, 2019) <https://ainowinstitute.org/discriminatingsystems.html> (accessed 4th May 2022).
[37] Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76–99.
[38] M Whittaker, M Alper, O College, L Kaziunas and MR Morris, ‘Disability, Bias, and AI’ (AI Now Institute, 2019) <https://ainowinstitute.org/disabilitybiasai-2019.pdf> (accessed 4th May 2022).
[39] James Vincent, ‘Google “Fixed” Its Racist Algorithm by Removing Gorillas from Its Image-Labelling Tech’ (The Verge, 12th January 2018) <https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai> (accessed 4th May 2022).
[1] Can Yavuz, ‘Machine Bias: Artificial Intelligence and Discrimination’ (2019) DOI 10.13140/RG.2.2.10591.61607.
[2] Stephanie Kirchgaessner, ‘FBI Confirms It Obtained NSO’s Pegasus Spyware’ (The Guardian, 2nd February 2022) <https://www.theguardian.com/news/2022/feb/02/fbi-confirms-it-obtained-nsos-pegasus-spyware> (accessed 4th May 2022).