AI stands for ‘Artificial Intelligence’ and is a current buzzword in all sectors, from manufacturing to business to the legal sector. AI can be viewed as one of the most important modern developments for the future, as reflected in the rapid increase in AI-related spending; the IDC predicts that worldwide spending on AI will be in the region of $110 billion by 2024, a 119.6 per cent increase from $50.1 billion in 2020.[1] It is for this reason that it is so important to have a firm basic understanding of this modern technology, which may permeate everyday working and personal life.

John McCarthy describes AI as “the science and engineering of making intelligent machines”.[2] So the question is: what is an intelligent machine? One answer was delivered by Allen Newell, who believed that “intelligent machines”, in the context of AI, meant machines capable of operating at the “knowledge level”.[3] To understand the way in which AI technology aims to achieve this requires consideration of AI terminology and its processes.

What is encompassed under the umbrella term ‘AI’?

The most common types of technology that fit under the term AI are based on machine learning and/or deep learning (which is a subset of machine learning). These two technologies are focused on the ability of a machine to replicate a human-like development cycle: plan and design; implement; test; analyse and learn; redevelop; and then repeat the cycle. Repeating this cycle aims to improve efficiency and accuracy, and can be seen to incorporate what McCarthy and Newell mean by “intelligent machines” showing “knowledge-level” ability in real time.
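The development cycle described above can be made concrete with a small illustrative sketch (not taken from the article, and deliberately simplified): a toy machine-learning loop in which a model is tested against data, the error is analysed, and the model is redeveloped, over and over. The data, weight, and learning rate below are all invented for illustration.

```python
# Toy illustration of the machine-learning cycle: implement, test,
# analyse and learn, redevelop, repeat.
# We fit a single weight w so that prediction = w * x approximates y = 2 * x.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, expected output) pairs

def mean_squared_error(w):
    """'Test' step: measure how far the current model is from the data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0              # initial 'plan and design': start with an untrained weight
learning_rate = 0.05

for cycle in range(100):  # repeat the development cycle
    # 'Analyse and learn': compute how the error changes as w changes
    gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # 'Redevelop': adjust the model to reduce the error
    w -= learning_rate * gradient

print(round(w, 2))                     # converges towards 2.0
print(round(mean_squared_error(w), 4)) # error shrinks towards 0.0
```

Each pass through the loop is one turn of the cycle: the machine tests itself against the data, learns from the error, and adjusts. Real machine-learning systems do this with millions of weights, but the shape of the process is the same.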

AI technology is also sometimes described as weak or strong. When described as weak AI (also called Narrow AI or Artificial Narrow Intelligence (‘ANI’)), the developer is indicating that the AI technology is capable of performing only specific tasks.[4] However, when described as strong AI (also referred to as Artificial General Intelligence (‘AGI’)), the machine more accurately represents human-level intelligence by being able to solve numerous problems and, in some cases, autonomously choosing which problems to solve without human intervention.[5] Beyond this, there is the conceptual development of AI capable of showing intelligence superior to that of humans, known as Artificial Super Intelligence (‘ASI’), which would also fall under the umbrella term of ‘strong AI’.[6] Now that we have a fundamental grasp of AI concepts, it is important for us as lawyers to understand the challenges and opportunities this technology presents.

What are the potential challenges of AI?

It is often found that the legal framework develops much more slowly than technological advancement and/or sector change. This is because an assessment of the weaknesses and potential challenges of the advancement is needed before effective regulation and control can be produced. AI is no different.

This section will briefly highlight some of the challenges AI faces (a non-exhaustive list). For further reading, Rowena Rodrigues, in her article ‘Legal and human rights issues of AI: Gaps, challenges and vulnerabilities’, has produced a literature review of the current situation that provides an excellent starting point to explore these potential challenges in more detail.[7]

From Rodrigues’s article it becomes clear that some of the key challenges include (but are not limited to):

  • Transparency, both in terms of the algorithm and storage of private data (including cybersecurity) – who has access? What happens and who is liable in a data breach?
  • Legal personality – does the AI system itself have a legal personality?
  • Intellectual property – who owns the AI-created output(s)? Who owns the data that the AI programme uses to learn from?
  • Unfairness and bias – what if the data being used has a built-in bias and the programme perpetuates this?
  • Liability – considering the substantial amount of input from various parties and the system itself, who is accountable when things go wrong?[8]

Despite the potential challenges that AI poses, there are also many opportunities for the legal sector. The following is a non-exhaustive list of opportunities derived from these potential threats[9]:

  • The legal framework is typically reactive to changes in industries and sectors. Therefore, the legal sector, with its clients, can have an active role in shaping future legislation by utilising their experience and position as users and advisers of AI technology.
  • Engaging with the ethical debate across regions – it is unlikely that one code will satisfy all regions. Therefore, firms with geographical scope and in-depth local understanding are in a prime position to advise on policy development.
  • Advising, drafting, and negotiating on ‘Smart Contracts’ – including advising on contractual and tortious liability.
  • Advising on consumer protection regulation and requirements.
  • Advising on IP applications & licences with respect to AI technology.

The legal sector is in a prime position to play an active role in the AI market. This can be from an advisory position, but also as users through harnessing the technology to facilitate faster, cheaper and more effective services to their clients.

Further reading:

Insight, 'AI: Artificial intelligence and the legal profession' (The Law Society, 1 May 2020) <> accessed 2 May 2021

Matt Bartlett, ‘Solving the AI accountability gap: Hold developers responsible for their creations’ (Towards Data Science, 5 April 2019) <> accessed 2 May 2021

Neil Sahota, 'Will A.I. Put Lawyers Out Of Business?' (Forbes, 9 February 2019) <> accessed 2 May 2021

Rachel Vanni, 'How Artificial Intelligence Is Transforming the Legal Profession' (Kira, 8 May 2020) <> accessed 2 May 2021

[1] IDC, ‘Worldwide Spending on Artificial Intelligence Is Expected to Double in Four Years, Reaching $110 Billion in 2024, According to New IDC Spending Guide’ (IDC, 25 August 2020) <> accessed 2 May 2021

[2] John McCarthy, ‘What is Artificial Intelligence?’ (McCarthy, initial publication 2004, revised 2007) <> accessed 2 May 2021

[3] Allen Newell, Unified Theories of Cognition (Harvard University Press, 1994)

[4] IBM Cloud Education, ‘Artificial Intelligence (AI)’ (IBM, 3 June 2020) <> accessed 2 May 2021

[5] ibid

[6] ibid

[7] Rowena Rodrigues, ‘Legal and human rights issues of AI: Gaps, challenges and vulnerabilities’ (2020) 4 Journal of Responsible Technology 100005

[8] Lee Gluyas & Stefanie Day, ‘Artificial Intelligence – Who is liable when AI fails to perform?' (CMS, 2018) <> accessed 2 May 2021

[9] Simmons & Simmons LLP, ‘TechNotes – Top 10 issues for Artificial Intelligence’ (Simmons & Simmons LLP, 3 July 2019) <> accessed 2 May 2021