Every year, Artificial Intelligence seems to evolve faster than the year before. Consequently, regulatory bodies, be they national or international, strive to keep pace, regulating its development in line with ideals enshrined in the General Data Protection Regulation (GDPR), such as express consent for data processing. This article highlights five recent regulations and why they matter to the future of AI.

1: European Union: AI Strategy, April 2018

The EU has attempted to maintain its position as the front-runner in AI development by outlining its ethos on AI development within the AI Strategy. The first key point within the Strategy was to ensure new technologies "benefit people and society as a whole", in line with the Charter of Fundamental Rights of the EU, without sacrificing the privacy of people's data. Secondly, the Strategy emphasised the need to establish clear safety and liability standards, potentially to reduce reluctance over AI use.

Overall, whilst the Strategy's goals are general ideals rather than concrete legislation, its importance lies in the subsequent regulation it may inspire: whether through European Parliament resolutions (which may clarify measures the member states agree upon), the recent European Commission White Paper on Artificial Intelligence, or legislation within individual EU member states.

Other EU legislation to check out: White Paper on Artificial Intelligence, European Commission High Level Expert Group Guidelines

2: United Kingdom: ICO and Alan Turing Institute Guidance on AI Explainability, June 2020

The second most recent and perhaps the most detailed ICO guidance to date, this document helps to 'explain the processes, services and decisions delivered or assisted by AI'. It is split into three sections: the basics of explaining AI, explaining AI in practice, and the implications of explaining AI for the implementing organisation.

One reason the guidance is particularly important is that it confirms a definition of Artificial Intelligence which splits it into two types: those with and those without a 'human in the loop' of the AI process. The split was suggested by Article 22 of the GDPR, which initially implied that solely automated AI may be used only if authorised by law. The guidance expands on this idea by stating that solely automated AI may only be used 1) given an express legal basis, 2) if 'necessary for entering into or performing a contract between the individual and the organisation', or 3) with the individual's explicit consent.

The guidance is not binding under the Data Protection Act 2018. However, it elucidates 'good practice' for AI implementation by businesses: good practice which echoes the importance the EU AI Strategy places on explainability, particularly for AI with 'black box' algorithms.

Further recent ICO guidance: ICO Guidance on AI and Data Protection, July 2020

3: United Kingdom: Office for Artificial Intelligence and World Economic Forum Guidelines for AI Procurement, June 2020

This guidance is significant for focusing on AI implementation within teams working in central government rather than purely on businesses. It includes practical implementation steps, with suggested solutions to issues that may arise during or after procurement.

Its importance lies not only in suggesting an international standard but also in advising the avoidance of AI with "black box" algorithms. Such algorithms cannot be accessed by teams should an individual inquire about the AI's decision, thus hindering the explainability and interpretability which the GDPR seeks to enforce. Hence, future AI may be shaped by those two factors as a core feature of the technology rather than an ancillary requirement.

4: United States: White House's 10 Principles on Government AI Regulation, January 2020

Across the Atlantic, the US has kept up with its European competition. By outlining ten principles, which include non-discrimination, public participation and trust in AI, as well as the more economically focused principles of risk assessment/management and inter-agency coordination, the White House sent a message to the federal agencies. Although the principles are similarly general to the EU Strategy, they are crucial to the National Institute of Standards and Technology (NIST), the organisation tasked with promoting research, knowledge and public-private partnerships in the AI field.

Hence, these principles signal a more formalised approach to AI implementation across the nation as a whole, but may also serve as tenets for NIST when coordinating AI implementation standards at the federal level, perhaps through other agencies. NIST is about to get busy.

5: China: White Paper on AI Standardisation, May 2020

Can we really finish an article about technology without mentioning China? The White Paper largely contains points similar to those above (e.g. human interests, transparency and liability), which is significant because this similarity of ethical principles is seemingly at odds with the country's growing use of AI within the surveillance and military sectors. Furthermore, the sheer amount of data available for accurate AI machine learning puts China at the forefront of the industry's development. Hence, China's White Paper is a crucial indicator to the world that even AI use as massive and diverse as China's may still follow EU-style principles in the future.

Further analysis of China's AI use: https://www.forbes.com/sites/cognitiveworld/2020/01/14/china-artificial-intelligence-superpower/

Overall, international legislation seems to focus on establishing human-centric standards out of general and somewhat vague principles. The emphasis on explainability in AI implementation portrays a balancing act between the efficiency of AI innovation and the rights of the humans exposed to its decisions. The balance currently struck is bad news for 'black box' and solely automated AI.

Perhaps the regulators seek to gradually introduce a more laissez-faire approach, but that first requires setting more stringent standards. The similarity of China's and the US's approaches to the EU's and UK's strategies (and vice versa), despite the differences in their political systems, may imply a shift toward more uniform AI regulation which could level the playing field for future AI collaboration and creation.

Writer: Anastasia (Stacy) Stepanova

Editor: Conor Courtney