Felix Winter
He is studying law in his sixth semester at Humboldt University Berlin and spent two semesters at Northumbria University in Newcastle. He completed internships at the Berlin public prosecutor's office and with a renowned patent attorney. He is especially interested in competition law and artificial intelligence.

I. Introduction

New technologies have always been followed by amendments to liability systems. For instance, the challenges caused by the invention of the railway and the motor car resulted in the introduction of strict liability legislation in many European countries.[1] Nowadays, society is confronted with a similar but new challenge, whose repercussions will be even stronger than those of its predecessors: artificial intelligence. Although it is difficult to break down the numerous problems of artificial intelligence, for the purpose of clarity this essay will focus on its two most distinctive features: complexity and unpredictability. Inter alia due to these challenges, the European Commission's Expert Group on Liability and New Technologies has recently concluded that “[i]t is possible to apply existing liability regimes to emerging digital technologies, but (…) due to the limitations of existing regimes, doing so may leave victims under- or entirely uncompensated. The adequacy of existing liability rules may therefore be questionable”.[2] Whether this conclusion deserves support or rejection will be examined in this essay using the example of product liability.

II. Complexity and Unpredictability

The complexity of autonomous systems stems from the fact that modern-day hardware is already a composition of numerous parts, each requiring a high level of technical knowledge. Combined with an increasing number of digital components such as artificial intelligence, this hardware creates a highly complex technology that no longer resembles the stereotypical source of harm on which the current liability framework pivots. For example, victims will face extreme difficulties in identifying the cause of accidents involving autonomous vehicles, inter alia because of their interaction with other autonomous systems or cloud services.[3] The second major challenge is unpredictability. One cannot know in advance what steps an autonomous system will take to achieve its goal.[4] Some systems are even programmed with a certain openness, allowing external data to be processed by the system.[5] If no one can say for sure how a product will develop, who should then have to bear the danger and risk? In short, the problem of artificial intelligence is its complete uniqueness, for which the current liability rules were not made.

III. Product Liability as a Solution?

One possible way to handle artificial intelligence under the current legal framework is product liability based on Directive 85/374/EEC.[6] In general, product liability has three main advantages. Firstly, it enables the consumer to sue the producer for the whole damage instead of seeking contribution from every party in proportion to their fault.[7] Secondly, producers also benefit, since they can price the risk of damages into the costs of their products or disclose such risk factors in a prospectus when seeking investors.[8] Thirdly, potential liability under the directive might encourage AI developers to design products with intensified safety and control mechanisms.[9] However, the Product Liability Directive also faces problems regarding the complexity and unpredictability of AI.

  1. Problems Regarding Complexity

a) Software as Product

The first requirement of the directive is a movable product.[10] This raises the question whether software can be regarded as movable. Otherwise, cases such as a self-driving vehicle colliding with a pedestrian because of an error in its software, or an erroneous health app for insulin therapy endangering a patient's life, could not be covered.[11] The prevailing opinion across Europe stipulates the physicality of the product as the decisive requirement, which software per se does not have. Nor is software explicitly included in the product definition, as electricity is.[12] According to this opinion, the directive's application depends on whether the software is supplied on a tangible medium. Where software is downloaded over the Internet, no “product” is involved and the directive thus does not apply.[13] However, this distinction is arbitrary. Linking liability for software to distribution on a fixed carrier was from the outset an auxiliary construct. It might have worked at a time when software was distributed on corporeal storage devices such as USB flash drives or CDs. At a time when software is regularly downloaded from the cloud, it would be wrong to continue down this path.[14] Instead, the rationale of the directive should be determined in order to apply it to the digital market: the core purpose of product liability is to encourage the manufacturer of industrially manufactured goods to carefully assess their liability and to protect the user from damage during the use of the product. Software developers should not be exempted from this duty.[15] Against this proposal it could be argued that the wording of Article 2 shows very clearly - because of the explicit mention of electricity - that the legislator was aware of the problem of non-physical goods. However, this does not necessarily mean that incorporeal goods other than electricity should be excluded from the scope of the directive.
The reference to electrical energy can also be interpreted as clarifying that the requirement of physicality for products under the member states' law should not prevent the directive's application. For this reason, the legislator expressly mentioned electricity, which was at the time the only incorporeal good of relevance because of its industrial production. Nowadays, software too is produced industrially and distributed on a massive scale. In conclusion, if the directive were rewritten today, software would certainly be explicitly mentioned alongside electricity.[16] In view of the immense importance of software, the existing legal uncertainty as to whether software is covered by the directive is hardly acceptable. A clarification by the European legislator would therefore be desirable.[17]

b) The Requirement of a Product Defect

The major requirement in a product liability claim is the ascertainment of a product defect. A product is defective when it does not provide the safety which a person is entitled to expect.[18]

It is necessary to distinguish between two sub-species of product defect. The first is a manufacturing defect, which can be established relatively easily because the harmful product differs from the blueprint used by the manufacturer during the production of the item. An example would be the incomplete installation of software in autonomous cars.[19]

More difficult is the design defect, which means that a product was manufactured correctly but, due to its design, poses a risk to consumers. Usually, the whole product line is affected.[20] In the case of autonomous systems, applying the standard for design defects requires the courts to inquire into the software code. They must identify software errors that could have been avoided by an alternative program, one that would have worked equally well but would have avoided the accident.[21] The new challenge posed by AI for the concept of design defects is to find a suitable comparison. One could compare the performance of an autonomous system to a similar product operated by humans. Concerning autonomous cars, this would in practice result in a human driver test: whenever a reasonable human driver would have been able to avoid an accident caused by an autonomous system, the algorithm would be held to be defective in design.[22] Although reports have shown that the number of accidents will decrease dramatically with the use of self-driving vehicles, accidents will still occur.[23] The problem with this approach is that autonomous systems cannot avoid every accident that a human would have been able to avoid - and vice versa. For example, self-driving vehicles will never speed or drive drunk, yet they might fail to recognise simple things such as lorries, which every human would have identified as such. This was the case when a self-driving Tesla caused the first fatal accident involving an autonomous system.[24] On balance, the comparison to a similar product operated by human beings is not preferable, because it would demand of autonomous systems a standard which they cannot fulfil.[25]

Bearing in mind that algorithms are trained to evolve through self-learning and are not programmed with “if … then” commands, the concept of design defect should not focus on a particular car but rather on the whole fleet of cars designed by the same manufacturer. It should be system-orientated.[26] The decisive question must be whether the autonomous system, as the sum of all products with the same algorithm, caused an unreasonable number of accidents overall. Whether the individual accident could have been avoided by a human driver should not be of relevance.[27] Unfortunately, such a system-orientated concept will have negative effects on competition in the market. The algorithm which caused the harm will be compared to those of other manufacturers in order to ascertain a design defect. The consequence will be the creation of an optimal-algorithm test discriminating against all algorithms except the best one. A single algorithm on the market that would have avoided the particular accident will be sufficient to find the algorithm which caused the harm to be defective. The manufacturer of the safest algorithm will have huge financial advantages over its competitors, because the latter could be sued for every accident caused by their weaker algorithms. Hence, this approach will increase the first-mover advantage and suppress competition in the market.[28]

c) Burden of Proof

The injured person bears the burden of proving the defect, the damage and the causal link between the two.[29] Unsurprisingly, as studies have shown, the burden of proof poses the biggest obstacle for victims seeking compensation from the producer.[30] This obstacle is likely to be exacerbated even further by the rise of digital products because of their nature and lack of transparency.[31]

However, the positive flip side of digitalisation is that it creates opportunities to monitor the actions of autonomous systems more closely and to store this information for the benefit of the victim. The data offered to victims, courts and regulators is comparable to the data already available in the case of an airplane crash, and will significantly diminish the burden of proof on victims and courts.[32] For this reason, the Commission's Expert Group proposed that a “failure to log, or to provide reasonable access to logged data, should result in a reversal of the burden of proof in order not to be to the detriment of the victim.”[33] In part, this is already the case, for example under the German Road Traffic Act, whose Section 63a introduced a right for victims of motor accidents to access the “black box” of a car equipped with autonomous driving functions.[34] The purpose is to help the victim identify the cause of the accident and to assess whether the automated system or the human driver was responsible.[35]

Such access rights will probably remove the difficulties of proving the defect of the product. However, this cannot be concluded with certainty, given the few autonomous products operating on the market at present. If it turned out that victims still faced a disproportionate obstacle in proving defectiveness with regard to autonomous systems, two remedies must be considered. The first would be to remove the burden of proving the defect and substitute it with the mere burden of proving the damage. This would reverse the burden of proof and leave the producer with the challenge of proving that the product was not defective at the time when the damage occurred.[36] The second, even more extreme measure would be to abandon the concept of defect and introduce a system of pure strict liability for autonomous systems. The consequence would be liability of the manufacturer for any harm caused by the autonomous system, unless the damage was the result of the fault of the victim, a third party or force majeure.[37] In conclusion, the burden of proof should not be subject to legislative change; lawmakers should not sharpen their liability system but should instead provide relief for victims by introducing access rights.

d) Including Data and Privacy in the Definition of Damage

At a time when data and information are becoming increasingly important due to new technologies, it is questionable whether the directive can afford to cover only damage to tangible property and exclude data from the definition of damage.[38] Accordingly, the Commission's Expert Group recently concluded that “[t]he destruction of the victim’s data should be regarded as damage, compensable under specific conditions”,[39] because “[w]ith much of our lives and our “property” becoming digital (…) it is no longer appropriate to limit liability to the tangible world”.[40] One can be optimistic that, in the future, cases in which information is disclosed and the victim’s privacy is severely infringed will fall within the scope of the directive.[41] Victims could claim damages when, for instance, their health data is made available to a third party through a defective app, or when information recorded about them by smart speakers, which are vulnerable to cybercrime, is leaked to a third party. If the Commission follows this advice, it would consequently also need to abolish the €500 threshold, which prevents victims from pursuing their damages under the directive if they fall below that sum.[42] Otherwise, a way must be found to evaluate the value of damage such as the loss of data or the infringement of privacy. Even then, the value would likely be below €500, rendering the extension of the directive ineffective in practice.[43]

  2. Problems Regarding Unpredictability

a) Assumption that Products are Static Once Released

European product liability is based on the assumption that the product does not change once it has left the production line. However, this is not the case with artificial intelligence.[44] This becomes particularly evident in the use of the phrase “the time when the product was put into circulation”,[45] which is the foundation for defences such as the development risk clause as well as for the definition of defect. Concerning the latter, Article 6 provides that only the time at which the product is placed on the market is relevant.[46] Thus, once a product has been put into circulation without defect, it cannot become defective subsequently through a further development in science and technology. In short, there is no obligation to monitor products.[47] This means that later changes to the product do not give rise to a defect. Consequently, a product would remain flawless even if defective software updates were installed afterwards.[48] This is problematic in the case of automated products that require continuous compliance with regulations, such as automated vehicles.

Another question is the decisive point in time for products that change themselves through artificial intelligence. On the one hand, the system develops new and independent solutions after being placed on the market; on the other hand, this learning process is based on algorithms that were already part of the product at the time it was put on the market.[49] Especially where the learning process of such a system is based not on technically inadequate programming but on the openness of the system, and where this leads to dangerous behaviour after the system has been placed on the market, the question arises whether the manufacturer should be liable for such “black box effects”.[50] It could be argued that the producer is aware of the openness of the system and the risks involved, and should therefore be exempted from liability only for damage caused by atypical risks.[51] The Commission's Expert Group also points in this direction when it concludes that “[m]anufacturers of products or digital content incorporating emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product under the producer’s control after it had been placed on the market.”[52]

b) Development Risk Clause

Related to the preceding paragraph is the already mentioned development risk clause. According to Article 7, the producer is, inter alia, not liable if the defect could not be discovered with the knowledge available at the time.[53] This is particularly problematic with regard to new technologies, whose safety is not yet fully known. The effect of the clause is that consumers must bear the risk of such new technologies, while producers benefit from the distribution of inventive products.[54] The Commission's Expert Group therefore concluded that “the development risk defence (…) should not be available in cases where it was predictable that unforeseen developments might occur.”[55]

c) Ten-Year Limitation Period

Article 11 requires the member states to introduce a maximum limitation period of ten years from the date the product was put into circulation; after this period, no action can be brought.[56] The problem is that for some products, such as drugs or emerging technologies, the damage is likely to become visible rather late.[57] For this reason, it has been argued that the ten-year time limit should be removed for these kinds of products. For instance, a new smart product commercialised for children might emit an unusually high degree of radiation. This could lead to infertility of its users, which would unfortunately be diagnosed only years later and would therefore prevent the victim from claiming damages under the directive.[58] The Commission justifies this restriction on the grounds that it strikes a fair balance with the producer's strict liability and creates an incentive to invest in new technology.[59] Furthermore, an extension might lead to significantly increased costs for producers, which would be disproportionate to the advantage for victims.[60]

It is important to consider the case law of the ECtHR. In exceptional circumstances, such as the late appearance of damage, a ten-year limitation could violate the right of access to justice.[61] On this basis, a preferable solution would be a compromise: on the one hand, the ten-year limitation is maintained; on the other hand, it allows flexibility for very unusual circumstances.[62] Alternatively, the time limitation could depend on the type of damage. For example, the ten-year limitation could be maintained for damage to property but prolonged for personal injuries.[63]

IV. Conclusion

In conclusion, the directive remains a powerful tool because of its ability to strike a fair balance between consumer protection and producers' interests. The Expert Group deserves support in its finding that the current product liability regime may leave victims under- or entirely uncompensated. However, this does not mean that the directive is unsuitable for future technologies. If the identified uncertainties were removed, the directive could also work properly for emerging technology. Firstly, software must be regarded as a product. Secondly, the point of comparison must be the sum of all products with the same algorithm. Thirdly, damage to data should trigger product liability. Fourthly, the ten-year limitation period should depend on the type of damage.

In general, it is necessary that the Product Liability Directive be clear and easy to apply. Otherwise, no one wins: consumers will have difficulties in claiming their rights, and producers will think twice before putting innovative products on the market. Once this uncertainty has been removed, sticking with the directive is the right way forward, rather than hastily overturning the legal system, as would be the case, for example, with the introduction of a separate legal personality for autonomous systems.

[1]Gerhard Wagner, ‘Robot Liability’ (2018) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3198764> accessed 17 April 2020, 1.

[2]Expert Group on Liability and New Technologies – New Technologies Formation, Liability for Artificial Intelligence and other emerging technologies (European Union 2019), 19.

[3]Expert Group on Liability and New Technologies – New Technologies Formation (n 2) 32-33.

[4]Roman Yampolskiy, ‘Unpredictability of AI’ (2019) <https://arxiv.org/pdf/1905.13053.pdf> accessed 17 April 2020.

[5]Expert Group on Liability and New Technologies – New Technologies Formation (n 2) 33.

[6]Duncan Fairgrieve, ‘Product Liability in the United Kingdom’ [2019] EuCML 170.

[7]Jacob Turner, Regulating Artificial Intelligence (Palgrave Macmillan 2018), 94.

[8]Jacob Turner (n 7) 95.

[9]Horst Eidenmüller, ‘The Rise of Robots and the Law of Humans’, [2017] No. 27 Oxford Legal Studies Research Paper, 8.

[10]Art. 2 Directive 85/374/EEC.

[11]Astrid Seehafer and Joel Kohler, ‘Künstliche Intelligenz: Updates für das Produkthaftungsrecht?’ [2020] EuZW 213, 214.

[12]Art. 2 Directive 85/374/EEC.

[13]Duncan Fairgrieve and Eleonora Rajneri, ‘Is Software a Product under the Product Liability Directive?’, [2019] IWRZ 24.

[14]Gerhard Wagner, ‘Robot, Inc.: Personhood for Autonomous Systems?’ [2019] Vol. 88, No. 2 Fordham Law Review 591, 604.

[15]Gerhard Wagner, ‘ProdHaftG’ in Franz Jürgen Säcker, Roland Rixecker, Hartmut Oetker and Bettina Limperg (eds), Münchener Kommentar zum Bürgerlichen Gesetzbuch (7th edn, C.H. Beck 2017), § 2 [19].

[16]Gerhard Wagner (n 15) § 2 [20].

[17]Astrid Seehafer and Joel Kohler (n 11) 214.

[18]Art. 6 Directive 85/374/EEC.

[19]Gerhard Wagner (n 14) 604.

[20]FindLaw Attorney Writers, ‘Product Liability: Manufacturing Defects vs. Design Defects’ (FindLaw, 30 January 2017) <https://corporate.findlaw.com/litigation-disputes/product-liability-manufacturing-defects-vs-design-defects.html> accessed 17 April 2020.

[21]Gerhard Wagner (n 14) 605.

[22]Gerhard Wagner (n 14) 605.

[23]National Highway Traffic Safety Administration, ‘Federal Automated Vehicles Policy - Accelerating the Next Revolution In Roadway Safety’ (September 2016) https://www.transportation.gov/sites/dot.gov/files/docs/AV policy guidance PDF.pdf accessed 17 April 2020, 5.

[24]Will Oremus, ‘The Tesla Autopilot Crash Victim Was Apparently Watching a Movie When He Died’ (Slate, 01 July 2016) < https://slate.com/business/2016/07/tesla-autopilot-crash-victim-joshua-brown-was-watching-a-movie-when-he-died.html> accessed 17 April 2020.

[25]Gerhard Wagner (n 14) 604-605.

[26]Mark Geistfeld, ‘A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation’ [2017] 105 California Law Review 1611, 1654 - 1657.

[27]Gerhard Wagner (n 14) 606.

[28]Gerhard Wagner (n 14) 606.

[29]Art. 4 Directive 85/374/EEC.

[30]Commission staff working document (n 41) 25-26.

[31]Herbert Zech, ‘Liability for Autonomous Systems: Tackling Specific Risks of Modern IT’ (May 2018) in Sebastian Lohsse, Reiner Schulze and Dirk Staudenmayer (eds), Liability for Robotics and in the Internet of Things (Nomos/Hart forthcoming) <https://ssrn.com/abstract=3195676> accessed 17 April 2020, 6.

[32]Gerhard Wagner (n 1) 14.

[33]Expert Group on Liability and New Technologies – New Technologies Formation (n 2) 4.

[34]Section 63a (3) StVG.

[35]Gerhard Wagner (n 1) 14.

[36]BEUC, ‘Review of Product Liability Rules - BEUC Position Paper’ (2017) <https://www.beuc.eu/publications/beuc-x-2017-039_csc_review_of_product_liability_rules.pdf> accessed 17 April 2020, 7.

[37]Gerhard Wagner (n 1) 14.

[38]DG Communications Networks, Content & Technology by Deloitte, Study on emerging issues of data ownership, interoperability, (re-)usability and access to data, and liability (European Union 2017), 124.

[39]Expert Group on Liability and New Technologies – New Technologies Formation (n 2) 4.

[40]Expert Group on Liability and New Technologies – New Technologies Formation (n 2) 59.

[41]Piotr Machnikowski, European Product Liability: An Analysis of the State of the Art in the Era of New Technologies (Intersentia 2016), 11.

[42]Art. 9(b) Directive 85/374/EEC; Charlotte de Meeus, ‘The Product Liability Directive at the Age of the Digital Industrial Revolution: Fit for Innovation?’ [2019] EuCML 149, 151.

[43]Charlotte de Meeus (n 42) 152.

[44]Jacob Turner (n 7) 98.

[45]e.g. Art. 6 Directive 85/374/EEC.

[46]Art. 6 Directive 85/374/EEC.

[47]Astrid Seehafer and Joel Kohler (n 11) 215.

[48]Benjamin Raue, ‘Haftung für unsichere Software’ [2017] NJW 1841, 1843.

[49]Astrid Seehafer and Joel Kohler (n 11) 215.

[50]Astrid Seehafer and Joel Kohler (n 11) 215.

[51]Peter Rott, ‘Rechtspolitischer Handlungsbedarf im Haftungsrecht, insbesondere für digitale Anwendungen, Gutachten im Auftrag des Verbraucherzentrale Bundesverbandes e.V.’ (May 2018) <https://www.vzbv.de/sites/default/files/downloads/2018/05/04/gutachten_handlungsbedarf_im_haftungsrecht.pdf> accessed 17 April 2020, 34.

[52]Expert Group on Liability and New Technologies – New Technologies Formation (n 2) 3-4.

[53]Piotr Machnikowski (n 41) 78.

[54]Jacob Turner (n 7) 98.

[55]Expert Group on Liability and New Technologies – New Technologies Formation (n 2) 43.

[56]Art. 11 Directive 85/374/EEC.

[57]Duncan Fairgrieve, Geraint Howells and Marcus Pilgerstorfer, ‘The Product Liability Directive: Time to Get Soft’ [2013] 4 JETL, 14.

[58]Charlotte de Meeus (n 42) 153.

[59]Commission, ‘Green Paper on Liability for Defective Products’ COM(1999) 396 final <http://aei.pitt.edu/1217/1/defective_products_gp_COM_99_396.pdf> accessed 17 April 2020, 27.

[60]Piotr Machnikowski (n 41) 96.

[61]Howald Moor and Others v Switzerland App nos 52067/10 and 41072/11 (ECtHR, 11 March 2014), [79].

[62]Charlotte de Meeus (n 42) 153.

[63]European Commission, ‘Minutes of the Meeting of the Expert Group on “Liability and New Technologies – Product Liability Formation”’ (2018) <https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=22625> accessed 17 April 2020, 6.