A recent European Commission proposal will affect UK artificial intelligence (AI) providers. On 28 September 2022, the European Commission proposed a Directive on non-contractual civil liability rules for AI (the AI Liability Directive), complementing the AI Act proposed in April 2021. The proposal was accompanied by proposed revisions to the existing Product Liability Directive (85/374/EEC).
As the UK has left the EU, it will not need to incorporate the AI Liability Directive into its national law. However, UK providers who place AI systems on the market, or put them into service, in the EU will be subject to the Directive. Separately, on 29 March 2023 the UK Government published a White Paper on AI setting out five principles that regulators should consider when issuing practical guidance. We look at both developments below.
The Product Liability Directive is the backbone of the EU's current liability framework for defective products, but its application alongside national regimes has posed challenges. While the Directive provides for strict liability, national fault-based liability regimes require the claimant to establish fault, damage and causation. The characteristics of AI, including the complexity of products, services and the value chain, make it difficult for victims to identify the human behaviour that caused the damage. This is particularly challenging in the case of autonomous AI. The AI Liability Directive has therefore been proposed to simplify the legal process of claiming compensation for damage caused by AI, such as damage suffered as a result of privacy breaches or unlawful algorithmic discrimination.
The AI Liability Directive applies to non-contractual civil liability but not criminal liability, and it will not apply retrospectively. The first of its two major provisions is Article 3, which provides that national courts may order the disclosure of relevant evidence about specific high-risk AI systems that are suspected of having caused damage.
It seeks to address the concern that victims' inability to access evidence bars them from claiming compensation. To request disclosure, the potential claimant must establish the plausibility of their claim, and the courts may only order the disclosure of relevant evidence that is necessary and proportionate to support it. In assessing proportionality, courts must consider the legitimate interests of all parties, including third parties, particularly the protection of trade secrets and confidential information. The Directive does not, however, define what is and what is not relevant evidence. Failure to comply with an order to disclose or preserve evidence gives rise to a presumption that the defendant has not complied with a relevant duty of care.
Article 4 of the AI Liability Directive seeks to address the challenges victims face in establishing the causal link between non-compliance with a duty of care and the output produced by an AI system or the failure of an AI system to produce an output which has caused damage. It provides for a presumption of causality where the claimant has demonstrated:
- non-compliance with a relevant duty of care;
- that it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output; and
- that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.
The European Parliament and the Council will need to formally adopt the AI Liability Directive under the ordinary legislative procedure before it can come into force. As noted above, the UK will not need to incorporate the Directive into its national law; it will be relevant to UK businesses only when they do business in the EU market.
On 29 March 2023, the UK Government published a white paper on AI regulation. Rather than introducing a single new regulator for AI governance, the government will empower existing regulators, such as the Competition and Markets Authority and the Health and Safety Executive, to develop context-specific approaches tailored to the way AI is actually used in the industries they oversee. The white paper outlines five principles that these regulators should consider, which are:
- Safety, security and robustness;
- Appropriate transparency and explainability;
- Fairness;
- Accountability and governance; and
- Contestability and redress.
Over the next 12 months, regulators will issue practical guidance setting out how to implement these principles in their sectors, and legislation could be introduced to ensure that regulators consider the principles consistently.
This article was first published on 19 October 2022 and updated on 20 April 2023.