Key points for artificial intelligence laws and regulations


8th November 2023

We will have all heard of artificial intelligence (AI) by now. But what actually *is* AI? And why is it unique from a legal perspective? In this article we delve into artificial intelligence laws and regulations.

Defining AI

The first thing to note about AI is that there is not yet a commonly accepted definition of what it actually is. Most people will have an idea of what AI is, but the lack of a standard definition means everyone's idea will be slightly different. For the lawyers among us, we all (secretly or not) love a definition. And this is for good reason – imagine trying to read a contract without defined terms. It would be almost impossible to understand what the contract means, and it would create a lot of uncertainty. The same is true of AI.

However, a common theme being adopted across regulators, lawyers and authors when talking about AI is the concept of autonomy. The latest draft of the EU’s AI Act refers specifically to autonomy:

'Artificial Intelligence System' means a machine-based system that is designed to operate with varying levels of autonomy [..].

To be clear, “autonomy” is not the same as “automated”. Automated technology follows a set logic: if X + Y, then Z. Take, for example, automated technology like a smart thermostat: if the temperature in a room (X) drops below a certain threshold (Y), then the thermostat switches on the heating (Z). There is no discretion (or autonomy) on the part of the system. These systems are static and predetermined.
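To make that contrast concrete, here is a minimal sketch of the fixed if-then logic described above. It is purely illustrative: the 18-degree threshold and the function name are invented for the example, not taken from any real thermostat.

```python
# A minimal sketch of "automated" logic: the rule is written by a human in
# advance and never changes, however many times it runs.
# The 18.0 degree threshold is an illustrative value only.

def thermostat_should_heat(room_temperature_c: float,
                           threshold_c: float = 18.0) -> bool:
    """If the room temperature (X) drops below the threshold (Y), heat (Z)."""
    return room_temperature_c < threshold_c

print(thermostat_should_heat(16.5))  # True  - heating switches on
print(thermostat_should_heat(21.0))  # False - heating stays off
```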

In contrast, AI systems are dynamic. They can change and adapt autonomously, in ways that can be unpredictable. Take, for example, machine learning, currently the most common type of AI technology. Machine learning systems are typically built around “neural nets”, which are made up of nodes. Each node represents a small piece of information, and the nodes are linked by connections. These connections help the system hold, process and make sense of the information it is given. The connections change over time, so the neural net improves (or is meant to improve) as it makes new connections and is fed more information.
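By way of contrast with the thermostat rule, the sketch below shows, in a very simplified form, a single node whose weighted connections adjust as it is fed more examples (a perceptron-style update). The numbers, the learning rate and the toy “OR” task are all invented for illustration; real neural nets involve many thousands or millions of such connections.

```python
# An illustrative single "node" with weighted connections that change as it
# sees more data (a perceptron-style update). All values are made up purely
# to show the contrast with the fixed thermostat rule above.

weights = [0.0, 0.0]   # the "connections" - these are what change over time
bias = 0.0
learning_rate = 0.1

# Toy training data: inputs and the output the system should learn (an OR rule).
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for _ in range(10):                      # feed the data in repeatedly
    for inputs, target in examples:
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        prediction = 1 if activation > 0 else 0
        error = target - prediction
        # Adjust the connections in the direction that reduces the error.
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print(weights, bias)   # the learned connections, not a rule a human wrote
```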

So now, with the introduction of AI, we have technology which not only replicates what humans can do, but can make decisions independently of humans and, in some instances, go beyond what humans can do.

This is unique from a legal perspective: existing laws are not set up to deal with non-human decision-makers, which is what makes artificial intelligence laws and regulations so complex.

Regulating AI

The challenge now is in regulating non-humans. There are three approaches usually taken to regulation:

  • (1) no regulation, which is typically favoured by the USA and which proponents argue is best for innovation;
  • (2) soft regulation, like that adopted by Singapore, which follows a system of detailed guidance but without penalties for non-compliance; and
  • (3) total regulation, which the EU generally favours, where compliance is compulsory and there are significant sanctions for non-compliance.

The UK is still undecided on which direction to follow, but in the meantime bodies such as the OECD and UNESCO have released detailed guidance built around value-based principles, referring to concepts such as transparency, explainability and accountability, which should be kept in mind when approaching and programming AI systems. The Information Commissioner's Office (ICO) has also released guidance on AI and data protection, to help organisations remain data protection compliant when using AI, as well as guidance on AI decision-making, which offers practical help in explaining the processes, services and decisions delivered or assisted by AI.

The EU's AI Act takes a risk-based approach. Uses posing an unacceptable risk (such as social scoring) will be prohibited with little exception, while systems ranging from high risk to minimal risk will face, respectively, conformity assessments, transparency obligations or no obligations at all. The Act includes steep penalties for non-compliance, proposing fines of up to EUR 30 million or 6% of global annual turnover. The latest amendments were adopted by Members of the European Parliament on 14 June 2023, and the draft text is now subject to negotiations between EU Member States and the European Commission. This can be a lengthy process, but it is anticipated that the Act will be adopted before June 2024. It is likely to set the benchmark for future AI legislation, so we are monitoring this space closely and will release updates as they become available.

If you need legal advice on tech issues, speak to our specialist technology lawyers.

