The final frontier – a lawyer’s voyage into AI


Posted by Ollie Clymow, 22nd November 2019
Our technology experts, Ollie Clymow and Rob Jefferies, examine the law surrounding Artificial Intelligence (AI) technology and developments in the sector.

Growth and opportunities in the AI sector

We may not have the walking, talking robot helpers (or, in some cases, overlords) that '80s science fiction promised us, but AI is no longer a pipe dream. Whilst true "AI" may still be a long way off, sophisticated deep learning algorithms are being rolled out across numerous sectors of the UK economy and are set to change the fabric of how those sectors operate and interact with the rest of the economy and the world.

AI technology offers a diverse range of commercial applications and benefits, including increased efficiencies, greater understanding of customer behaviour and the ability to identify gaps in the market. These benefits are not just theoretical; they have a clear and positive impact on profitability – a report published by Microsoft on 1 October 2019 stated that organisations currently using AI are outperforming those that are not by 11.5%.

The cost of AI technology remains prohibitively high for smaller businesses. However, as the understanding of AI grows and reaches board level, businesses should begin to recognise the value of investing in AI (whether that be increased efficiency or access to new markets), which should entice capital investment even for smaller businesses. The good news for UK businesses is that the Government (in whatever guise we happen to have by the time this article is published) is developing and pushing forward initiatives to keep the UK at the forefront of AI development and adoption. The current thinking is that many countries will eventually fall into one of two categories, as either producers or consumers of AI technology, and there is a clear opportunity and drive to ensure that the UK is the former.

Legal aspects

Unfortunately for early adopters of AI, the law does not move at the same rate of change. To give an overly simplistic example, Moore's Law holds that computing power roughly doubles every two years, which is one of the reasons why CGI in today's TV shows looks comparable to that in films made only a few years earlier. In business terms, many companies can now perform functions that were beyond the reach of all but the tech giants only five years ago. On the legal side, by contrast, the GDPR (probably being drafted in 2015, published in 2016 and in force since 2018) is still a very new and shiny piece of legislation. A further example of the gulf between the two is that we are still waiting for the EU's new ePrivacy Regulation, which was originally scheduled to arrive at the same time as the GDPR but is now tentatively expected in early 2020.
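The gap described above can be made concrete with a little arithmetic. The snippet below is purely illustrative of the doubling rule of thumb, not a precise model of hardware progress:

```python
# Illustrative only: Moore's Law treated as a rough doubling of
# computing power every two years.
def relative_power(years: float, doubling_period: float = 2.0) -> float:
    """Computing power relative to today, after `years` years."""
    return 2 ** (years / doubling_period)

# Over the five years mentioned above, available computing power grows
# roughly 2^(5/2), i.e. about 5.7x, while a single piece of legislation
# such as the GDPR took a comparable span to travel from draft to force.
print(round(relative_power(5), 1))  # prints 5.7
```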

Aside from the moral and ethical dilemmas surrounding the use of AI, organisations operating in the AI space must work hard to stay within legal rules that were simply not designed to encompass the current state of the art. The main issues facing today’s early adopters of AI include:

  • Data privacy

AI technology primarily functions by machines analysing and testing large amounts of data and then ‘learning’ from the results of such analysis and testing. Therefore, data privacy compliance is key in the AI sector.

As a fundamental principle, all personal data processed as part of the development and/or use of AI must be handled in accordance with the GDPR and all other relevant data privacy regulation. As such, the anonymisation and/or pseudonymisation of personal data can be a useful tool to facilitate GDPR compliance when carrying out research and development as part of an AI project.
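As a minimal sketch of what pseudonymisation can look like in practice, a direct identifier can be replaced with a salted hash so that records remain linkable for analysis without exposing the raw identifier. The field names and salt below are hypothetical, and a snippet like this is only one small element of GDPR compliance, not a substitute for it:

```python
import hashlib

# Assumption: the salt is a secret kept separately and securely from
# the data set. Note that, under the GDPR, pseudonymised data is still
# personal data.
SALT = b"hypothetical-secret-salt"

def pseudonymise(record: dict, id_field: str = "email") -> dict:
    """Return a copy of `record` with the identifier replaced by a salted hash."""
    out = dict(record)
    digest = hashlib.sha256(SALT + record[id_field].encode("utf-8")).hexdigest()
    out[id_field] = digest
    return out

record = {"email": "jane@example.com", "purchases": 12}
safe = pseudonymise(record)
print(safe["purchases"])  # analytical fields are left intact
```

Because the same identifier always hashes to the same value, records belonging to one individual can still be grouped for training or analysis.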

It is vital for the success of AI projects that data privacy and GDPR compliance are considered at the start of a project, as non-compliance may prove costly further down the line (particularly when it comes to enticing a buyer as part of an exit strategy).

  • Intellectual property

The identification and protection of IP rights should be a foremost consideration for organisations creating AI technology, particularly where the technology is created jointly as part of a collaborative project involving various organisations.

Having a clear understanding of IP ownership is of paramount importance in the AI space because AI technology, by its very nature, provides scope for machines to create new works and inventions. The question for the parties involved in the project is: who will own such works and inventions?

Again, this question is best considered at the start of an AI project to avoid costly IP litigation in the future.

  • Big data

The expansion of AI capabilities requires a huge amount of data to test and fine-tune the underlying algorithm. As such, the market has seen a rise in organisations pooling their data in order to take advantage of an augmented data set. One way in which such an arrangement could be structured is by the creation of a ‘data trust’, which is considered in detail below.

Data trusts

The Open Data Institute (‘ODI’) published an article on 19 October 2018 setting out its suggested definition of a ‘data trust’. Broadly speaking, the ODI identified that a ‘data trust’ applies the concept of a conventional legal trust to data.

As such, a ‘data trust’ is designed to allow certain persons (the ‘trustees’) to manage and make decisions about the use of an augmented pool of data on behalf of the persons who provided access to the data (the ‘settlors’) and, where applicable, persons who benefit from whatever is created using the data (the ‘beneficiaries’).

However, mapping a centuries-old legal concept onto a 21st-century problem is not without issues. Personal data is a slippery concept; it is not really property, so it cannot be bought, sold or licensed in the traditional sense. Is it part of our legal "person", and what is the extent of our rights to control how it is used?

The organisations establishing the ‘data trust’ would have to identify persons who would be willing to act as trustees (this may include the organisations themselves). Depending on the nature of the ‘data trust’, the beneficiaries of the ‘data trust’ may also be the organisations that contributed the data to the pool and/or a wider group of persons who are seeking a share in the benefits arising from the use of the data. Such benefits may include a proprietary right in any new AI technology and/or algorithm that is created through the ‘data trust’.

As with a traditional trust, the trustees of a 'data trust' must be aware that they will likely owe a fiduciary duty to the beneficiaries of the 'data trust'. As such, the trustees (as fiduciaries) would be subject to a legal obligation to act in the interests and for the benefit of the beneficiaries. This primarily involves each trustee owing the following four duties to the beneficiaries:

  1. not to allow his interests to conflict with the interests of the beneficiaries;
  2. not to profit from his position as trustee at the expense of the beneficiaries;
  3. not to allow his duty to another party to conflict with his duty to the beneficiaries; and
  4. not to use or disclose the confidential information of the beneficiaries.

As you can imagine, the requirement to comply with the above duties will create added complication where, for example, two businesses that are competitors in a market space create a ‘data trust’ and each acts as a trustee and a beneficiary. In this example, each business would owe fiduciary duties to the other, which would likely conflict with each business’s usual commercial operations in the market. This is one area where the concept of a ‘data trust’ is lacking and requires greater clarity from market regulators and the legislature.

Another problem with the traditional concept of a trust is that once the settlor has set up the trust, deposited their property and expressed their wishes, their role ends. Unless they are also a beneficiary of the trust, they have no further right to take any benefit from the assets deposited into the trust. So, let us assume we solve this problem in the data trust by making all the people who put their data into the trust beneficiaries as well. The trustees now have to balance the interests of the commercial beneficiaries with the potentially competing interests of the personal beneficiaries (i.e. the data subjects). Without a clearly defined and detailed operating structure, the trustees could be in an almost impossible position.

Whilst the ODI’s suggested definition of a ‘data trust’ does provide a very useful starting point for organisations seeking to create a ‘data trust’, the definition has not been tested by the courts. Therefore, there remains a degree of speculation as to how best to structure a ‘data trust’ from a legal perspective and how the associated liabilities should be allocated. This is an area that is calling for clarity from the industry regulator, the ICO. Watch this space!

This article has been co-written by Ollie Clymow and Rob Jefferies.
