Simon Stokes, a partner specialising in technology and IP, considers the legal issues surrounding the burgeoning field of AI-generated works.
The first sale of an AI art work by a major auction house (Christie’s New York) raised eyebrows last month. The AI-generated portrait of the fictitious “Edmond de Belamy, from La Famille de Belamy” was sold for $432,500, far above its estimate of $7,000 to $10,000. Christie’s described it as follows: “generative Adversarial Network print, on canvas, 2018, signed with GAN model loss function in ink by the publisher, from a series of eleven unique images, published by Obvious Art, Paris, with original gilded wood frame S. 27 ½ x 27 ½ in (700 x 700 mm.)”
Created by ‘Obvious’, a group of three young Frenchmen each aged 25 at the time, the portrait is one of a series of portraits of the fictitious De Belamy family created by the group. The Christie’s portrait is signed by Obvious in the form of a segment of the GAN loss function used to generate the work: min_G max_D E_x[log(D(x))] + E_z[log(1 − D(G(z)))]. A member of Obvious commented that one of the reasons behind the work was to “make everybody understand AI can be creative.”
The sale of the work has certainly brought AI art into the mainstream, although computer-generated art works are not new. What AI can do is create works that mimic, at least visually, how artists paint, although it can do much more than that. Here Obvious used two neural networks linked together into a Generative Adversarial Network (GAN), trained on images of 15,000 art works from the fourteenth to the twentieth century (taken from wikiart.com). The first neural network (the generator) used the training dataset to create portrait image outputs; these were passed to a second neural network (the discriminator), also trained on the image dataset, which then tried to determine whether each portrait image was machine-made (“fake”) or human-made (“real”). The discriminator’s verdicts were fed back to the generator and the process repeated, each network in a sense competing with the other: the generator trying to fool the discriminator, the discriminator looking for “fake” works. Through this unsupervised learning each network gets better at its job as they compete, and eventually a “real”, though still distorted, painting emerges from the GAN.
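The tug-of-war described above is captured by the standard GAN value function, E_x[log D(x)] + E_z[log(1 − D(G(z)))], which the discriminator tries to maximise and the generator tries to minimise. The short Python sketch below evaluates that objective for a batch of hypothetical discriminator scores; it is an illustrative toy under assumed names, not Obvious’s or Barrat’s actual code.

```python
import math

def gan_objective(d_real, d_fake):
    """Value of the GAN minimax objective for one batch.

    d_real: discriminator scores (between 0 and 1) on real paintings
    d_fake: discriminator scores (between 0 and 1) on generator outputs
    The discriminator tries to maximise this value; the generator
    tries to minimise it, i.e. push the d_fake scores towards 1.
    """
    e_real = sum(math.log(p) for p in d_real) / len(d_real)
    e_fake = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return e_real + e_fake

# A confident, correct discriminator scores real works near 1 and
# fakes near 0, so the objective sits near its maximum of 0.
print(gan_objective([0.99, 0.98], [0.01, 0.02]))

# When the generator fools the discriminator completely, every score
# collapses to 0.5 and the objective falls to 2 * log(0.5).
print(gan_objective([0.5, 0.5], [0.5, 0.5]))
```

As the comments suggest, training drives the two networks towards the equilibrium where the discriminator can do no better than guessing.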
The algorithm used by Obvious was not new. GANs were invented by the AI researcher Ian Goodfellow in 2014 (“Goodfellow” loosely translated into French by Obvious as bel ami, or “Belamy”) and are a significant advance in machine learning; Yann LeCun, Facebook’s chief AI scientist, has called GANs “the coolest idea in deep learning in the last 20 years.” Obvious obtained the model from the open source software site GitHub where, according to media reports, an even younger AI artist, Robbie Barrat, had uploaded it in 2017; this was in turn a modified version of earlier work by the researcher Soumith Chintala. This has led to some controversy in the AI art community. Was it right for Obvious to use Barrat’s (and potentially others’) code without clearly acknowledging this in the publicity surrounding the sale, and were they even lawfully allowed to do so? That would be determined by the open source licence terms in operation at the time and any specific arrangements agreed between Obvious and Barrat. It is perhaps worth noting that Barrat’s current open source licence (dated 5 June this year) states: “No outputs of the pre-trained models may be sold or used for profit otherwise.”
That’s one copyright conundrum. Others include the copyright status of the portrait itself. To be protected by copyright, an artistic work must be original; European copyright law would see this as requiring a human author making free and creative choices. On that view the work should be denied copyright protection, as it is not original and not by a human. Against this, it can be argued that existing UK copyright law accommodates computer-generated works, in the sense that those who program the system and make the arrangements necessary for the creation of the work are treated as the ‘author’ and so get copyright. This potentially includes the input of those who “train” the AI model, to the extent the model needs to be trained in order to produce useful results. And what about moral rights in the work: the right of the author to be identified and to object to derogatory treatment of their works?
There are other copyright and IP issues too. The input data may be protected by copyright and/or database right and, as noted earlier, the algorithm as embedded in a software model will also have copyright protection and (if secret, though not the case here) potentially trade secret protection. Certain algorithms might even benefit from patent protection.
Whatever the artistic merits of the portrait of Edmond de Belamy, the work raises a host of IP issues. Anyone using AI needs to be alert to the fact that a number of IP questions around AI remain unresolved, and the need to identify and, if necessary, clear the rights concerned is just as important as in other creative endeavours, including software development. And while the use of open source software may be attractive, it means abiding by the licence terms of the community that developed it.