Copyright in the age of AI

This article was first published in Tech+, a newsletter from our tech and innovation team designed to help readers unpack complex topics in the tech space and keep up-to-date with the changes across this rapidly evolving sector. Be the first to receive the next edition and subscribe here.

Many readers will have seen that Getty Images has initiated proceedings for breach of copyright against Stability AI in both the UK and the US. The claims concern Stability AI’s ‘Stable Diffusion’ programme, which Getty considers to have infringed the copyright in over 12 million images stored in its library. We look at the legal and practical issues posed by these claims, and also consider the potential consequences for each party and for owners of copyright works in general.

What is the programme?

Stable Diffusion is an AI programme which allows users to produce high-quality images by inputting text prompts. Depending on the user’s prompts, the images produced can be photorealistic or stylised (eg a cat on a bike in the style of Van Gogh). The programme is a ‘deep learning’ AI, meaning it replicates the way humans process information, although it needs to be trained on vast amounts of data in the form of images, labels and metadata. Getty claims that, in training itself to recognise images, the Stable Diffusion programme has unlawfully copied and processed millions of copyright-protected images, and the associated metadata, owned or represented by Getty Images. Getty maintains that Stability AI required a licence to train its programme and points to the fact that the owners of other image generating AI programmes train theirs under licence.

What is the law?

The Copyright, Designs and Patents Act 1988 (the Act) is the cornerstone of copyright protection in the UK and provides that copying, without a licence, the whole or a substantial part of a copyright work will amount to infringement. Reproducing an image wholesale without a licence will evidently constitute infringement. However, copying is not always clear cut and may concern only part of the copyright work. A body of UK case law has evolved concerning the question of what constitutes copying a ‘substantial part’ of a copyright work. A court will evaluate whether the threshold has been met by:

  • considering the part of the copyright work which has been copied, not the part of the copy; and
  • looking at the qualitative effect of the copying, not simply how much of the copyright work has been copied.

The court will also have to consider the EU test for infringement which asks instead whether the copying goes so far as to reproduce the expression of the author’s intellectual creation.

Getty’s UK statement does not shed much light on the arguments it intends to run to prove that the Stable Diffusion programme copied a substantial part of the copyright works within the Getty Images library. However, by analogy with Getty’s US complaint, it would seem there will be a focus on the ‘training’ process, in which the programme is fed millions of images so that it can learn to associate images with specific tags.

Getty argues that its high quality images are:

‘More useful for training an AI model such as Stable Diffusion than low quality images because they contain more detail or data about the image that can be copied whereas a low quality image, such as one that has been compressed and posted as a small thumbnail on a typical social media site, is less valuable because it only provides a rough, poor quality framework of the underlying image and may not be accompanied by detailed text or other useful metadata.’

Moreover, Getty alleges that Stability AI copied its images with the express aim of enabling Stability AI to supplant Getty Images as a source of creative visual imagery.

New technology will not always fit neatly within established legal parameters; a parallel can be drawn with the need for reform to enable the law both to quantify and to keep pace with digital assets. If Getty’s claim reaches trial, it will require a profound examination of the processing behind deep learning AI and will ask whether developers of AI programmes infringe a copyright work if they show it to their programmes at the input stage.

On the basis of the US claim, we could also see an action for the infringement of Getty’s trade mark, as the renowned watermark has appeared in images generated by Stable Diffusion.

Will Stability AI have a defence?

This case could be a conceptual battle as much as a legal one – it is not beyond the realms of possibility that Stability AI could argue its programme does not ‘copy’ images at all, but rather mimics the way in which humans process information.

In any event, we can expect Stability AI to run a defence that relies on the exceptions under the Act and to argue that its copying amounted to ‘fair dealing’. Stability AI could argue its copying did not infringe on the grounds of:

  • the making of temporary copies;
  • research;
  • parody, caricature and pastiche;
  • text and data mining; or
  • observing, studying and testing of computer programs.

It is unlikely that Stability AI could rely on a fair dealing exception if the copying was commercially beneficial. It is therefore unsurprising that Getty’s US complaint makes extensive reference to Stability AI’s valuation of $1 billion and alleges that Stability AI copied works with the express aim of supplanting Getty Images as a source of creative visual imagery.

What are the consequences for each party?

If the owner of a copyright work can show that the work has been infringed, it can ask the court for interim injunctive relief to stop the infringing act. It is therefore foreseeable that Getty could seek an injunction to stop the proliferation of the programme.

If after trial the court were to find in Getty’s favour, Stability AI could be ordered to pay damages or an account of profits. Whilst the sums pursued by Getty in the US should not be a barometer for the UK claim, the fact that Getty is seeking up to $150,000 per infringed work (in relation to the copying of 12 million works) is indicative of the figures that could be involved.

What are the wider consequences and trends?

The wider consequences could be quite simply vast. Getty has stated that it believes ‘artificial intelligence has the potential to stimulate creative endeavors’. However, it is clear that Getty already sees AI image generating programmes as its direct competitors, so any licence which would enable the likes of Stability AI to use Getty’s image library would not come cheap.

The Getty claims do not stand in isolation: artists in the US are also bringing class action suits against a number of image generating AI providers. There is little doubt that AI image generating tools are true disruptors, as they can bypass the need for a number of creative professions as well as undercut traditional image libraries. But it remains to be seen whether they can also circumvent the copyright protections currently afforded to the owners whose works are fed into such programmes.

This is a live issue for legislators. The UK Government recently consulted on widening the text and data mining exception under the Act. The Government initially stated that the exception would be extended to cover data mining for commercial purposes; however, it was announced on 1 February 2023 that the introduction of this new exception would be scrapped. Legislators will need to strike a balance between protecting the interests of rightsholders and making the UK an appealing jurisdiction for cutting edge technology. The outcome of Getty’s claims will undoubtedly play a part in what the legislative landscape will eventually look like.

It is impossible to say how this claim will be resolved. The existential threat posed by image generating AI to numerous graphics-based professions could be said to mirror the threat posed to the music industry in the early 2000s by peer-to-peer music sharing through websites such as Napster. Napster was found to be illegal and had to shut down, but ultimately the streaming model prevailed; could we see a similar development with image generating AI? The owners of copyright works may receive credit, but if the parallel with the music industry plays out, would it not be the companies running the image generating programmes who stand to gain the most?

Getty’s claim could challenge the fabric of copyright law and mark either a consolidation for rightsholders or a significant step forward for a nascent but already revolutionary technology. If Getty is successful, we could expect the owners of other copyright works to push back against the use of data mining and learning processes without a licence (think ChatGPT and the owners of literary works). Copyright law is ultimately intended to foster creativity by protecting original works. The exceptions are designed to balance the interests of rightsholders with the interests of wider society. If Stability AI successfully defends the claim, there will doubtless be questions as to whether copyright law strikes a sufficient equilibrium and whether it is fit for purpose in the age of AI.

We will of course be paying close attention to these cases as they develop.

To hear more from our Technology and Innovation team, visit their homepage.
