Insights from the UK CMA’s Initial Report on AI Foundation Models
The UK’s Competition and Markets Authority (CMA) has recently published an ‘Initial Report’ examining AI foundation models (FMs). The Report follows on from the ‘Initial Review’ the CMA launched earlier this year. Whilst the Report does not identify immediate concerns under competition or consumer law, it does highlight several areas of uncertainty.
Foundation models (FMs) are versatile AI systems that can be adapted for a range of different purposes. Recent developments in FMs and their rapid adoption (for example, ChatGPT and Office 365 Copilot) are set to have a significant impact on individuals, businesses and the UK economy.
The Initial Report, published on 18 September 2023, is a result of consultations with over 70 stakeholders and represents the CMA’s ongoing efforts to understand the market dynamics and potential risks associated with FMs. This initiative aligns with the UK government’s broader strategy around AI regulation, which emphasises the role of existing regulators in setting domain-specific standards.
Key observations from the Report
The CMA identifies potential challenges related to competition in the FM sector. It notes that technology-driven markets often exhibit network effects and switching barriers, leading to market consolidation and reduced competition. The Report also highlights the importance of ‘upstream inputs’ like data, technical expertise, and computing power in the development of FMs. The CMA expresses concern over restricted access to these inputs, particularly for smaller organizations.
The report also examines the ripple effects of FMs on downstream markets, such as consumer-facing applications and services. It raises questions about the extent to which businesses will have a range of FM deployment options and whether consumers will be able to make informed choices among them. The report specifically identifies the potential for vertically integrated firms to side-line competitors as an area of uncertainty.
From a consumer protection standpoint, the Report focuses on issues such as misleading outputs and the need for greater consumer understanding of FMs. It also discusses the risk of ‘hallucinations’, where FMs generate outputs that appear plausible and authoritative but are inaccurate or fabricated. The CMA recommends governance measures, including rigorous testing and watermarking of FM-generated outputs.
The Initial Report outlines seven overarching principles to guide the development and deployment of FMs. The cornerstone of these principles is accountability, emphasizing that developers and deployers should be responsible for the outputs provided to consumers. The CMA plans to engage with a broad range of stakeholders to refine these principles and aims to publish an update in early 2024.
The CMA’s Initial Report serves as a comprehensive guide to the evolving regulatory landscape for AI foundation models in the UK. While the Report does not flag immediate legal concerns, it does highlight several areas of uncertainty that industry participants should closely monitor. With its robust enforcement toolkit and collaborative approach, the CMA is poised to play a significant role in shaping the future of AI regulation. For those active in the AI space, compliance with competition and consumer law should therefore remain a top priority.
Visit the government website to read the Report in full.
This article was first published in Tech+, a newsletter from our tech and innovation team designed to help readers unpack complex topics in the tech space and keep up-to-date with the changes across this rapidly evolving sector. Be the first to receive the next edition and subscribe here.