Artificial intelligence is back on the agenda, but this time it’s serious. Alexander Amato-Cravero spoke to Deborah Ritchie about what businesses can expect when it comes to new regulation – as well as the rules they should already be heeding today
Artificial intelligence is the hot topic once more, but this time the conversation has shifted up a gear. Why is this happening now?
That’s exactly right. We’ve seen AI move up the board agenda a couple of times in the past, notably around advances in analytical AI, but there are good reasons why it is shifting up a gear again now.
Analytical AI has been around for a long time and is able to do a broad range of tasks – descriptive, diagnostic, prescriptive and predictive. This most recent evolution, however, is driven by generative AI – computer systems that can generate content like written prose, code, pictures and videos with sometimes remarkable results.
This new category of AI is already making us rethink how we go about tasks that require humans to create original works, whether that is marketing professionals producing creative content or – dare I say – lawyers producing written advice. It’s the opportunity and risk for those affected industries and roles that is driving much of the current conversation and excitement.
In terms of why this is happening now, the short answer is that we are seeing better models, more data and more compute. There’s an excellent article on this from venture capital firm Sequoia, which describes the recent history of AI in ‘waves’.
According to Sequoia, small models that excelled at analytical tasks reigned supreme prior to 2015. However, in the period to 2020, new neural network architectures started to emerge and compute increased vastly – by several orders of magnitude – which led to AI systems passing some human performance benchmarks. But models were still difficult and expensive to run.
That changed between 2020 and 2022. The cost of compute fell, new techniques arose and became more widely available, and tools were more frequently made open source by developers. This opened the floodgates for exploration and application development, resulting in the creation of tools like OpenAI’s ChatGPT and DALL-E 2 – and many others we now see in the market.
Of course, all the hype and excitement we see around potential use cases for this technology will settle down over time. It’s typical of every new technology. It will be interesting to see, though, whether generative AI will be as widely transformative as so many people are expecting it to be.
Where is the greatest progress being made in the regulation and governance of AI?
There are two players globally that are worth watching: the EU and China.
The European Commission made it known early that it wanted to be a global leader on AI regulation, as it did with regulating data under the General Data Protection Regulation. To achieve this, policymakers are now working on implementing three significant pieces of legislation.
The first and perhaps most widely known is the EU AI Act. Rather than regulating AI technology itself, the Act proposes a risk-based approach that governs uses of AI. In effect, it provides a sliding scale of rules that apply depending on the perceived level of risk presented by the function carried out by the AI system.
The Act is not yet in final form, having recently moved to trilogue – tripartite meetings on legislative proposals between representatives of the EU Parliament, Council and Commission. However, as it stands, AI that is deemed unacceptable, for example where it exploits individual vulnerabilities, will be prohibited. High-risk AI functions will be subject to further substantive requirements around data governance, record keeping, provision of information to users and so on, while limited-risk uses will require compliance with lesser obligations. Other AI systems would be classed as minimal risk and would not be subject to additional requirements – though, of course, developers, operators and users of AI systems will still need to comply with all other existing law and regulation, as usual.
In parallel, the European Commission has also proposed an AI Liability Directive and revised Product Liability Directive. Both are worth noting, but the former is particularly interesting. It establishes procedural facilitations for claims under fault-based civil liability regimes that apply under Member State national law. This includes creating a rebuttable ‘presumption of causality’ to ease the burden of proof for victims to establish damage caused by an AI system, as well as giving national courts the power to order disclosure of evidence about high-risk systems suspected of having caused damage.
Together, these throw a spotlight on responsible AI business practices and in particular the need for organisations to understand exactly how their AI systems are working, to be able to rebut claims and comply with disclosure obligations. Given the increasing complexity of AI systems, particularly in the area of generative AI, which leans heavily on deep learning and opaque neural networks, businesses will need to ensure ‘compliance by design’ – factoring law, regulation and ethics into their plans from the earliest stages of development or use and revisiting them regularly.
China has taken a different approach. Unlike the EU, it is introducing specific regulation on AI technology itself. For example, as well as introducing regulations on recommendation algorithms and deep synthesis technologies (more commonly known as ‘deepfakes’), in April this year it opened a consultation on a draft law governing generative AI services.
The law sets out a series of obligations on those using generative AI to provide services. These include algorithm filing obligations, content oversight and control, intellectual property and fair competition, data privacy, ethics and more. It also seems to borrow from the concept of extra-territoriality seen in the European Union’s GDPR, meaning the law will likely apply to businesses outside of China that provide services to users in China.
And where does the UK stand?
The UK is taking a different approach again. The UK Government published its strategy paper in September 2021, which indicated a plan to adopt a pro-business and pragmatic approach to AI governance. Rather than establishing a new legislative framework for AI, or creating a new AI-specific regulator, the Government’s white paper from earlier this year set out a sector-led, principles-based approach.
This listed five non-statutory principles which the Government expects existing sector regulators to implement within their own remits, with a degree of central monitoring, evaluation and guidance to ensure a consistent and coherent approach. The principles are articulated broadly, giving regulators a lot of scope for interpretation: though, in essence, AI systems must be safe, secure and robust; appropriately transparent and explainable; fair; subject to clear governance measures and lines of accountability; and with scope for contestability and redress.
Precisely how each regulator interprets and applies these principles remains to be seen. The Competition and Markets Authority, for example, has started by launching an initial review of AI foundation models to help it understand the market for those models, how their use could evolve, and the opportunities and risks they present. The output, it is hoped, will be greater clarity over the competition and consumer protection principles that would best guide the development of these markets going forward.
Most recently, the UK Science, Innovation and Technology Select Committee – which conducted an inquiry into the impact of AI on several sectors – published The Governance of Artificial Intelligence: Interim Report on 31st August. In it, the committee identified 12 challenges posed by AI and expressed a need for regulation now, as well as for the establishment of an international forum on AI. Essentially, it is concerned that the UK may fall behind, given the speed at which AI technology is advancing and the EU’s moves to regulate it, and it encourages the Government to go direct to legislation on AI regulation.
This is an interesting development, particularly given the call to depart from the approach set out in the Government’s recent white paper. It remains to be seen whether or how the Government takes these recommendations on board, particularly in light of the international summit on AI safety that is expected to be held in the UK at the beginning of November.
Which organisations will face the most stringent rules?
If the Government continues with the approach in its white paper, then businesses operating in regulated industries may find themselves having to comply with obligations set by their industry regulator in addition to those that would apply more generally.
This issue is exacerbated by the international nature of modern business. As I noted earlier, different countries are approaching AI regulation in different ways. The risk is that, without close international collaboration on rules, businesses will be left to grapple not only with disjointed rules and guidance between sector regulators domestically, but also with increasingly fragmented regulation across multiple jurisdictions. That could make compliance and risk mitigation much more challenging.
This article was published in the Q3 2023 issue of CIR Magazine.