Kate Davies is Public Policy Director at Ofcom, supporting Ofcom’s work across the communications and media sectors and engaging with policy makers across the UK. Kate also oversees Ofcom’s work in the Digital Regulation Cooperation Forum. Previously, Kate was Director of Strategy and Policy at Ofcom for three years. Prior to joining Ofcom, Kate worked in a range of roles in Whitehall and the third sector.
Within Ofcom, how is AI impacting the regulatory landscape?
We’ve been using AI for years, from the moment we get up to the moment we go to bed. It’s central to the personalisation of social media, streaming services and online shopping, and to predictive text, email and spam filters.
However, while AI has been around for a long time, last year’s developments in generative AI were pivotal. ChatGPT reached 100 million users in its first two months; analysts have suggested that TikTok took nine months and Instagram two years to reach a similar number. It also has an interesting demographic split, with much younger age groups adopting it. Ofcom research from 2023 showed that four in five teenagers were using generative AI tools and services: by the end of last year, 79 per cent of 13 to 17 year olds and 40 per cent of 7 to 12 year olds had adopted the technology.
“AI can simultaneously be part of the harm and the solution.”
At Ofcom, our regulatory frameworks are typically technology-neutral. Ultimately, we are thinking about both the risks and harms to consumers of communications services and the specific outcomes we’re seeking to achieve as a result of our statutory duties, whether AI is involved or not.
We are seeing the impacts of AI right across the comms landscape. For example, we are looking at online news intermediaries: what content is being served to readers and why, how this shapes what people engage with, and ultimately what it means for media plurality. AI can simultaneously be part of the harm and the solution – it can potentially underpin solutions to scams but can also enable scammers to be more effective. The same is true in online safety – we have a duty to look at the role of algorithms in the dissemination of harmful content, and we also recognise the role of AI in a number of the technologies that can be deployed to enhance user safety, such as age verification.
Ofcom’s strategic approach to AI for 2024/25, which describes this work on AI and more, was published in March this year.
How is the regulatory model developing internationally with regards to AI?
There is a range of regulatory models developing internationally, a number of which were in development long before the explosion of generative AI last year – for example, the process around the EU AI Act started before ChatGPT gained its 100 million users in record time.
Recently the UK Government set out its response to the AI White Paper, in which it takes a pro-innovation, principles-based approach with five key areas: safety, transparency, fairness, accountability and redress.
These principles are cross-cutting and non-statutory, and recognise the need for regulators to look at the impact of AI in their own sectors. The government has also recognised the need for legislation in the longer term for the most powerful and sophisticated AI models.
In the EU, the AI Act takes a risk-based approach focused on specific use cases: it proposes bans on unacceptable uses and identifies others as high risk, regulating them accordingly. In the US, the White House published a blueprint for an AI Bill of Rights in 2022; some of this overlaps with the UK’s principles-based approach, but the US framework is to be applied voluntarily by the companies themselves.
In China there is a very different, vertical approach to regulation, with legislation for specific technologies such as deepfakes and generative AI.
So you can see that internationally we already have a range of different approaches, all at various stages in the legislative process – and all against the backdrop of the continued development of international technical standards for AI.
“We recognise how tech innovation can make us more efficient and effective as a regulator.”
What, in your opinion, are the key factors inhibiting effective collaboration among nations in regulating AI, and how can these barriers be addressed?
So I would turn this around and think about enablers rather than inhibitors. Active collaboration is really important. Here in the UK we already collaborate across regulatory boundaries: we have the Digital Regulation Cooperation Forum (DRCF), which includes Ofcom, the ICO, the CMA and the FCA. The DRCF has a whole programme of work on AI and algorithms, which has been running for a few years, because we are aware of the different harms around safety, privacy and competition linked to online technologies and to AI. The DRCF has a small core team, and we hold regular round tables with a broader set of regulators. Digital issues are central to all our remits, so many other regulators are interested, and we engage with them on specific pieces of work.
I think the DRCF was slightly ahead of the curve when we set it up, and other countries are now doing the same – the Netherlands, Australia and Ireland are all taking similar approaches, setting up specific groups to look across regulatory boundaries. However, countries don’t all have exactly the same regulators, so it can be hard to align internationally.
Next, horizon scanning, both domestically and internationally, is vital. Here at Ofcom we are undertaking futures research alongside our wider horizon scanning activities, and through the DRCF we’ve set up a programme to join up horizon scanning across regulators to make it more effective. Horizon scanning and wider research pick up on new tech trends, but they also ensure we have a good understanding of what consumers are doing and how they are using tech and engaging online. We need to keep thinking about the public’s awareness and understanding of the technology being used to serve them content.
And finally, we need to think about upskilling, as a lack of key skills in regulators could be a major inhibitor. As regulators we need the skills to effectively interrogate new technologies and understand what they’re doing. We’ve had a dedicated team at Ofcom for some time now, and we’re building on that with data scientists and engineers looking at a range of technologies – this is going to be crucial for AI.
“Because of the changing landscape and speed of development, a lot of it is about knowledge sharing and figuring out what is working.”
Can you identify any existing models of international collaboration in other fields that could serve as inspiration for developing effective mechanisms for regulating AI on a global scale?
There is a range of international models of governance and regulation that have been successful in achieving specific outcomes while recognising the diversity of national positions, across health, trade and climate change. However, which model is right here will be for governments to determine, and will depend on where there is global consensus around particular outcomes. There are already a number of initiatives in which the UK is participating, including the Council of Europe AI Treaty, the G7 Hiroshima process, the Global Partnership on AI and the UN AI Advisory Body.
At Ofcom, we are engaging extensively with many of our international counterparts because of the changing landscape and speed of development; a lot of it is about knowledge sharing and figuring out what is working, through forums and other discussions.
With a number of other regulators we have established the Global Online Safety Regulators Network, which brings together Australia, Ireland, Fiji, France, South Africa, South Korea and the UK, because all of us are thinking about online safety. The Network supports us in learning from each other as we implement online safety regimes, and the role of AI – whether it exacerbates existing harms or creates new ones – is certainly part of that. More broadly, we’re speaking to other regulators around the world about how they are using AI and how AI is manifesting in the markets they regulate, to help us better understand both the opportunities and the new risks.
In your view, what role should governmental bodies, industry stakeholders, and civil society organisations play in fostering innovation and international cooperation for AI regulation?
The UK Government’s AI White Paper response recognises that collaboration is critical, and the DRCF has a key part to play. Part of that is the role of each regulator, and of the DRCF, in supporting innovation. We need to recognise that there are huge benefits to these tech developments while remaining vigilant to ensure users are protected from harm.
At Ofcom we have a duty to support innovation, and it’s a thread throughout our work, as innovation can be so beneficial to consumers of communications services. We also recognise how tech innovation can make us more efficient and effective as a regulator – for example, recent developments in AI can support our content standards work by speeding up translations of non-English content.
The DRCF is running a pilot, establishing the AI and Digital Hub, to see how it can work with innovators and bring the regulators together so that it’s easier for innovators to understand how regulation applies.
From your perspective, what are the essential components of a successful international framework for regulating AI, and how might nations work together to implement such a framework?
I would come back to thinking about the specific outcomes any framework is trying to achieve. However, this is a question for governments in the first instance and ultimately, I suspect, it’s simply too early to tell. We are continuing to see rapid developments in the technology – such as the generation of video content rather than just text, or the option to download and use a model on your own device – changes in how people and businesses are using these technologies, and continued developments in the issues governments and regulators are seeking to tackle.
I am excited about what comes next in terms of the potential benefits to people and the new innovative and collaborative approaches we can take to regulation to ensure new harms are effectively mitigated.