Gaia Marcus, director at the Ada Lovelace Institute, leads a team of researchers investigating one of the thorniest questions in artificial intelligence: power.
There is an unprecedented concentration of power in the hands of a few large AI companies as economies and societies are transformed by the technology. Marcus is on a mission to ensure this transition is equitable. Her team studies the socio-technical implications of AI technologies, and tries to provide data and evidence to support meaningful conversations about how to build and regulate AI systems.
In this conversation with the Financial Times’ AI correspondent Melissa Heikkilä, she explains why we urgently need to think about the kind of society we want to build in the age of AI.
Melissa Heikkilä: How did you get into this field?
Gaia Marcus: I possibly chose the wrong horse out of the gate. I ended up doing history because it was something that I was good at, which I think is often what happens when people don’t quite know what they’re doing next: they just go with where they have the strongest grades.
I saw that I increasingly needed numbers to answer the questions that I had of the world. And so, I was a social network analyst for the RSA [the London-based Royal Society for Arts] for almost five years. And I taught myself social network analysis at the end of my human rights master’s, basically, because there was a job that I wanted to do, and I didn’t have the skill for it.
My mum’s a translator, who was self-taught, so I’ve always been taught that you teach yourself the skills for the thing that you need. I, kind of by mistake, ended up being an analyst for almost five years, and that took me more towards ‘data for good’.
I did maybe another five years in that digital ‘data for good’ space in the UK charity sector, running an R&D team for Centrepoint, the homeless charity, and moving more into data strategy. I was responsible for Parkinson’s UK’s data strategy when I saw that the UK government was hiring for head of national data strategy. I did that for a couple of years, and was around government for six-and-a-half years in total.
I’ve always ended up in areas where there’s a social justice component because I think that’s one of my main motivators.
MH: There was a time in AI where we were thinking about societal impacts a lot. And now I feel like we’ve taken a few steps, or maybe more than a few steps, back, and we’re in this “let’s build, let’s go, go, go” phase. How would you describe this moment in AI?
GM: I think it’s a really fragmented moment. I think maybe tech feels like it’s taken a step back from responsible AI. Well, the hyperscalers feel like they might have taken a step back from, say, ethical use of AI, or responsible use of AI. I think academia that focuses on AI is as focused on social impact as it ever was.
It does feel to me that, increasingly, people are having different conversations. The role that Ada can play at this moment is that as an organisation, we’re a bridge. We seek to look at different ways of understanding the same problems, different types of intelligences, different types of expertise.
You see a lot of hype, of hope, of fear, and I think trying to not fall into any of those cycles makes us quite unique.
MH: What we’re seeing in the US is that certain elements of responsibility, or safety, are labelled as ‘woke’. Are you afraid of that stuff landing in Europe and undermining your work?
GM: The [Paris] AI Action Summit was quite a pivotal moment in my thinking, in that it showed you that we were at this crossroads. And there’s one path, which is a path of like-minded countries working together and really seeking to ensure that they have an approach to AI and technology, which is aligned with their public’s expectations, in which they have the levers to manage the incentives of the companies operating in their borders.
And then you’ve got another path that is really about national interest, about often putting corporate interests ahead of people. And I think as humans, we’re very bad at both overestimating how much change is going to happen in the medium term, and then not really thinking about how much change has actually just happened in the short term. We’re really in a calibration phase. And fundamentally, I think businesses and countries and governments should really always be asking themselves what futures are being built with these technologies, and are these the futures that our populations want to live in.
MH: You’ve done a lot of research on how the public sector, regulators and the public think about AI. Can you talk a little bit about any changes or shifts you’re seeing?
GM: In March, we launched the second round of a survey that we have done with the Alan Turing Institute, which looks to understand the public’s understanding of, exposure to and expectations of AI, linked to really specific use cases, which I think is really important, and both their hopes for the technologies and the fears they have.
At a moment where national governments seem to be stepping back from regulation, and where the international conversation seems to be one with a deregulatory, or at least simplifying, bent, in the UK, at least, we’re seeing an increase in people saying that laws and regulations would increase their comfort with AI.
And so, last time we ran the nationally representative survey, 62 per cent of the UK public said that laws and regulation help them feel comfortable. It’s now 72 per cent. That’s quite a significant change in two years.
And interestingly, in a space, for example, where post-deployment powers, the power to intervene once a product has been released to market, are not getting that much traction, 88 per cent of people believe it’s important that governments or regulators have the power to stop serious harm to the public if it starts occurring.

I think I do worry about these almost two steps of removal we have with governments. On the one hand, those that are seeking to evaluate or understand AI capabilities are often slightly out of step with the science because everything is moving so quickly.
And then you have another step of removal, where public comfort rests on an expectation of regulation and governance, an expectation of redress if things go wrong, an expectation of explainability, and a general feeling that things like explainability are more important than perfect accuracy, and it feels that governments are then another step removed from their populations in that. Or at least in the UK, where we have data for it.
MH: What advice would you give the government? There’s this massive anxiety that Europe is falling behind, and the governments really want to boost investment and deregulate. Is that the right approach for Europe? What would you rather see from governments?
GM: It’s really important for governments to consider where they think their competitive advantage is around AI. Countries like the UK, and potentially much of Europe as well, are more likely to be active at the deployment layer of AI than at the frontier layer.
A lot of the race dynamics and conversation are focused on the frontier layer, but actually, where AI tools will have a real impact on people is at the deployment layer, and that is where the science and the theory hit messy human realities.
One big lesson we very much took from the AI Opportunities Plan: it is great that the UK wants to be in the driving seat, but the question for me is, the driving seat of what? And something that we maybe didn’t see is a hard-nosed analysis of what the specific risks and opportunities are for the UK. Instead of having 50 recommendations, what are the key things for the UK to advance?
This point of really thinking about AI as being socio-technical is really important, because I think there has to be a distinction between what a model or a potential tool or application does in the lab, and then what it does when it comes into contact with human realities.
We’d be really keen for governments to do more on really understanding what is happening: how models or products or tools are actually performing when they come into contact with people. And really ensuring that the conversations around AI are predicated on evidence, and the right kind of evidence, instead of theoretical claims.
MH: This year, agents are a big thing. Everyone’s very excited about that, and Europe definitely sees this as an opportunity for itself. How should we be thinking about this? Is this really the AI tool that was promised? Or are there maybe, perhaps, some risks that people aren’t really thinking about, but should?
GM: One of the first things is that you’re often talking about different things. I think it’s really important that we really drive specificity of what we mean when we’re talking about AI agents.
It’s definitely true that there are systems that are designed to engage in fluid, natural-language conversations with users, and they are designed to play particular roles in guiding and taking action for users. I think that’s something that you’re seeing in the ecosystem. We’ve done some recent analysis on what we’re seeing so far, and we have disaggregated AI assistants into at least three key forms, and I’m sure there’ll be more.
One is executive, so things like OpenAI’s Operator, which actually takes action directly on the world on a user’s behalf, and so that’s quite low autonomy. There are agents or assistants that are more like advisers, so these are systems that will guide you through, maybe, a topic that you’re not that familiar with, or will help you understand what steps you need to take to accomplish a particular goal.
There’s a legal instruction bot called DoNotPay, and people have been trying to do this for a very long time. I remember when I was working at Centrepoint, there were chatbots that weren’t in any way agentic, but they were aiming to help you understand what to do with a parking fine or give you some very basic legal advice.
Then we’ve got these interlocutors, which is a really interesting area we should think more about: AI assistants that converse, or have a dialogue, with users, and potentially aim to bring about a particular change in a user’s mental state. These could be like mental health apps.
There’s some really interesting questions about where it’s appropriate for those AI assistants to be used, and where it isn’t. And they might become one of the primary interfaces in which people engage with AI, especially with Generative AI. They’re very personalised and personable. They’re well-suited to carrying out these complex open-ended tasks, so you might see that this is actually where the general public start interfacing with AI a lot more.
And you might see that they’re used by the general public more and more to carry out some of the tasks associated with early AI assistants. You might see that this becomes a way in which a lot of decisions and tasks are then delegated from an average user to AI. And there is a potential that these tools could have considerable impacts on people’s mental or emotional states. And therefore, there’s the potential for some really profound implications.
That brings forth some of the more long-standing regulatory or legal questions around AI safety, bias, liability, which we discussed, and privacy. When you’re looking at a market that’s quite concentrated, the more the AI assistants are integrated into people’s lives, the more you raise questions about competition and who’s driving the market.
MH: What sort of implications for people’s lives?
GM: The rise of AI companionship is something we should be looking at more, as a society. There have been some pretty stark early use cases from the [United] States, involving children, but there is that question of what it means for people [more broadly]. There were recent reports of people in the Bay Area using [Anthropic’s AI chatbot] Claude as almost like a coach, despite knowing that it isn’t.
But there are just things that we don’t know yet, like what it means for more people to have discussions with, or use, tooling that doesn’t have any intelligence, in the real sense of the word, to guide their decisions. That’s quite an interesting question.
The liability is quite interesting, especially if you start having ecosystems of agents. If your agent interacts with my agent and something goes wrong, whose fault is it? That becomes quite an interesting liability question.
But also, there’s a question about the power of the companies that are actually developing these tools, if these tools are then used by an increasing share of the population. The survey that came out in March showed that about 40 per cent of the UK population have used LLMs [large language models].

What is quite interesting there is that there’s quite a difference between habitual users and people who have maybe just played around with the tool. For different use cases, between 3 and 10 per cent of the population would classify themselves as a habitual user of LLMs. But that, to me, is really interesting, because a lot of the people who are opinion formers around LLMs, or who are driving policy responses, or who are in the companies actually building these tools, are going to be in that 3 to 10 per cent.
There’s that really interesting question of what split you’re then seeing across the population, where most people who are opinion formers in this space probably use LLMs quite habitually, but they represent quite a small proportion of the overall population.
But even now, before AI assistants have become as mainstream a thing as people think they might become, we’ve got some data that suggests that 7 per cent of the population has used a mental health chatbot.
MH: Oh, interesting. That’s more than I expected.
GM: It does raise questions around where tools that are marketed or understood as being general purpose go into uses that are regulated. Providing mental health advice is regulated. And so, what does it mean when a tool that, in its very essence, doesn’t have any actual human understanding, or any actual understanding of what is and isn’t the truth, is used in that way?
What does it mean when you start seeing the use of these tools in increasingly sensitive areas?
MH: As a citizen, how would you approach AI agents, and use these tools?
GM: Firstly, it’s really important that people use the democratic levers that they have available, to make sure that their representatives know what their expectations are in this space. There’s that general sense of obviously voting at the ballot box, but there’s also speaking to your politicians. Our study suggests . . . that 50 per cent of people don’t feel reflected in the decisions that are made about AI governance.
But also, I would say I don’t think it’s necessarily [the] individual’s responsibility. I don’t think we should be in a situation where each individual is having to upskill themselves just to operate in the world.
There’s a conversation, maybe, as a parent, about what you need to know so that you know what you’re comfortable with your children interacting and not interacting with. It is fundamentally the state’s responsibility to ensure that we have the right safeguards and governance, and that people aren’t being unnecessarily put in the way of harm.
MH: Do you think the UK government is doing that to a sufficient degree?
GM: This government committed . . . to regulating for the most advanced models, in the understanding that there are certain risks introduced at the model layer that are very hard to mitigate at the deployment or application layer, which is where most of the public will interact with them. That legislation is still forthcoming, so we are interested to understand what the plan is there.
We’d be really interested to know what the government’s plans are in terms of protecting people. There’s also the Data (Use and Access) Bill going through parliament at the moment. We have been giving advice around some of the provisions on automated decision-making that we don’t think align with what the public expects.
The public expects to have the right to redress from automated decisions that are made, and we’re seeing a risk that those protections are going to be diluted, so that is out of step with what the public expects.
MH: What questions should we be asking ourselves about AI?
GM: Amid a lot of the hope that’s being poured into these technologies, we run the risk of losing sight of the fundamental fact that the role of technology should always be to help people live in worlds that they want to live in. Something that we’ll be focusing on in our new strategy is actually unpacking what public interest in AI even means to various members of the public and different parts of, say, the workforce.
In the past we’ve seen some general-purpose technologies that have really fundamentally shaped how human society operates, and some of that has been fantastic.
Most of my family is in Italy, and I can call them, and video call them, and fly, and these are all things that wouldn’t be possible without previous generations’ general-purpose technologies.
But these technologies will always also come with risks and harms. And the thing that people should be thinking about is: what futures are being created through these technologies, and are these futures that you want?
This transcript has been edited for brevity and clarity.