Who’s responsible for AI? Verity Harding on AI policy and ethics

by Jenn Marshall

Who’s responsible for regulating technological change in a democracy?

Verity Harding, a globally recognized expert in AI technology and public policy, and one of Time Magazine’s 100 most influential people in AI, thinks anyone – with any level of technological knowledge – can have a valid opinion about AI. After all, it may not be technological knowledge that helps us make the best decisions around how we want to use AI as a society.

Harding, who is currently the director of the AI and Geopolitics Project at the Bennett Institute for Public Policy and author of the book AI Needs You: How We Can Change AI’s Future and Save Our Own, talked with Michael “Roo” Fey, Head of User Lifecycle & Growth at 1Password, on the Random but Memorable podcast about technology policy and ethics.

To learn more, read the interview highlights below or listen to the full podcast episode.


Editor’s note: This interview has been lightly edited for clarity and brevity. The views and opinions expressed by the interviewee don’t represent the opinions of 1Password.

Michael Fey: Tell me about the book.

Verity Harding: I wanted to make sure I actually added something new to the AI debate, because obviously it can get a bit old and tired sometimes. People have given me lovely feedback that what I have in there is really new.

MF: Actually, before we dig too much into the book, can you give a little background on yourself and what led you to writing something like this?

VH: I had an odd journey to AI. I studied history at university and the earliest part of my career was spent in politics. I was the political advisor to the then Deputy Prime Minister, Nick Clegg, who’s now president of global affairs at Meta.

It was really my experiences in politics that ended up leading me to technology. I worked quite heavily on a piece of legislation in the UK that was national security related. It was about updating the powers of the security services in the UK for the digital age. Obviously, that’s an extremely controversial and difficult subject, and it was very fraught in the UK with lots of different opinions on whether it was too much overreach from the government.

What it made me realize was that there was this huge deficit in terms of knowledge about technology between the technologists and the political class who are responsible for regulating this technology for society.

“There was this huge deficit in knowledge about technology between the technologists and the political class.”

I felt that this gap was not good and that there needed to be more people who could speak both languages – the political language and the technological language. Because of course, technology is extremely political. I eventually ended up joining Google and I was head of security policy in Europe, the Middle East, and Africa (EMEA) and also head of UK and Ireland policy, which was a fantastic experience.

Funnily enough, in the time between me leaving government and joining Google, the Edward Snowden revelations happened. That subject, which was already fraught, became even more fraught. We had to do a lot of work at Google, educating and explaining and helping politicians learn more about what digital privacy, security, human rights, and civil liberties on the internet really meant.

While I was at Google, the company acquired DeepMind, which is a British AI lab. I got to know the CEO and founder, Demis Hassabis, who’s a really visionary and inspirational scientist, and I learned more about AI from him.

It was clear to me that all of the subjects that I cared most about when it came to technology policy were going to be made immeasurably better or worse by AI, depending on how we managed to navigate it. I wanted to be part of making sure that it went down the better route and not the worst route.

“It was clear to me that all of the subjects that I cared most about when it came to technology policy were going to be made immeasurably better or worse by AI.”

I moved to DeepMind and was one of the really early employees there. I co-founded all of DeepMind’s policy and ethics and social science research teams, as well as things like the Partnership on AI, which is an independent, multi-stakeholder organization of tech companies and different businesses and civil society groups and academics looking at the societal impact of AI.

All of this led me eventually to writing this book because I felt that I’d had this really privileged, up-close view and perspective on AI. I wanted to be able to share that more broadly. This book is really everything I’ve learned from all of that experience.

MF: You’ve been part of the AI conversation for a long time. At what point did you start writing this book? Did the launch and popularity of ChatGPT change the trajectory of your book?

VH: It’s true, I’ve been involved in it for a really long time.

What’s so funny is that when I moved from Google to DeepMind to work on AI policy, I was thinking, well, this is going to be a much quieter life. Because at Google we were right in the thick of many news cycles – as I said, the Snowden revelations were causing a huge amount of press coverage.

I also covered other issues at Google, like online radicalization and hate speech, which were also getting a huge amount of attention. Going straight from politics into dealing with media stories and being involved in the constant 24/7 news cycle – it’s quite exhausting.

Nobody was talking about AI at all, so I thought, well, this will be a lot quieter and I’ll have time to do the deep thinking and not be fire fighting every day.

Demis offered me the job when he was in the car on the way to fly to South Korea. That’s where AlphaGo happened, which created a huge amount of interest and everything really blew up straight away, so I didn’t ever get that quiet life.


When I started writing the book, I would say that the media coverage and attention around AI had started to dip a little. It was a surprise to all of us in AI that ChatGPT had the effect that it did. We all knew about these capabilities already, but something just connected and hit, and you never can quite tell when that will happen. It brought AI crashing into the limelight.

“ChatGPT brought AI crashing into the limelight.”

I had either finished or was very close to finishing the book when that happened. But because I already knew about generative AI, I had written about it quite a lot in the book already. It was something that I was concerned that politicians – and society more broadly – weren’t grappling with.

Before ChatGPT we had already been warning about the possibility for deepfakes to mess with our democracy and undermine truth. We hadn’t seen much response to that, really. So, my book already covered all of those kinds of issues.

I didn’t have to change it much. I did decide to alter it a bit and include more on ChatGPT specifically, just because I think that made it easier to get my argument across. Before, I had to explain from scratch what generative AI is.

It was very helpful that ChatGPT enabled me to have this shorthand that made me pretty sure that anybody who picked up the book would know straightaway what that was.

MF: What are the most pressing concerns or misconceptions people have around AI?

VH: There’s no right and wrong answer about what people should or shouldn’t be concerned about when it comes to AI.

That’s what I say in the book: that everybody will have an opinion and everyone has a right to an opinion. Their opinion is no less or more valid based on the depth of their technological knowledge. And indeed, sometimes technological knowledge won’t help make a decision about whether we’re happy with AI being used in certain aspects of society or not.

I think one common misconception is that, if I don’t understand the deep, detailed technological side of AI, then I don’t have a right to have an opinion. I think there’s quite a lot of gatekeeping that happens in AI and it encourages people not to get involved.

“There’s quite a lot of gatekeeping that happens in AI and it encourages people not to get involved.”

That’s partly why I wrote the book – to say, in a democracy, you do get to have a say and you can educate yourself to an extent, but you don’t need to be the world’s leading research scientist to be able to have that say.

I also personally find the conversations around AI causing human extinction very unhelpful. I don’t think that that’s an appropriate way to think about this new technology. I think that it tends to obscure some of the more pressing concerns, and it tends to obscure some of the more exciting potential, too.

We’ve ended up in quite an odd position with AI. Back when I started at DeepMind, I was very keen that we would shift the conversation away from AI as Terminator, AI as Skynet, and towards AI as a tool. The things to be worried about should be more realistic: things like bias, accountability, security, and safety. And I think probably the latest hype cycle has not contributed to calm common sense when we’re talking about it.

MF: Is one of the driving factors around the release of this book trying to bring a more stable, measured approach to the conversation?

VH: That wasn’t the motivation. The motivation was really that I felt I had something to contribute, something new to say. The bulk of the book is these examples of transformative technologies of the past.

I think coming from both a history training and a political background, I was very conscious that the tech industry is not known for its humility and likes to think everything it’s doing is the first time anyone’s ever done anything. But while AI is new, invention is not new, progress is not new, and innovation is not new. I really had this hunch that there would be things we could learn from the past to help guide us with the future of AI.

I feel very strongly that it’s an extremely important and exciting technology. I don’t mean to diminish its importance by saying that I don’t think that it will cause human extinction, but that’s not to lessen the need to pay real attention to its power. I felt that we weren’t looking enough to the past and what we could learn.

I suppose the other motivation was, I really believe in democracy. It’s not necessarily always the most fashionable thing, but I think policymaking is hard graft. It’s difficult and it can be a slog and it can be boring, certainly not the sexiest thing to talk about, but it’s really important.

“We’ve managed great technological change before and I’m really confident that we can do it again.”

Someone who read the book said to me just yesterday that they really got a sense from it that AI was important, but they also got a sense that humans were pretty great too. I liked that feedback because hopefully that does come across.

I feel that AI is important and it’s great, but we have done this before. We’ve managed great technological change before and I’m really confident that we can do it again.

Subscribe to Random but Memorable

Listen to the latest news, tips and advice to level up your security game, as well as guest interviews with leaders from the security community.
Subscribe to our podcast

Jenn Marshall - Contributing Writer
