Published: August 4, 2025

Getting Ahead of the Threat: Reflections on AI and Security Risk Management

James Blake, author of The AI Security Imperative, discusses his motivations for writing his new book, highlights some of the most pressing AI-driven threats, and provides key takeaways for security professionals.

As a 20-year-old intern in Washington DC at the Nuclear Threat Initiative, I overlapped with the release of The Last Best Chance, a docudrama that explored the threats posed by unsecured nuclear material around the world. It was an attempt to raise awareness of a threat that, if realised, would be catastrophic for global security. I was fascinated.

That experience continues to shape how I think about security to this day: the hardest part of security is often getting people to take the threat seriously before disaster strikes. Stories, real or fictional, can cut through the noise, turning abstract risks into something people can understand, feel, and act on.

My new book, The AI Security Imperative, is my attempt to create the same sense of urgency about a risk I believe is fast becoming one of the defining security challenges of our time: artificial intelligence (AI).

AI-driven threats: What we’re up against

Eric Schmidt recently warned that over the next decade, AI is likely to be involved in a critical incident, defined as an event resulting in more than 10,000 deaths.

In my book, I explore how AI may shape future crises, and what organisations, NGOs included, can do to prepare. Some of the most pressing risks include:

  • AI-enhanced threat actors: We are already seeing nation states using AI to write attack code, enabling more sophisticated attacks at scale.
  • Deepfakes and voice cloning: AI-generated media can be used to impersonate leadership, spread disinformation, and defraud or undermine an organisation’s mission.
  • Attacks on critical infrastructure: From nuclear facilities to health systems, AI-driven attacks against critical infrastructure can lead to loss of public trust and have cascading effects on security and humanitarian operations.

Key takeaways for NGO security professionals

There are several practical steps that security professionals can start taking now, many of which I explore in my book. These include:

  • Strengthening cross-functional coordination. AI threats sit at the intersection of digital and physical security, which means closer collaboration between security leads, IT, comms, and leadership.
  • Establishing or contributing to emerging risk committees, which consider the impacts of these threats and ensure mitigation efforts are coordinated and forward-looking.
  • Preparing for high-impact scenarios. From biosecurity incidents to supply chain attacks, organisations should revisit their business continuity planning to factor in AI as an enabler of catastrophic impacts.
  • Investing in internal capability. Working with staff to build the skills they need to specialise in understanding and communicating these risks to internal decision-makers. This could include:
    • Technical training in AI model awareness and threat detection
    • Enhanced understanding of cyber and disinformation risks to the organisation

A final reflection

When I think back to The Last Best Chance, I remember how the docudrama made an abstract threat feel immediate and human. It translated nuclear risk into something people could understand and respond to. With AI, I believe we are at a similar turning point. The risks are growing, but difficult to explain. We see glimpses in deepfakes, cyber-attacks, and AI-driven disinformation, but many of the most serious consequences still feel out of reach. Just as nuclear security needed storytellers then, AI needs them now.

The AI Security Imperative is my attempt to do that: to communicate risk before it becomes undeniable, and to help convince leaders to act and invest pre-emptively and decisively on issues ranging from disinformation and cyber threats to biosecurity. It draws on real stories, expert conversations, and lessons from past emergencies.

But this speaks to a broader challenge in security: how we engage people in risks that are complex, fast-moving, and unfamiliar. Everyone working in security knows what it’s like to try to get others to take a threat seriously before the crisis hits. We’ve all drawn on stories, close calls, lessons learned, things we wish we’d seen sooner, to make risk feel real.

The ability to communicate risk – clearly, persuasively, and in time – is one of the most powerful tools we have. That’s how we get ahead of the next crisis.

About the author 

James Blake has worked in security, conflict risk and humanitarian risk management for over 15 years. He has served as an intelligence analyst for The Risk Advisory Group, an embedded regional security advisor at the International Monetary Fund, a partner for the Soufan Center and Truepic, and within the preparedness unit at the International Rescue Committee. He is the author of The AI Security Imperative and Crisis Readiness: How Business Leaders Can Better Prepare for Tomorrow on Issues from Climate to Cyber, exploring risks from climate change to cyber threats.

The views and opinions expressed in this article are solely those of the author. They do not necessarily represent the views or position of GISF or the author’s employers.

Image credit: Elise Racine & The Bigger Picture / Glitch Binary Abyss II / Licenced by CC-BY 4.0
