Baroness Joanna Shields OBE, Founder, WeProtect Global Alliance; CEO and Founder, Precognition

Safeguarding Children in the Age of AI: A Call to Proactive Action

1. Introduction

This paper continues the critical conversation on Artificial Intelligence (AI) and its impact on children. It draws from my experience founding WeProtect Global Alliance in 2014 with the goal of uniting governments, law enforcement, and the tech industry to combat online child sexual abuse. Although progress has been made, AI has dramatically reshaped the risks children face online, introducing new, scalable, and harder-to-detect forms of abuse.

My work in the tech industry spans four decades. I have seen the rise of the internet, the shift to mobile, and the emergence of social media – but nothing has matched the speed and scale of AI. Reid Hoffman describes this moment as a Cognitive Industrial Revolution[1] – changing not only what we create but how we think, dream, and invent. AI is driving creativity and enhancing human agency in ways we have never experienced before. But with this extraordinary potential comes profound risk – especially for children.

2. The New Threat Landscape for Children

AI is already being weaponised to create synthetic child sexual abuse material (CSAM) at scale, overwhelming detection systems and emboldening offenders. AI-driven chatbots are grooming and manipulating children in real time, while algorithmic incentives continue to drive harmful content to young audiences, capitalising on outrage and addiction as profitable engagement tools. These harms are not theoretical but are occurring now and with increasing frequency.

To highlight the evolving threat landscape, I commissioned and produced the film Protect Us[2] to expose the devastating ways in which AI is already being weaponised against children. The film is based on real-life cases assembled to reflect the patterns of abuse enabled by AI technologies. A domain-specific large language model was employed to analyse these patterns and translate them into a compelling narrative. Although the film is only fifteen minutes long, it delivers a lasting impact. Protect Us is available through WeProtect.org and YouTube and serves as a vital tool for raising awareness and driving collective action.

3. The Political Context

The political context we are operating in is just as complex as the technological one, and it would be remiss not to mention the elephant in the room. Political division and increasing geopolitical instability have reshaped the Overton window. For those unfamiliar with the term, it describes the range of policy options considered politically acceptable to policymakers and regulators at any given moment.

For the past decade, efforts to protect children online have largely focused on platform responsibility and content moderation. However, this post-harm punitive approach – relying on enforcement after damage has already occurred – was never designed to keep pace with the speed, scale and sophistication of today’s digital landscape. As we have seen, AI-generated content is amplifying these challenges, and current systems are struggling to respond effectively. At the same time, enforcing AI and digital regulation is increasingly seen as a hostile act. Vice President JD Vance’s speech at the AI Action Summit in Paris made this clear – he stated that attempts to regulate AI are viewed as an existential threat to American dominance in technology. President Trump has reinforced this stance, making it clear that any effort to regulate AI from outside the U.S. will be seen as a direct attack on American interests.

How do we protect children when the political appetite for regulation is diminishing – and AI development is accelerating?

4. Shifting to a Proactive Model

For too long, we have operated under a post-harm enforcement model – where action is only taken after the damage has been done. That model is no longer viable in the AI era. The scale, speed, and complexity of AI-driven abuse have outstripped our ability to respond after the fact.

Once harmful content is created and shared, it can spread globally in seconds, and removing it entirely is almost impossible. Law enforcement is overwhelmed, and the platforms responsible for hosting this content often lack the technical or legal capacity to respond quickly enough.

We have reached an inflection point. Content moderation and takedown orders are not enough – by the time action is taken, the harm is already done. The abuse has already occurred, the victims have already suffered, and the psychological scars have already formed.

We need to shift from a reactive model to a proactive one – embedding safety into AI systems from the ground up. Platforms must be incentivised to prioritise safety over engagement and addiction. Developers must build protections into AI systems at the design stage, ensuring that models are hardwired to prevent harm rather than amplify it. Governments need to step up – not by imposing outdated regulatory frameworks, but by setting clear safety standards that balance innovation with accountability.
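
To make "safety by design" concrete, the sketch below shows one pattern it implies: guardrail checks that run both before and after generation, so that refusal is the default path rather than a post-hoc clean-up. This is a minimal, hypothetical illustration: the names used (screen_prompt, screen_output, generate_safely) are my own stand-ins rather than any platform's actual implementation, and a real system would rely on trained classifiers, hash-matching against known abuse material, and human review instead of the placeholder keyword check used here.

```python
# Minimal sketch of a "safety by design" guardrail layer around a generative model.
# Hypothetical illustration only: a production system would use trained
# classifiers, hash-matching against known abuse imagery, and human review,
# not the placeholder keyword checks below.

from dataclasses import dataclass
from typing import Callable


@dataclass
class SafetyVerdict:
    allowed: bool
    reason: str = ""


def screen_prompt(prompt: str) -> SafetyVerdict:
    """Check the request before any generation happens."""
    blocked_terms = {"example_blocked_term"}  # stand-in for a real classifier
    if any(term in prompt.lower() for term in blocked_terms):
        return SafetyVerdict(False, "request matched harmful-content policy")
    return SafetyVerdict(True)


def screen_output(text: str) -> SafetyVerdict:
    """Check the generated text before it is released to the user."""
    if "example_blocked_term" in text.lower():
        return SafetyVerdict(False, "output matched harmful-content policy")
    return SafetyVerdict(True)


def generate_safely(prompt: str, model: Callable[[str], str]) -> str:
    """Refuse-by-default pipeline: checks run before and after generation."""
    verdict = screen_prompt(prompt)
    if not verdict.allowed:
        return f"Request refused: {verdict.reason}"
    output = model(prompt)
    verdict = screen_output(output)
    if not verdict.allowed:
        return f"Response withheld: {verdict.reason}"
    return output


if __name__ == "__main__":
    echo_model = lambda p: f"Generated response to: {p}"
    print(generate_safely("a safe request", echo_model))
```

The design point is structural: nothing the model produces reaches the user until it has passed the same policy screen applied to the request itself, which is what distinguishes protection built in at the design stage from moderation bolted on after harm has occurred.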

5. The Incentives Behind AI

There is another pressure at work beneath the surface – one that worries me deeply. Most AI models are currently funded through subscriptions. However, the costs of developing and running these models at scale are enormous – these systems are computationally intensive and expensive to maintain. To offset these costs, platforms are exploring ad-driven models – turning to the same engagement-based incentives that fuel social media.

If AI responses become shaped by the highest bidder rather than the best answer, the risks will multiply exponentially. We cannot afford AI systems that are designed to manipulate engagement, prioritising profit over truth. Algorithmic manipulation would shift AI from a tool for human empowerment to a tool for exploitation.

Open-source AI is also compounding the challenge. Unlike closed systems, open-source models are freely distributed and can be modified and weaponised by bad actors. Large language models and generative AI tools are already being adapted to create synthetic CSAM, automate grooming, and generate deepfake content at a speed that outpaces existing safeguards. Initiatives like ROOST (Robust Open Online Safety Tools)[3] are starting to address these vulnerabilities – but broader action is needed to protect children at scale.

6. Conclusion

This is not just about protecting children – it is about shifting the incentives that drive the entire AI ecosystem. AI systems designed for engagement are dangerous. AI systems designed for safety and empowerment are transformative. We need to get this balance right – and we need to act now.

These issues have not been adequately covered during this conference – and they are too important to overlook. Conversations about AI and child protection often focus on individual cases of abuse or platform responsibility, but the deeper challenge lies in the incentives and infrastructure behind these technologies.

AI is evolving faster than regulation, faster than oversight, and faster than our ability to detect and prevent harm. That is why we must confront these challenges head-on.

Let us recognise that this extraordinary moment – the unprecedented pace and potential of AI, combined with the complexity of the political and technological environment – demands new approaches and solutions.

The window has changed – but new doors are open. We must walk through them now, with purpose and resolve.


[1] Hoffman, R. (2024). Superagency: What Could Possibly Go Right with Our AI Future.

[2] WeProtect Global Alliance. (2024). Protect Us [Film]. Available at: https://www.youtube.com/watch?v=18yPZFpwLFY and https://www.weprotect.org/thematic/artificial-intelligence-and-gen-ai/

[3] ROOST Initiative. (2024). Addressing Open-Source AI Vulnerabilities.