Imane Lakbachi [1], Digital Transformations for Health Lab [2]

Youth Voices in AI Governance: Building a Safe Digital Future for Children

This youth statement was prepared in advance of the high-level meeting, Risks and Opportunities of AI for Children: A Common Commitment for Safeguarding Children,[3] co-organized by the Pontifical Academy of Sciences,[4] the World Childhood Foundation,[5] and the Institute of Anthropology of the Pontifical Gregorian University.[6]

Introduction

Artificial Intelligence (AI) is changing many aspects of our lives, especially how children and young people interact with the digital world. AI has the potential to bring many positive changes – it can improve access to education, help make online spaces safer, and support the overall well-being of children and youth. But at the same time, it can also present serious risks. Issues like data privacy, biased algorithms, and online safety threats can negatively affect the rights, dignity, and future of young people.

Children (under 18 years) and young people (under 30 years) are not just passive users of technology – they are digital natives whose daily lives, education, health, and social interactions are increasingly influenced by AI-driven systems. Yet the policies shaping these technologies are often developed without their input. This statement aims to highlight the main risks and challenges AI poses to children and young people, while also focusing on the unique opportunities AI can bring to improve their lives. It is a call to action for all key actors within the digital and AI space to prioritize the best interests of children and youth in order to create an inclusive digital future.

Key Risks & Challenges

While AI can offer promising avenues for enhancing education, as well as health and well-being, it can present significant risks to children’s and young people’s safety. It is important for stakeholders to understand the risks that AI can pose in order to effectively build a secure and enriching digital environment for the younger generation. Some of the concerns associated with AI in children’s and young people’s digital lives include:

  • Online Safety Threats: AI can be used to create harmful content. One example is AI-generated child sexual abuse material. According to the Internet Watch Foundation (IWF),[7] there were 245 reports of AI-generated child sexual abuse imagery in 2024, compared with only 51 in 2023 – a 380% rise. Of these, 193 reports involved imagery so realistic that it had to be treated the same as real photographic child sexual abuse material.
  • Data Privacy Concerns: A 2024 survey reported by PR Newswire[8] revealed that 46% of parents are apprehensive about their children sharing personal data online. In addition, the use of AI-powered surveillance software in schools, intended to monitor students’ online activities for safety purposes, has led to unintended exposures of sensitive information. For instance, an investigation by AP News[9] uncovered that nearly 3,500 unredacted student documents were inadvertently accessible, highlighting substantial cybersecurity risks associated with such monitoring practices.
  • Endless Scrolling and Personalized Notifications: Social media features like endless scrolling and personalized notifications are designed to maximize user engagement but pose significant threats to the mental health and brain development of children and adolescents. A study from Oxford University[10] found a direct relationship between time spent on social media and mental health issues, with some teenagers using these platforms for up to eight hours daily.
  • AI-Driven Online Grooming: AI can be misused to impersonate peers or trusted individuals, facilitating online grooming and exploitation of children. The Child Rescue Coalition[11] highlights that AI’s ability to create realistic personas can be exploited by predators to gain the trust of young users, leading to potential abuse. This misuse of AI technology emphasizes the necessity for vigilant monitoring and education to protect children from such sophisticated threats.
  • Algorithmic Amplification of Harmful Content: AI algorithms can also promote harmful content to children by optimizing for engagement without adequate safeguards. For example, TikTok’s ‘For You’ feed has been found to risk pushing children and young people towards harmful mental health content. Amnesty International[12] highlights the need for stricter regulations and responsible algorithm design to prevent the dissemination of content that could negatively impact children’s well-being.

Addressing these risks requires a collaborative effort among policymakers, the tech industry, educators, and parents to establish ethical guidelines and protective measures in the development, deployment, and governance of AI technologies affecting children.

Opportunities for Positive AI Integration

While AI presents notable challenges, when thoughtfully designed and used with meaningful safeguards, it holds significant potential to enhance the digital safety and well-being of children and youth.

  1. Access to Education in Low-Resource Settings: AI platforms can provide virtual tutors, language translation, and educational content to children in remote or underserved areas, bridging gaps in educational access. For example, platforms like Wordly[13] offer live audio translations and captions in multiple languages, facilitating learning for students in diverse linguistic settings.
  2. Personalized and Inclusive Learning: AI-powered educational tools can customize learning materials to match each child’s pace, strengths, and areas for improvement. These technologies can also offer vital support for children with disabilities, enhancing their learning and communication abilities. For instance, innovations such as eye-tracking systems and specialized keyboards offer alternative ways for learners[14] with disabilities to engage with educational content, fostering greater inclusivity and accessibility.
  3. Health Information & Self-Care Guidance: AI health apps[15] can provide young people with reliable, age-appropriate health information, empowering them to make informed decisions about their physical and mental well-being.
  4. Enhanced Creativity & Innovation: AI-powered creative tools (e.g., music generators, design platforms, storytelling apps) allow children and adolescents to experiment, create, and innovate easily.

Recognizing the transformative potential of AI to positively shape young people’s lives, DTH-Lab’s goal is to ensure that these technologies are designed and implemented in ways that truly serve the needs and rights of children and youth. The Lab understands that meaningful youth engagement is key to realizing the benefits of AI, especially in advancing health equity, education, and well-being. That’s why DTH-Lab actively partners with young people, empowering them not just as beneficiaries, but as co-creators and decision-makers in shaping ethical, inclusive, and rights-based digital health futures.

By connecting young people with policymakers, technology companies, and other stakeholders, the Lab drives change in four key areas:

  1. Digital First Health Systems: Co-creating health systems that prioritize digital solutions, ensuring they are designed with and for young people.  
  2. Digital and Data Governance: Advancing value-based governance models to oversee the digital transformation of health, emphasizing transparency and accountability.
  3. Digital Determinants of Health: Addressing factors within the digital environment that impact health and well-being, such as digital literacy and online safety. 
  4. Digital Citizenship for Health: Empowering young people to meaningfully engage in digital, health, and civic spaces.

Through initiatives like the Regional Youth Champions[16] cohort, the #MyHealthFutures Youth Network,[17] and the Research Fellowship,[18] DTH-Lab ensures that youth are not only beneficiaries but also active partners in co-creating health futures.

Call to Action

To ensure that AI is developed and implemented with children’s and young people’s best interests at heart, we call on both the public and private sectors to take immediate action. Specifically, we recommend prioritizing and investing in the following areas:

  • Addressing digital illiteracy to ensure children and young people can use AI safely and effectively.
  • Making digital solutions, including AI tools, more accessible to marginalized and underserved communities.
  • Empowering young innovators and entrepreneurs to develop child-friendly AI solutions.
  • Preserving human interaction and support within AI-driven services, especially in education and health.
  • Developing inclusive digital strategies that protect and promote children’s rights.
  • Promoting a culture of continuous innovation grounded in sustainable and ethical AI practices.
  • Investing in core infrastructure and addressing the digital determinants of health.
  • Empowering educators and the child protection workforce to use AI responsibly.
  • Protecting children’s trust by strengthening data privacy and security measures.
  • Engaging children, families, and communities through a whole-of-society approach in the design and governance of AI systems.

These recommendations represent just a few of the immediate steps needed to ensure AI technologies are designed and implemented with children’s and young people’s best interests at heart. For a more comprehensive set of recommendations, particularly tailored for governments and policymakers working on digital first health systems, we encourage you to read the Regional Youth Champions (2023-24 cohort) policy brief.[19]

Conclusion

Designing AI technologies that safeguard and empower young people requires multisectoral collaboration. The DTH-Lab is dedicated to leading these efforts and calls on all stakeholders to join in shaping a digital future that prioritizes children’s rights and well-being. To access additional resources and explore collaboration opportunities, please visit the DTH-Lab’s website.[20]

References

Internet Watch Foundation. (2024). IWF Annual Report 2024. Retrieved from https://www.iwf.org.uk/report/2024-annual-report

PR Newswire. (2024). Parents Cautiously Optimistic on AI in Schools: Content Safety and Data Privacy Among Top Worries. Retrieved from https://www.prnewswire.com/news-releases/parents-cautiously-optimistic-on-ai-in-schools-content-safety-and-data-privacy-among-top-worries-302227077.html

The Times. The link between heavy social media use and teenage anxiety. Retrieved from https://www.thetimes.com/uk/healthcare/article/social-media-is-linked-to-anxiety-in-teenagers-say-researchers-0dczvtgtk?region=global

Lurye, S., & Bryan, C. (2025, March 12). Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks. Associated Press. Retrieved from https://apnews.com/article/25a3946727397951fd42324139aaf70f

Child Rescue Coalition. (n.d.). The Dark Side of AI: Risks to Children. Retrieved from https://childrescuecoalition.org/educations/the-dark-side-of-ai-risks-to-children/

Amnesty International. (2023). Driven Into the Darkness. Retrieved from https://www.amnesty.org/en/latest/news/2023/11/tiktok-risks-pushing-children-towards-harmful-content/

[1] Imane Lakbachi is a youth and gender equality advocate from Morocco with a strong background in Computer Science and Multimedia. Imane currently serves as the Focal Point of the Digital Transformations for Health Lab (DTH-Lab) to the WHO Youth Council (https://www.who.int/initiatives/who-youth-engagement/who-youth-council) and holds the position of Director of Network Engagement at the International Youth Alliance for Family Planning (IYAFP).

[2] DTH-Lab (https://dthlab.org/) is a global consortium of partners working to drive the implementation of The Lancet and Financial Times Commission on Governing Health Futures 2030’s recommendations for value-based digital transformations for health, co-created with young people. DTH-Lab operates through a distributive governance model, led by three core partners: Ashoka University (India), DTH-Lab (hosted by the University of Geneva, Switzerland), and PharmAccess (Nigeria).

[3] https://www.pas.va/en/events/2025/ai_children.html

[4] www.pas.va

[5] https://childhood.org/

[6] https://www.unigre.it/en/anthropology/

[7] https://www.iwf.org.uk/news-media/news/new-ai-child-sexual-abuse-laws-announced-following-iwf-campaign/

[8] https://www.prnewswire.com/news-releases/parents-cautiously-optimistic-on-ai-in-schools-content-safety-and-data-privacy-among-top-worries-302227077.html

[9] https://apnews.com/article/ai-school-chromebook-gaggle-goguardian-securly-25a3946727397951fd42324139aaf70f

[10] https://www.thetimes.com/uk/healthcare/article/social-media-is-linked-to-anxiety-in-teenagers-say-researchers-0dczvtgtk?region=global

[11] https://childrescuecoalition.org/educations/the-dark-side-of-ai-risks-to-children/

[12] https://www.amnesty.org/en/latest/news/2023/11/tiktok-risks-pushing-children-towards-harmful-content/

[13] https://www.wordly.ai/education-translation

[14] https://www.jetlearn.com/blog/how-ai-can-help-learning-disabilities

[15] https://www.weforum.org/stories/2024/09/ai-diagnostics-health-outcomes/

[16] https://dthlab.org/regional-youth-champions/

[17] https://dthlab.org/myhealthfutures/

[18] https://dthlab.org/research-fellows/

[19] https://dthlab.org/embracing-digital-first-health-systems/

[20] https://dthlab.org/