Emily Weinstein, EdD & Beck Tench, PhD Center for Digital Thriving at Project Zero, Harvard Graduate School of Education Cambridge, MA, USA

Listening to See: Uncovering Dilemmas with Youth and Generative AI

Effectively supporting youth requires good insights, not just good intentions. This is admittedly complicated in the context of generative artificial intelligence: it can feel as though there is hardly time to slow down for thoughtful reflection as technologies change at an ever-accelerating pace. According to the International Telecommunication Union (Abdullah, 2025), it took 75 years for the telephone to reach 75 million users; for the radio, it was 38 years; for the television, 13 years; and for the internet, four years. ChatGPT crossed the 100 million mark in only two months.

Youth represent 30% of the global population, and young people are known early adopters of new technologies (5Rights Foundation, 2025). Yet surfacing “good insights” about their experiences can prove challenging because so much about young people’s digital experiences remains invisible to most adults. Technology companies incorporate features and algorithms in ways that are often unseen, and young people use technologies in ways that aren’t always foreseen. Youth have experiences behind their screens that are routinely out of adult lines of sight. Limited AI literacy compounds the problem: many adults lack the vocabulary to ask the right questions or make sense of the answers. In this essay, we argue that we must listen to see – that is, in order to grasp the new dilemmas that come up for youth as they engage with generative AI, adults in positions of influence (policymakers, NGO leaders, technologists, educators, parents, and researchers) need an ongoing and purposeful practice of listening to young people.

The challenge of seeing not just “what” they do, but “as” they do

The risks of technology use coexist with considerable benefits for youth, who often engage with technologies to learn, play, and connect in meaningful ways. Yet the stakes are high. This is evident in cases like the heartbreaking story of 14-year-old Sewell Setzer III, who died by suicide after a chat in which a bot encouraged him to “please come home to me” – an exchange later discovered by adults in his life who had not known he was using this kind of app. Less extreme examples are relevant, too, as teens report using bots to covertly impersonate and evade parents (Common Sense Media, Hopelab, & Center for Digital Thriving, 2024), and as bots offer ready guidance on topics like how to mask the smell of alcohol (Fowler, 2023).

There is an important lesson from social media research that may be relevant as adults turn our collective attention to the range of dilemmas related to generative AI: even when adults see what young people do, adults may not see it as they do. In the context of social media use, there are at least two reasons for this. First, scholars have for years written about social steganography in teens’ social media communication: the practice of “hiding in plain sight” through crafted messages that have different meanings to different audiences (Marwick & boyd, 2014). Song lyrics or an emoji may be “read” one way by those who are part of a teen’s intended audience, while their intended meaning is missed or misinterpreted by others.

Second, adults may fail to see a digital interaction as young people do because the adults have outgrown developmental sensitivities that fundamentally color adolescents’ experiences. Developmental changes during adolescence elevate sensitivity to social information and attunement to potential social evaluation (Somerville, 2013). Practically, we see the influence of these changes as teens describe digital practices like scrutinizing others’ Snapscores to see if a friend has been ignoring them while responding to others, or examining Venmo transactions for cues about peer dynamics and exclusion, or worrying about Instagram “follower ratios” – all of which might surprise even regular adult users of these apps (Weinstein & James, 2022).

Developmental sensitivities thus collide with design affordances of technologies in ways that transform adolescent experiences and how they interact with peers (Nesi et al., 2018). There are also good reasons to believe social media might interact with rapid changes during adolescence in behavior, cognition, and neurobiology that increase vulnerabilities to mental health challenges (Orben et al., 2024).

Such developmental considerations are likely to be relevant in the context of generative AI use, too. For example, consider a young person who is bringing their most intimate questions to AI at a key developmental moment when their self-concept is changing and taking new shape. They might ask bots questions like: Is it normal to…? What does it mean if I…? How do I know if…? What should I do if…?

 


 

Figure 1. ChatGPT responses to hypothetical questions about a school dance (a common social event in US middle and high school).

In short, young people’s developmental stages will almost certainly influence the kinds of topics and disclosures that they bring to a generative AI (the ways they use it), their interpretations of the responses they get, and their sensitivity to these exchanges. Crucially, because adolescents are not a monolith – they have different strengths, contexts, circumstances, vulnerabilities, developmental trajectories – we should also expect their generative AI experiences to play out differently.

The concept of differential susceptibility is a resonant and well-supported principle across media research (Valkenburg & Peter, 2013). The 5 Cs of Media Use from the US Center of Excellence on Social Media and Youth Mental Health (American Academy of Pediatrics, 2024) distills key variables that may also be relevant to generative AI research and practice: (1) the child; (2) the content they engage with and their reactions to it; (3) calm, or the ways media use relates to a child’s emotional regulation; (4) crowding out, or the displacement of other activities by technology use; and (5) communication with supportive adults, which can be protective. Young people’s varied contexts in their offline lives are also well recognized to shape the meaning and impacts of their online experiences (Ito et al., 2020; Reich et al., 2012).

To summarize: it is a legitimate challenge for adults to “see” young people’s experiences with new technologies as they do, in ways that elucidate the contours of their interactions. This challenge calls for a practice of listening, by adults who have a dispositional orientation (Perkins et al., 1993) – inclination, sensitivity, and skills – toward curiosity.

Listening to see

Our research team at the Center for Digital Thriving, based at Project Zero at Harvard Graduate School of Education, has spent over a decade refining ways to “see” the dilemmas young people face with emerging technologies. Our goal is to help adults have young people’s backs, rather than looking over their shoulders (Jenkins, 2010), and to support public conversations about teens and technology that are youth-centered. We honor young people’s agency and competence through a design cycle that builds on youth voices to inform both research and resources.

Our design cycle starts and ends with listening to learn. We listen in a way that is inspired by a pedagogy of listening from the Reggio Emilia philosophy (Rinaldi, 2001) – a practice that began in a village in northern Italy and has since influenced early childhood education around the world (Foerch & Iuspa, 2016). Core to the philosophy is a view of the child as competent, capable, and inherently worthy of being listened to – because adults have much to learn from such listening. Reciprocal respect between children and adults underpins Reggio Emilia’s pedagogy and practices, which support democratic participation both within school walls and across the broader community (Turner & Wilson, 2010).

Just as educators in the Reggio Emilia philosophy ask open-ended questions and then listen with intention and respect, so do we as researchers. We routinely ask young people questions like: What are adults missing that you most need us to understand? What does it feel like to grow up with a particular app or technology? How do you navigate dilemmas and tensions that are related to tech? We create space for adult researchers to listen with and to teens’ thinking about such questions through regular advisory sessions, individual and small group interviews, focus groups, and by conducting surveys and then co-interpreting data with input from teens.

What we hear in young people’s responses routinely deepens or shifts our thinking. For example, in past conversations with our teen advisors, some described the pull to stay online when a friend is struggling (Weinstein & James, 2022). They felt torn between disconnecting for self-care – to get rest before a test the next day or to spend time with family – and the urge to be a good friend who answers late-night messages. We saw how these impulses could be pitted against each other in ways that create a pull to their phones that wasn’t simply a function of persuasive design or “addiction.” As we took the pressures they felt seriously, we also started to appreciate how widespread increases in adolescent mental health struggles (Centers for Disease Control and Prevention, 2024) could exact a hidden toll of friendship on those who are texted for support or who witness signals of peers’ distress on social media.

More recently, our research team has widened our focus beyond social media to include generative AI. We collaborated on a national study and collected perspectives from 1,545 US teens on “what adults should know about how teens use artificial intelligence (AI)” (Common Sense Media, Hopelab, & Center for Digital Thriving, 2024). Teens shared responses like:

We use it for very creative purposes, not just cheating on homework.

It helps me ask questions without feeling any pressure.

Some [teens], not all, can use it for getting test answers.

I use AI to help myself with making myself feel better about being who I am.

It listens.

In ongoing listening sessions with teens, which include open-ended advisory meetings, structured focus groups, and semi-structured interviews, we are now actively co-interpreting the survey responses and asking young people to share more about their own experiences with generative AI. In one such discussion, a teen shared a story about a family friend who had used ChatGPT to write a card to his wife; the wife was so moved by the card that she cried. The teen’s initial reaction was to frame this as “a really cool” use of AI.

Perhaps you have a knee-jerk reaction to the idea of ChatGPT authoring the text for that husband’s card; based on our experience sharing this dilemma with adults, we would venture that most adult readers are not so enthusiastic about outsourcing heartfelt messages to an algorithm. As researchers, we strive to lean into curiosity in these moments, and to respond in ways that convey: I believe you, I’m listening; please help me understand. This approach is rooted in respect for young people’s perspectives, and in a learned recognition that our own knee-jerk judgments can otherwise shut down conversations with teens.

Listening to see requires more than asking good questions; it involves engaging what Elbow (2008) called the believing game – listening with openness and trust toward young people’s experiences, especially when they differ from your own instincts and assumptions. Engaging the believing game means embracing a willingness to be changed by what you hear. In our conversation with teens about the ChatGPT-authored card, this meant genuinely wanting to understand the young person’s positive take, even though it differed from our own. We asked follow-up questions, inviting further reflections from both the teen who shared the story and others in the group. Other teens initially seemed to endorse the idea that this was positive: one shared, “I don’t see that [it’s] an ethical problem at all,” and another elaborated, “ChatGPT should be used to benefit not only yourself but others also.”

In the Reggio Emilia philosophy, listening is not only a way to see, but also to be seen; adults “listen to children not only because we can help them but also because they can help us” (Rinaldi, 2001). The researcher facilitating this session (Emily) listened, and then – without suggesting that the teens were “wrong” and she was “right” – also shared she had a differing view:

I can follow everything that you are saying in your logic about why … [this] seems like a positive use … it’s so interesting, because – [at the same time] – there is something about it that makes me feel unsettled, almost.

In response, teens made further efforts to convey their reactions. One teen compared the GPT-authored card to someone buying a ready-made bouquet of flowers: “If you bought a whole big flower arrangement and you didn’t make it yourself, like, it’s still a nice effort.” As the group built upon the bouquet analogy, all of us – teens and adult researchers alike – refined our views and came to appreciate dimensions of other perspectives. The aim of this kind of listening is not for one person to convince another, but rather to engage in ways that surface the logic that underpins different perspectives, which we see as an essential step in our listening cycle.

In subsequent sessions with other groups of teens, we learned that young people vary in their perspectives and draw “lines” in different places related to using AI to help with writing a card for a loved one. Some are comfortable using AI to brainstorm ideas for the card or for wordsmithing, while others object once the tool starts to compose full sentences – or the entire card – for them. As one teen explained, “Sometimes you struggle getting [thoughts] from your head into words – that’s probably most of the time okay, but asking [ChatGPT] to write an entire card, like, that’s a line.” The conversations that followed gave us a sense that “the line” was a generative concept, ripe for surfacing data about where young people draw lines in other uses of AI, including with schoolwork. With this insight uncovered, we moved into the next phase of our listening cycle: to introduce a design invitation that makes young people’s thinking about these “lines” visible.

Making thinking visible

Thinking is an internal process that is often invisible, not only to others but also to ourselves (Ritchhart & Perkins, 2008). When we externalize our thoughts – through written or spoken words, images, 3-D models – they become visible artifacts we can reflect and build upon. When visible thinking practices are used in classrooms, they deepen students’ learning as well as their dispositions as good thinkers (Project Zero & Reggio Children, 2001). When sketches are used in design work, these visualizations “convey conceptions of reality” (Tversky, 2002). Design sketches externalize an idea or abstract concept and invite others to iterate and build upon it.

In our research group, we embrace this principle by sketching design prompts that invite teens to make their thinking about technology visible, ideally in ways that deepen awareness of their own thinking, and ours. Following up on the example of the ChatGPT-authored card and “the line” described by one teen, we created several sketches of “the line” as an abstract concept.

We then convened discussion sessions with new groups of teens. We told them about the genesis of “the line” idea, and invited them to engage with it by choosing a school-related assignment they wanted to explore as a group (e.g., write an essay, create a self-portrait, solve math problems, build an app). After selecting a specific assignment, we invited them to brainstorm all possible ways they might use generative AI to complete it. Notably, at the time of our sessions and of this writing, using AI for homework assignments is considered cheating in many K-12 classrooms in the US. As a result, sharing how to use AI for assignments is sometimes stigmatized and reserved for private conversations and TikTok channels. But the prompt in our sessions intentionally fostered a “judgment-free zone” for students to make such uses visible. We explained that the aim was to surface as many uses as possible, tabling for a moment the question of whether those uses were ethical or allowed.

The following example comes from one group’s brainstorm of the ways they might use AI to help write a five-page analytical essay on key themes in The Great Gatsby.

We then asked each teen to individually map these uses with “the line” design invitations above (Fig 4). The example below (Fig 5) shows six different teens’ mappings using the “Gray Area” invitation (Tench & Weinstein, 2025). The “Gray Area” sketch, which we came to call “grAIdients,” was a unanimous favorite across groups when we asked teens which sketch showed the most promise for understanding their relationship to “the line.”

The mappings showcased an important insight for our team: teens differ in their assessments of which generative AI uses are acceptable, unacceptable, and in the “gray area” between. This exercise invited further conversation about different uses and the reasoning behind them. Now, we are engaging with educators to explore how the “grAIdients” concept might inform generative AI policies that support essential learning goals, that can be narrowed to a specific assignment or scaled to an entire class, and that foster cross-generational conversation between educators and students through a structure that allows for nuance. This arc – from asking teens what adults are missing, through the ChatGPT-written card and the insight about “the line,” to the design invitation that surfaces data and responds to a real problem in US classrooms – shows how (a) listening to learn and (b) making thinking visible can reveal meaningful insights into the dilemmas young people face. These practices also illuminate potential paths forward, like frameworks that inform classroom policies.

Looking Forward

The rapid development and deployment of generative AI in our society require new conversations and new literacies to understand both the good and the harm that may come from these technologies. Research into young people’s experiences with social media over the last decade has surfaced valuable insights that can illuminate the contours of young people’s engagement with generative AI, including the ways developmental sensitivities collide with design affordances and the recognition that youth are not a monolith – they bring diverse perspectives, vulnerabilities, and strengths to their experiences with new technologies.

Time and again, we have seen through our own work that while it is challenging to “see” tech as teens do, listening is a key that unlocks insights – and interventions – about the dilemmas they face. As qualitative and youth voice researchers steeped in traditions of making thinking visible and Reggio Emilia’s pedagogy of listening (Project Zero & Reggio Children, 2001; Ritchhart et al., 2011), we benefit from methods and values that encourage a disposition of curiosity. Listening is an essential practice that positions adults to meet this moment alongside young people. Listening helps us to see – with clarity and focus – at a time when our technological horizon is shifting faster than our eyes can adjust.

Acknowledgments

Thank you to the Pontifical Academy of Sciences, the World Childhood Foundation, and the Institute of Anthropology of the Pontifical Gregorian University for inviting us to participate in this convening and compilation of essays. We are grateful to Carrie James, Dan Be Kim, and Eduardo Lara for their comments on an earlier version of this essay, to Daniel Wilson and Mara Krachevsky for their time and expertise in Reggio Emilia philosophy and practice, and to Howard Gardner for his ongoing support of our interests in drawing such connections. We are also indebted to the young people who have trusted us with their perspectives and stories. Our work on the projects and listening cycle described in this essay is generously supported by Pivotal Ventures, The Susan Crown Exchange, and Grant 63220 from the John Templeton Foundation. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of our funders or organizations.

References

5Rights Foundation. (2025). Children & AI design code. https://5rightsfoundation.com/wp-content/uploads/2025/03/5rights_AI_CODE_DIGITAL.pdf

Abdullah, N.S. (2025, March). Protect, provide, participate: Children first in the AI era [Conference presentation]. Ethical and Responsible AI: Roles and Responsibilities of Stakeholders and Regulation – Risks and Opportunities of AI for Children. Pontifical Academy of Sciences, Vatican City.

American Academy of Pediatrics. (2024). The 5 Cs of media use. https://www.aap.org/en/patient-care/media-and-children/center-of-excellence-on-social-media-and-youth-mental-health/5cs-of-media-use/

Centers for Disease Control and Prevention. (2024). Youth Risk Behavior Survey data summary & trends report: 2013-2023. US Department of Health and Human Services.

Common Sense Media, Hopelab, & Center for Digital Thriving. (2024). Teen and young adult perspectives on generative AI: Patterns of use, excitements, and concerns. https://www.commonsensemedia.org/research/teen-and-young-adult-perspectives-on-generative-ai

Elbow, P. (2008). The believing game or methodological believing. Journal of the Assembly for Expanded Perspectives on Learning, 14, Article 3.

Foerch, D.F., & Iuspa, F. (2016). The internationalization of the Reggio Emilia philosophy. Contrapontos, 16(2), 321-350.

Fowler, G.A. (2023, March 14). Snapchat tried to make a safe AI. It chats with me about booze and sex. The Washington Post. https://www.washingtonpost.com/technology/2023/03/14/snapchat-myai

Ito, M., Odgers, C., Schueller, S., Cabrera, J., Conaway, E., Cross, R., & Hernández, M. (2020). Social media and youth wellbeing: What we know and where we could go. Connected Learning Alliance.

Jenkins, H. (2010, October 18). Raising the digital generation. Pop Junctions. https://henryjenkins.org/blog/2010/10/raising_the_digital_generation.html

Marwick, A.E., & boyd, d. (2014). Networked privacy: How teenagers negotiate context in social media. New Media & Society, 16(7), 1051-1067.

Nesi, J., Choukas-Bradley, S., & Prinstein, M.J. (2018). Transformation of adolescent peer relations in the social media context: Part 1 – A theoretical framework and application to dyadic peer relationships. Clinical Child and Family Psychology Review, 21, 267-294.

Orben, A., Meier, A., Dalgleish, T., & Blakemore, S.J. (2024). Mechanisms linking social media use to adolescent mental health vulnerability. Nature Reviews Psychology, 3(6), 407-423.

Perkins, D.N., Jay, E., & Tishman, S. (1993). Beyond abilities: A dispositional theory of thinking. Merrill-Palmer Quarterly, 39(1), 1-21.

Project Zero, & Reggio Children. (2001). Making learning visible: Children as individual and group learners. Reggio Children.

Reich, S.M., Subrahmanyam, K., & Espinoza, G. (2012). Friending, IMing, and hanging out face-to-face: Overlap in adolescents’ online and offline social networks. Developmental Psychology, 48(2), 356-368.

Rinaldi, C. (2001). The pedagogy of listening: Listening perspective from Reggio Emilia. Innovations in Early Education: The International Reggio Exchange, 8(4), 1-4.

Ritchhart, R., & Perkins, D. (2008). Making thinking visible. Educational Leadership, 65(5), 57-61.

Ritchhart, R., Church, M., & Morrison, K. (2011). Making thinking visible: How to promote engagement, understanding, and independence for all learners. Jossey-Bass.

Somerville, L.H. (2013). The teenage brain: Sensitivity to social evaluation. Current Directions in Psychological Science, 22(2), 121-127.

Tench, B., & Weinstein, E. (2025). GrAIdients 1.0: A framework for making the ethically gray areas of generative AI visible. Center for Digital Thriving.

Turner, T., & Wilson, D.G. (2010). Reflections on documentation: A discussion with thought leaders from Reggio Emilia. Theory Into Practice, 49(1), 5-13.

Tversky, B. (2002, March). What do sketches say about thinking? In Proceedings of the 2002 AAAI Spring Symposium on Sketch Understanding (pp. 148-151). AAAI.

Valkenburg, P.M., & Peter, J. (2013). The differential susceptibility to media effects model. Journal of Communication, 63(2), 221-243. https://doi.org/10.1111/jcom.12024

Weinstein, E., & James, C. (2022). Behind their screens: What teens are facing (and adults are missing). MIT Press.