A child watches a PBS KIDS show while the TV character encourages curiosity by asking questions and giving positive feedback.

STEM Learning With AI-Enabled Television Characters

In this project, we partner with PBS KIDS to integrate conversational agents into children’s STEM television shows, enabling contingent interaction with media characters that supports active engagement and learning. Ultimately, we aim to distribute these conversational videos as publicly accessible content on PBS KIDS platforms, reaching millions of children across the country.

Rosita talks about going to a picnic in both English and Spanish.

AI Reading Partners to Support Language and Literacy Development

To enrich children’s home literacy environments, this project comprises a series of studies investigating how AI can support children’s language learning through storybook reading. Grounded in the framework of dialogic reading, we have designed and tested AI-assisted reading systems for both narrative and expository stories. These systems are intended to support both children’s independent reading and joint reading between parents and children.

A child and a robot face each other with speech bubbles between them.

Is AI Too Human? Children’s Perception and Relationships

In this series of projects, we aim to understand children’s perceptions of AI agents. In particular, we are interested in questions such as: Do children see AI as having human-like attributes? Do they develop any form of “relationship” with AI? How do they distinguish AI from the people they interact with, and what heuristics do they use to do so?

Speech bubble word cloud with words like “why,” “how,” “when,” “where,” and “who.”

Children’s Information Seeking and Trust Towards AI

Children learn a great deal by asking questions of those around them, yet generative AI, now widely used as an information source, does not always provide reliable answers. To safeguard children’s learning, we examine how children ask questions of AI and develop trust in it by identifying: 1) the predictors of AI-directed question-asking, 2) the processes through which children evaluate AI responses, and 3) the consequences of relying on AI. This project will inform curricula that promote productive AI use and guide the design of trustworthy educational AI.

People using technology with a world map in the background.

Generative AI and Youth Learning Across the Globe

In this series of studies, we examine adolescents’ use of generative AI in learning around the globe through a two-pronged approach: first, by describing the phenomenon, that is, how adolescents are using AI in their learning processes; and second, by understanding the antecedents, that is, the individual, social, and contextual factors that shape how and why students choose to engage with AI in the first place.

Diverse group of people greeting each other in different languages.

Culturally and Linguistically Responsible AI

This project investigates how commercial automatic speech recognition (ASR) systems handle the speech of Spanish-English bilingual children, particularly those from linguistically diverse backgrounds. We analyze how features such as age, speech consistency, and dialect affect recognition accuracy, with the goal of making voice-based educational tools more equitable and inclusive.
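
A minimal sketch of the kind of accuracy analysis this involves is shown below, assuming transcripts stored in a CSV with reference and hypothesis columns and the open-source jiwer library for word error rate (WER); the file name, column names, and grouping feature are illustrative assumptions, not the project’s actual data or pipeline.

```python
# Illustrative sketch: compare ASR word error rate (WER) across child subgroups.
# Assumes a CSV with columns "reference" (human transcript), "hypothesis"
# (ASR output), and a grouping feature such as "dialect"; these names, the
# file, and the jiwer dependency are assumptions made for illustration.
import pandas as pd
from jiwer import wer

df = pd.read_csv("asr_transcripts.csv")  # hypothetical file

def group_wer(frame: pd.DataFrame) -> float:
    """Corpus-level WER: pool references and hypotheses within one group."""
    return wer(frame["reference"].tolist(), frame["hypothesis"].tolist())

# WER per dialect group, e.g. to check whether recognition accuracy differs
# for children speaking different Spanish or English varieties.
for dialect, frame in df.groupby("dialect"):
    print(f"{dialect}: WER = {group_wer(frame):.3f}")
```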

Children using different devices to learn, connected to an AI cloud with books above.

Improving AI Algorithms and Systems for Personalized Learning at Scale

While off-the-shelf AI models are powerful, there is still much room for improvement to better support educational purposes. For example, AI can be improved to more accurately detect students’ knowledge states, deliver instructional moves aligned with evidence-based strategies, and recognize its own limitations when providing answers. We collaborate with computer scientists to advance algorithms and system designs, with the goal of creating scalable and responsible AI tools tailored for educational use across a variety of learning settings.
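
As a hedged illustration of what “detecting students’ knowledge states” can mean in practice, the sketch below shows a standard Bayesian Knowledge Tracing update; the parameter values and example response sequence are placeholders, and this generic method stands in for, rather than represents, the algorithms developed in this collaboration.

```python
# Illustrative sketch of Bayesian Knowledge Tracing (BKT), one standard way to
# estimate a student's latent knowledge state from correct/incorrect responses.
# Parameter values below are placeholders, not fitted estimates.

def bkt_update(p_know: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2, learn: float = 0.15) -> float:
    """Return P(student knows the skill) after observing one response."""
    if correct:
        numerator = p_know * (1 - slip)
        denominator = numerator + (1 - p_know) * guess
    else:
        numerator = p_know * slip
        denominator = numerator + (1 - p_know) * (1 - guess)
    posterior = numerator / denominator          # evidence from this response
    return posterior + (1 - posterior) * learn   # chance of learning afterward

# Example: a student starts with a 30% chance of knowing the skill and
# answers three items: correct, incorrect, correct.
p = 0.3
for answer in [True, False, True]:
    p = bkt_update(p, answer)
    print(f"Estimated P(known) = {p:.2f}")
```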