Children’s Information Seeking and Trust Towards AI
Children grow up surrounded by information — from conversations to books, media, and online sources. Yet not all sources are equally trustworthy, and distinguishing credible information from unreliable content is a persistent challenge. This issue is made more complex by generative AI tools like ChatGPT. Unlike traditional sources, generative AI remixes information from different sources, making it difficult to trace where any single piece of information comes from. Its human-like conversational style can further lead children to over-trust or uncritically accept AI-generated responses.
In this series of projects, we examine how children ask questions and develop trust in AI by exploring three areas:
- The underlying factors that predict children's information-seeking and trust, such as cognitive ability, language processing, anthropomorphism, and critical thinking.
- The processes of information gathering and evaluation, including how children selectively trust human versus AI sources and how they detect or monitor errors in AI-generated responses.
- The implications of AI reliance, such as increased trust in flawed sources, susceptibility to misinformation, and changes in children's inquiry and reasoning strategies.
Question Asking and Evaluation in the Wild
We deployed a home-based, child-friendly AI chatbot, Curio the Catbot, that answers any questions children initiate and prompts them to evaluate the trustworthiness of its answers. We collected nearly 10,000 raw questions from more than 50 children who used the AI for over two weeks, and found that:
- Children primarily sought a wide range of factual information from the AI, but they also asked personal questions, either about the AI or about themselves, likely out of playfulness.
- Children engaged in a form of curiosity-driven information seeking that included follow-up questions and deeper exploration, resembling passages of intellectual search.
- Children appeared sensitive and vigilant to the accuracy of the information they received, and they employed different follow-up strategies based on their evaluations.
Children’s Selective Trust in AI vs Human Informants
We are currently conducting an in-lab experimental study on children's trust in generative AI chatbots for health-based information. In this study, children are asked whether they believe grownups, chatbots, and doctors can provide them with accurate information about their bodies and staying healthy. Children are also tasked with recognizing different kinds of errors (typos, factual inaccuracies, and inconsistencies) that chatbots and people might make when answering questions online.
Upcoming Studies
- Can children evaluate and monitor information biases?
- How can we design educational programs and AI systems to support children in developing healthy skepticism and effective strategies for detecting errors?
Selected Publications
- Oh, S., Zhang, C., Girouard-Hallam, L., Zhou, Z., March, H., Jayaramu, S., & Xu, Y. (2025). “Hey Curio, Can You Tell Me More?”: Children’s Information-Seeking and Trust in AI. In Proceedings of the ACM International Conference on Interaction Design and Children (IDC ‘25).