Is AI Too Human? Children’s Perception and Relationships
Children’s daily lives increasingly involve AI entities that emulate human-like speech and language. Because children’s learning and development are closely tied to their social interactions, the blurring of boundaries between AI and humans may have important implications for child development and everyday functioning. For example, children may become more vulnerable to undue influences such as advertisements and misinformation, especially if they believe these messages come from trusted human sources.
In this series of projects, we aim to understand children’s perceptions of AI agents. In particular, we are interested in questions such as: Do children see AI as having human-like attributes? Do they develop any form of “relationship” with AI? How do they distinguish AI from the people they interact with, and what heuristics do they use? This work comprises a collection of studies that use experiments and cognitive interviews to explore these questions.
Highlights
- Our study showed mixed patterns in how children categorize AI. Some children perceived AI agents as humans, while others saw them as technological artifacts. A considerable portion, however, struggled to place AI in either category, describing it instead as “magical” or “like a person, but not a person” (paper at CHI 2020).
- Children responded affirmatively when asked whether they felt safe with AI-based media characters, trusted them, or treated them as friends. This suggests that “parasocial relationships” may form through AI interaction (paper in the International Journal of Child-Computer Interaction).
- Children perceived AI partners as having some degree of agency but lacking the ability to experience emotions and sensations, with clear developmental differences across ages (paper in Computers in Human Behavior: Artificial Humans).
Upcoming Studies
Just as adults are sometimes unsure whether they are interacting with a person or a bot behind the screen, children may face similar uncertainty. In our upcoming studies, we aim to understand children’s ability to distinguish AI-generated speech from human-generated speech in an information-seeking context. As a starting point, we focus on subtle “human-indicator cues,” such as praise, emotion, humor, and mental state language. Our guiding questions include:
- Do social cues like praise or emotional language increase children’s trust in voice-based answers?
- Are children more likely to misidentify AI voices as human when these cues are present?
- How do factors like age, prior experience with AI, or anthropomorphic beliefs shape children’s judgments?
Want a taste of the study? Listen to two answers to kids’ questions: can you guess which one is from a robot and which is from a real person?
Selected Publications
- Xu, Y., Thomas, T., Yu, C. L., & Pan, E. Z. (2025). What makes children perceive or not perceive minds in generative AI? Computers in Human Behavior: Artificial Humans. [DOI]
- Xu, Y., Thomas, T., Li, Z., Chan, M., Lin, G., & Moore, K. (2024). Examining children’s perceptions of AI-enabled interactive media characters. International Journal of Child-Computer Interaction. [DOI]
- Li, Z., Thomas, T., Yu, C. L., & Xu, Y. (2024, June). “I Said Knight, Not Night!”: Children’s communication breakdowns and repairs with AI versus human partners. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference. [DOI]
- Xu, Y., & Warschauer, M. (2020, April). What are you talking to?: Understanding children’s perceptions of conversational agents. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. [DOI]