Educational Artificial Intelligence, Child Rights, and Human Care in Early Childhood
The European Educational Researcher, Online-First Articles, pp. 33-55
OPEN ACCESS VIEWS: 101 DOWNLOADS: 143 Publication date: 15 Oct 2025
ABSTRACT
This article examines the use of artificial intelligence (AI) in early childhood education from a rights-based perspective, drawing on a critical interpretive synthesis (CIS) of the literature published between 2019 and 2025. A typology of four uses in early childhood —Tutor, Tool, Companion, and Tracker (THCR)— is proposed and each category is mapped against the core principles of the United Nations Convention on the Rights of the Child: privacy, non-discrimination, best interests, and participation. The contribution includes: (a) a risk-safeguard matrix differentiated by type of AI; (b) a logic model and theory of change for care-centered implementations; and (c) the SAFE LEARN checklist (Safety by design, Agency/assent, Fairness, Explainability, Learning alignment, Educator capacity, Accountability, Risk logging, Non-replacement of care). Implications for policy and practice are discussed for schools, administrations, and providers, emphasizing human mediation and verifiable equity as minimal conditions for acceptance. This work offers a pioneering framework that connects international normative principles with operational instruments, providing immediate guidance for the Education 2030 agenda and Sustainable Development Goals (SDGs) 4 and 16.
KEYWORDS
Early childhood, AI ethics, Child rights, Surveillance, Equity, Theory of change, CIS.
CITATION (APA)
Garcia Peinado, R. (2025). Educational Artificial Intelligence, Child Rights, and Human Care in Early Childhood. The European Educational Researcher. https://doi.org/10.31757/euer.833
REFERENCES
- Baethge, C., Goldbeck-Wood, S., & Mertens, S. (2019). SANRA—A scale for the quality assessment of narrative review articles. Research Integrity and Peer Review, 4(1), 5. https://doi.org/10.1186/s41073-019-0064-8
- Bronfenbrenner, U. (1979). The ecology of human development: Experiments by nature and design. Harvard University Press.
- Buarque, G. (2023). Artificial intelligence and algorithmic discrimination: a reflection on risk and vulnerability in childhood. Brazilian Journal of Law, Technology and Innovation, 1(2), 63–86. https://doi.org/10.59224/bjlti.v1i2.63-86
- Devillers, L., & Cowie, R. (2023). Ethical considerations on affective computing: An overview. Proceedings of the IEEE, 111(10), 1445–1458. https://doi.org/10.1109/JPROC.2023.3315217
- Dixon-Woods, M., Cavers, D., Agarwal, S., Annandale, E., Arthur, A., Harvey, J., Hsu, R., Katbamna, S., Olsen, R., Smith, L., Riley, R., & Sutton, A. J. (2006). Conducting a critical interpretive synthesis of the literature on access to healthcare by vulnerable groups. BMC Medical Research Methodology, 6, 35. https://doi.org/10.1186/1471-2288-6-35
- Franz, L., Goodwin, C. D., Rieder, A., Matheis, M., & Damiano, D. L. (2022). Early intervention for very young children with or at high likelihood for autism spectrum disorder: An overview of reviews. Developmental Medicine & Child Neurology, 64(9), 1063–1076. https://doi.org/10.1111/dmcn.15258
- Grace, T. D., Abel, C., & Salen, K. (2023). Child-centered design in the digital world: Investigating the implications of the Age Appropriate Design Code for interactive digital media. In Proceedings of the 22nd Annual ACM Interaction Design and Children Conference (pp. 289–297). https://doi.org/10.1145/3585088.3589370
- Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x
- Holstein, K., & Doroudi, S. (2022). Equity and artificial intelligence in education. In The ethics of artificial intelligence in education (pp. 151-173). Routledge.
- Johnson, W. L., & Lester, J. C. (2016). Face-to-face interaction with pedagogical agents, twenty years later. International Journal of Artificial Intelligence in Education, 26(1), 25–36. https://doi.org/10.1007/s40593-015-0065-9
- Kurian, N. (2023). Toddlers and robots? The ethics of supporting young children with disabilities with AI companions and the implications for children’s rights. International Journal of Human Rights Education, 7(1), 9.
- Lemaignan, S., Newbutt, N., Rice, L., Daly, J., & Charisi, V. (2021). UNICEF guidance on AI for children: Application to the design of a social robot for and with autistic children. arXiv. https://doi.org/10.48550/arXiv.2108.12166
- Leong, W. Y., & Zhang, J. B. (2025). Ethical design of AI for education and learning systems. ASM Science Journal, 20(1), 1-9. https://doi.org/10.32802/asmscj.2025.1917
- McStay, A., & Rosner, G. (2021). Emotional artificial intelligence in children’s toys and devices: Ethics, governance and practical remedies. Big Data & Society, 8(1). https://doi.org/10.1177/2053951721994877
- Miao, F., & Holmes, W. (2021). AI and education: Guidance for policy-makers. UNESCO Publishing. ISBN 978-92-3-300165-7. https://discovery.ucl.ac.uk/id/eprint/10130180/1/Miao%20and%20Holmes%20-%202021%20-%20AI%20and%20education%20guidance%20for%20policy-makers.pdf
- Morandín-Ahuerma, F. (2023). Ten UNESCO recommendations on the ethics of artificial intelligence. OSF. https://www.researchgate.net/publication/374234687_Ten_UNESCO_Recommendations_on_the_Ethics_of_Artificial_Intelligence
- National Institute of Standards and Technology (NIST). (2023). Artificial intelligence risk management framework (AI RMF 1.0). https://doi.org/10.6028/NIST.AI.100-1
- Noddings, N. (1985). Caring: A feminine approach to ethics. Center for Research on Women, Stanford University.
- Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253. https://doi.org/10.1518/001872097778543886
- Skitka, L. J., Mosier, K. L., & Burdick, M. (2000). Accountability and automation bias. International Journal of Human-Computer Studies, 52(4), 701–717. https://doi.org/10.1006/ijhc.1999.0349
- Smuha, N. A. (2021). Beyond the individual: Governing AI’s societal harm. Internet Policy Review, 10(3). https://doi.org/10.14763/2021.3.1574
- Saxena, C. (2024). Ethical considerations in affective computing. In M. Garg & R. S. Prasad (Eds.), Affective computing for social good (The Springer Series in Applied Machine Learning). Springer, Cham. https://doi.org/10.1007/978-3-031-63821-3_13
- United Nations. (1989). Convention on the rights of the child. https://www.ohchr.org/en/instruments-mechanisms/instruments/convention-rights-child
- United Nations Children's Fund (UNICEF) (2021). Policy guidance on AI for children. https://www.unicef.org/innocenti/es/media/1351/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021_ES.pdf
- Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
- World Economic Forum (2024). Shaping the Future of Learning: The Role of AI in Education 4.0. https://www3.weforum.org/docs/WEF_Shaping_the_Future_of_Learning_2024.pdf
LICENSE

This work is licensed under a Creative Commons Attribution 4.0 International License.