Reflecting on AI tools in education this week has left me feeling quite anxious and confused, which has led me to do some research in order to better understand and articulate my concerns. I recognize the significant benefits AI offers in education, such as assisting with workload, providing more resources, and adapting learning tools for more inclusive education. AI is a very important topic of conversation for us as future educators and is something many of us are both excited and worried about. We all want to make sure that we integrate AI into our classrooms in a safe and meaningful way. While I appreciate learning about AI tools and their potential to support educators, I also feel that most of the conversations we are having focus primarily on how AI can serve our convenience, efficiency, and productivity. As future educators, we are encouraged to explore what AI can do for us, but we spend far less time understanding the deeper ethical, social, and developmental risks it presents, particularly for children.
Some of these concerns include impacts on children’s cognitive, social, and emotional development, such as reduced critical thinking, reduced creativity, and a diminished ability to distinguish what is real from what is artificial. We must also consider larger societal harms, such as the increase in AI-generated Child Sexual Abuse Material (CSAM), environmental impact, data bias, discrimination, and privacy violations. AI can perpetuate inequalities and directly harm already vulnerable communities, and while educators can help mitigate some risks, these issues require serious reflection and responsibility. We must also equip students with the critical thinking skills to navigate these complexities.
This week, I have also been reflecting on what we are learning about the First Peoples Principles of Learning. These principles are not sets of lesson plans or instructional units, but values and an inclusive approach to education that honors all learners. They emphasize interconnectedness, responsibility, and the consequences of our actions, reminding us that learning should support the well-being of individuals, communities, the land, and future generations. While I have read about how AI can support meaningful initiatives such as language revitalization and community-based projects, I have been thinking about what truly supports collective well-being and what does not.
There is an increasing amount of research coming out about both the benefits and the risks of AI, and there are no right or wrong answers. After reflecting on this, here are some of the questions I am left with:
- Is our convenience and productivity worth the harms of AI?
- Does this have children’s best interests in mind?
- “This is just the way the world is heading, whether we like it or not”: what is the balance between acceptance, ignorance, and standing up for human rights?
- If I am using AI for education knowing it causes harm to others, does this align with my morals?
- Am I overreacting about these risks?
- What are the best ways to learn about the benefits and the risks, and to facilitate these conversations with children?
I am not opposed to AI, and I am excited to continue learning about how it can be used in education. However, I believe we must engage more critically and intentionally with both its benefits and harms. Responsible integration means choosing uses for AI that genuinely support students, communities, and the environment, not simply using it for convenience or entertainment. I feel disappointed that this has not been discussed more in our program, because it feels very important to understand the depth of these decisions. The fact is, it’s easy to see only the benefits of something when we aren’t personally being harmed. This is true for many aspects of our society, and it is what perpetuates inequality.
I think that this discomfort and these questions are how we grow; they push us to do better and to understand different perspectives. I am looking forward to continuing to learn about AI and beginning to answer some of my questions.
Here are a few resources I have been reading to learn more:
Indigenous data stewardship stands against extractivist AI
Indigenous-Led AI: How Indigenous Knowledge Systems Could Push AI to be More Inclusive
Abundant intelligences: placing AI within Indigenous knowledge frameworks
AI Friend? Risks, Implications, and Recommendations on Generative AI for Children