Welcome to my weekly reflection. This one is going to be a mish-mash of various musings on course content – I can’t pull out a particular theme to my thoughts from this week, so I figured I’d just put them all down together, stream-of-consciousness style (bonus: that way, you can tell that I haven’t used AI to write this reflection!).

Urban Dictionary definition of AI slop

First, I’m still kind of reeling from our discussion about the potential environmental impacts of generative AI. I found it really interesting hearing what generative AI chatbots had to say about the environmental impacts of their servers. This felt kind of dystopian – I’m picturing a culminating scene in a sci-fi movie. Maybe the protagonist is about to shut down a chatbot’s server, but the chatbot has gained enough sentience to fight to stay alive by lying to people. I obviously don’t believe that Gemini and ChatGPT are lying about the extent of their environmental impacts. However, is it far-fetched to assume that the information AI chatbots provide on their carbon footprints might be an underestimate? It would make sense for these chatbots to be programmed to present a favourable view of AI – that would be good for business. It was interesting that when Randy asked various chatbots about their environmental impact, they highlighted the other household appliances that consume water and electricity. I might be getting a bit too critical here, but… Anyways, I’m much more inclined to believe the research out of Western University that Nathan shared last week.

Side note: book recommendation

Speaking of sentient AI, I strongly recommend Kazuo Ishiguro’s novel, Klara and the Sun. I think it’s a great read for teachers. It takes place in a near-future world where wealthy children have AI companions to help with their social, emotional, and cognitive development. Klara and the Sun explores questions like, “What does it mean to love?” and “Will AI ever be able to replicate human emotions?” And like much of the science fiction and dystopian genres, it grapples with the consequences of unchecked technological innovation. 5 stars from me! And I believe it’s under 200 pages, making it tackle-able over reading break!

2021 Booker Prize winner, Klara and the Sun by Kazuo Ishiguro

Klara and the Sun is also soon to be a movie starring Jenna Ortega and produced by Taika Waititi (two of my faves!): https://www.screendaily.com/news/taika-waititi-says-klara-and-the-sun-will-be-coming-this-year/5213005.article

I am glad that researchers are homing in on the impacts of AI on natural resources and ecosystems. I wonder what the challenges of studying AI might be. I imagine that it is exceedingly difficult to conduct rigorous academic research into a field that is shifting so rapidly. I remember trying out ChatGPT in December 2022. My roommate at the time was studying software engineering, so I had an “in” … I feel like I was slightly ahead of the curve in terms of testing out and adopting generative AI. The GPT-5 series we have today is a COMPLETELY different beast than ChatGPT was 3–4 years ago. It wasn’t until recently that generative AI could search the web and incorporate accurate and up-to-date information. Even in the past few months, the technology has taken leaps and bounds. It must be difficult to pin AI down and study it – I talked about this in a reflection for my Language and Literacies class, which you can check out below. We had to read a fantastic article by Nomisha Kurian about conversational AI, mental health chatbots, and children’s literacy. Worth a skim, especially for future teachers!

“AI’s Empathy Gap” by Nomisha Kurian

My most common thought while reading this week’s article was “What?!”, closely followed by “Oh my goodness, society is doomed.” Nomisha Kurian clearly outlined the risks of unchecked conversational AI for children, using various examples that terrified me. What’s scarier is that, as Kurian said, “conversational AI is a click away” for kids everywhere – even those who try to avoid it, because the first thing that comes up when you make a quick Google search is the Google AI Overview.

Conversely, though, I think it’s important to recognise that conversational AI has made leaps and bounds since the failure of Microsoft’s chatbot, Tay (2016), and the countless other examples shared in the article. The example about the 12-year-old experiencing sexual abuse (and the ridiculously inappropriate response from the chatbot) is from 2018, which may only be eight years ago, but so much has changed even in the past year! GPT-4.5 passed the Turing Test in April 2025, when 73 percent of human judges in a UC San Diego study believed it was human. The development of conversational AI is outpacing the research surrounding it – it must be so difficult for academics like Nomisha Kurian to study AI considering that once their work gets approved/published, it might no longer be accurate – or maybe even obsolete! The responses that ChatGPT, Gemini, and other conversational AI models generate today are typically indistinguishable from information or advice given by a human – although, as Kurian outlined, holes in AI’s accuracy appear when it comes to providing empathetic and contextual responses to emotional disclosures.

I think that today, the greatest issue facing children who engage with conversational AI won’t be blatantly inappropriate responses, but responses that are remarkably human and almost correct. I really liked how Kurian summarised it in their section on “transparency and authenticity.” We need to make it clear to children that even though it might feel like they’re interacting with another person, chatting with AI is not the same as connecting with a human. This starts with education – what is AI? How does it work? Etc. We need to remind our students that AI responses cannot substitute for human interaction, and to encourage them to reach out to real people if they need help.

People on my social media feeds have been talking about using ChatGPT as a therapist, which, for lack of better language, really freaks me out! Here’s a link to a TikTok video about ranting to ChatGPT (because apparently linking TikTok videos is just something I do in these reading responses now!): https://vt.tiktok.com/ZSUusRQjT/.

I think that this is so dangerous! Even if you overlook the fact that AI can occasionally misinterpret a prompt and provide wildly harmful or unhelpful responses, there’s still the issue of conversational AI being designed to be agreeable. I think that’s a big danger with the personalisation aspect of AI – it tells you exactly what you want to hear. I’ve tried using ChatGPT to solve math problems, and if I say, “Wait, isn’t the answer 5?,” it’ll spit back an apologetic response assuring me that I must be right, and making up reasoning for my answer, even if I was totally wrong with 5. If we take this same concern and apply it to someone’s personal problem, ChatGPT will typically take their side unequivocally and maybe lead them into an echo chamber/spiral of sorts. Scary!

To connect back to the article, I strongly disagree with the idea of a mental health “counsellor”- or “therapist”-style chatbot for children. I can concede that we have a shortage of mental health professionals and that there are significant barriers to getting help (e.g., long wait lists for assessments, insurance). However, AI absolutely cannot replace therapy, especially for children and youth. Kids with mental health problems are some of our most vulnerable people – and those in the direst need of real human connection. We need to put strong guardrails on all forms of conversational AI when it comes to mental health – chatbots should not be counselling us; they should be reiterating that they are unable to provide advice and pointing users toward the services that they need. To summarise, that’s my main takeaway from this week’s article – we (teachers, parents, and policymakers) need to REGULATE children’s access to conversational AI, and lobby for software companies to put safeguards in place to protect our kids from falling victim to the “empathy gap.”

– Summary of my reflection on conversational AI chatbots for children (EDCI 301)

CBC – environmental impacts of AI (video)

A great video for kids.

To close, rapid fire – here are my thoughts on generative AI, its environmental impact, and our free will:

Do we have any choice about using GenAI?

Yes and no. Yes, we can actively choose whether we put our work into Gemini or MagicSchool or ChatGPT – no one is making you do that (except, maybe, this class when we did the AI Workbook – haha!). However, I do think we as teachers have a duty to understand how AI works and how our students are being exposed to it. I want to prepare my students for safety and success in our increasingly artificially intelligent world, and I would be doing them a serious disservice by shutting AI out of my classroom.

Can we avoid its use?

Increasingly, no. AI is pervasive, and even if we’re not using it, someone else is – whether that be our colleagues, administrators, students, or the sources we trust for our information (news outlets, academic articles, even search engines with that annoying “AI Overview” function).

Will the tech giants solve the power and water overconsumption?

My pessimistic answer is NO WAY! I can’t envision a world where public concern/outrage about AI’s environmental impacts outweighs the profit motive for tech companies. People know that AI is using so much water and power, but that’s not stopping us. Plus, in big affluent cities in the Global North, we aren’t seeing the impacts of AI servers directly. To make tech giants take real action to mitigate their environmental impact, people would need to mobilize by boycotting AI. And I don’t see that happening, because (a) generative AI is so convenient, good for productivity, and fun, and (b) most AI users aren’t seeing the negative impacts of their AI use in a meaningful way.

That’s all! I’m excited to continue my learning journey about AI!