
The Importance of Critical Thinking in K12 Schools in the Age of AI

Nicholas Norman
Advocate of developments in EdTech and their global impact.

The theme of Safer Internet Day 2025, "Too good to be true? Protecting yourself and others from scams online", is as pertinent as ever as the age of Artificial Intelligence (AI) takes hold.

We’re still discovering the impact of Large Language Models (LLMs) on K12 learning environments; in the case of online scams, however, there is clear cause for concern.

Addressing the challenges faced by leaders, IT administrators, and teachers, and educating students about the threats that exist online, requires a clear understanding of the issues and clear communication about them: what they entail, what forms they may take, and how to deal with them. Simply creating awareness of their possibility is not the full story.

Providing actionable steps that reduce the chance of these threats occurring, and keep us as safe online as possible, is the important second step.

Simply instilling fear and concern in students ramps up anxiety and doesn’t address the issues at hand. Knowledge is an important first step; real power, however, comes from knowing how to deal with these situations and trusting one’s capacity to confront them.

This applies whether you are an educator, a student, or a parent.

We’ve put together three key types of online scams that should be cause for concern:

  1. Phishing and Impersonation Scams
  2. Misinformation and Disinformation Amplification
  3. AI-Powered Social Engineering and Grooming

This isn’t another blog about how AI image generation creates disfigured hands, though we will touch on that.

You’ll learn more about these types of scams and the forms in which educators and students may encounter them.

Phishing and Impersonation Scams

Impersonation and phishing scams are as old as commerce itself.

Long before the internet, and before any modern technology for that matter, people have been subject to deceptive practices that strive to mislead them, gain access to their personal information, and steal from them.

AI Contributing to the Problem

In its modern AI form, this age-old tactic produces material that is far more readily and easily developed at scale, almost endlessly.

AI tools, and LLMs in particular, can generate realistic phishing emails, messages, and audio and even video “deepfakes” that impersonate trusted people with unfortunate ease and increasing accuracy.

This could be parents, teachers, school leaders, administrators, the list goes on. Young students especially can be deceived by these sophisticated scams.

Types of scams that lean on this are:

  • Fake School Emails
  • Impersonating Teachers or Staff
  • Impersonating Parents
  • Phishing through Educational Platforms

AI’s Role in This

AI algorithms can analyse and process vast, almost inconceivable amounts of data to develop these scams, adding to their realism and making them highly convincing.

Deepfake technology can create realistic videos and audio recordings that are becoming increasingly difficult to identify, and increasingly easy to be manipulated or tricked by.

What can Teachers & Parents do?

Knowing about these issues is one thing, but how are we able to combat them?

Educating students is an important first step: knowing that information can be easily faked builds awareness that the problem exists.

The second is to outline how to ensure that the information received is correct.

A lot of AI-created images and videos contain inconsistencies and inaccuracies, whether it’s extra fingers, misplaced toes, or disproportionate facial features.

Another tell is the lighting in an AI-generated image: shadows that aren’t consistent throughout the scene are harder to put a finger on, but they create a feeling of discomfort when looking at the image.

In the context of faked emails, the subtleties are harder to identify, especially if the mimicked sender has a prolific online presence. AI models may be trained on that person’s style of writing and can easily create messages that mimic it.

Students should be taught to question the messages they receive and consider whether they actually have an outstanding library book, or whatever other pretext is being used.

Misinformation and Disinformation

One of the main abilities of AI is the creation of content, whether that’s text, images or video as we touched on above.

The AIs that many of us engage with are LLMs, which are essentially designed to always provide an answer to your question. You may have experienced this yourself: regardless of the context or accuracy of the available information, and however incorrect it may be, the model will produce a result.

This can lead to some outright incorrect information and answers which, without further interrogation, can mislead students.

However, this assumes honest requests of the models. When people intentionally set out to generate misinformation and disinformation, the models are fully capable of producing huge swathes of content that is entirely misleading and incorrect.

Designed to Respond

There are a few instances where they may be stumped, or where their training data predates the event in question; in general, however, the model will produce information in response to whatever query it receives.

This creates a situation where a great deal of misinformation can be easily requested and created, with malicious actors generating factually incorrect information to mislead people and sway them to their cause.

The rate at which AI can create this content, in the form of articles, social media posts, videos, and so forth, is staggering, and it allows information that is simply incorrect or misleading to proliferate.

Examples of these types of actions include:

  • Fake News About School Policies
  • Rumours and Cyberbullying
  • Manipulated Images and Video
  • Propaganda that Targets Students

Critical thinking and discernment about what you read and consume online are more important than ever in the world of AI. Consider the source of the information: who they are, what their motivations may be, and how the information serves those motivations.

Verifying Information in the Age of AI

The skill of academic reading is more important to hone than ever in order to discern if the information presented to the reader is legitimate or intentionally misleading.

Further to this, teachers can stress the importance of verifying the information students consume before passing it on, to ensure that it is factually correct.

Teaching students to consult multiple sources, prefer reliable and trusted ones, and cross-reference what they read is an important habit of informed consumption.

Mitigation Strategies: Developing Critical Thinking Skills

As we covered above, critical thinking skills are vitally important in the modern world for identifying scams, fake information, and purposefully emotive content.

Learning to interrogate the information encountered, and to separate oneself from the purposefully emotive triggers it uses, requires a degree of discernment that grows with the development of critical thinking skills.

Educating students on the importance of identifying sensational and emotionally charged content can make a big difference to their degree of engagement with the information.

Identifying the tone of the content, noticing how they feel in response to what they are reading, and taking steps to fully process the information before reacting or sharing something that could be incorrect are important skills, and ones that require consistent work.

Students ought to develop adaptable, flexible thinking and be able to engage with new challenges and unfamiliar information on their feet, without leaning too heavily on external input (or querying an AI for help).

This is an important skill set for becoming an independent thinker, which in turn underpins creative and innovative thinking. It applies to both online content and online scam attempts.

Scammers will leverage an emotionally charged topic to trigger a reaction before we can fully consider what is being asked of us, whether that’s insisting a family member is injured or missing, or claiming someone is being threatened and requires immediate assistance.

Taking the time to fully understand the situation at hand, and to recognise the emotional response the scammer is seeking, can in itself prevent further steps that might put someone in danger.

This Walden University article outlines how to teach critical thinking in elementary education, and this Edutopia article outlines how to help K12 students hone these skills.

Social Engineering and Grooming

A more ominous and uncomfortable topic in the realm of AI is that of social engineering and grooming, and the degree that this is impacted by generative tools.

In line with the misinformation and phishing scams highlighted above, AI makes the creation and distribution of fake profiles easier, and potentially more complete, than ever.

AI chatbots can simulate human conversation on social media or online gaming platforms, mimicking a fellow student or an online personality to coax information out of unsuspecting victims for the gain of a malicious actor.

And generative models that create imagery and video from prompts provide scammers with the means to produce footage that appears to confirm their scam.

First Step: Awareness

As with all of these concerns, awareness is an important first step in educating students on these challenges.

Some types of scams could include:

  • Personalised Phishing Attacks
  • AI-Generated Fake Friends
  • Deepfake Audio and Video Calls

But as I’ve already mentioned, simply informing students is not enough. And even though we want students to be aware of these concerns, instilling paranoia about every interaction and nugget of information online is an unfortunate lens through which to view the world.

Teaching students to approach online information with a critical mindset can have positive, long-term benefits.

As students develop their ability to interpret and analyse information, they will be able to better understand the motives and tone of online content, and make informed decisions.

Proactive Behaviour

The ability to identify emotionally triggering content, and to notice questions that imply the asker is after more information than they appear to be, are great instincts for students to develop.

Though scams, phishing, social engineering, and so forth should be brought to students’ attention, making them the intimidating and scary forefront of everything online can cause undue distress as students develop their worldview.

Scepticism is an important skill, and it presents a different tone and approach to the world than one of fear and distrust.

Additionally, the UK Safer Internet Centre provides further educational resources to help guide students, parents, and teachers in navigating these challenges.

Intervention as a Preventative Measure

In combination with all these means of equipping students with discernment and fostering the skill of critical thinking, schools require proactive measures to be in place for when the discernment of students and teachers fails.

Ensuring that measures are in place to prevent the ready sharing of student information online, that access to known and suspected phishing sites is blocked outright, and that gaming platforms with shady reputations are not accessible goes a long way towards keeping students safe and sound online.

Platforms such as Mobile Guardian offer the tools K12 schools require to ensure their students remain safe online.

Through features such as application management, safer web filtering, category-based web filtering, and location-based filtering, schools can set sensible limits on the exploration students conduct online.

Further Coverage of AI in K12

Beyond the impact AI stands to have on scams and phishing attempts around the world, the behaviour of AI itself warrants further consideration.

We’ve discussed this in other blogs, such as the use of Artificial Intelligence in Education, the Negative effects of AI in education, and how to manage AI applications in the classroom.

These blogs touch on the issues of biases in training data of generative AI tools, how to limit or restrict their use in digital learning environments, and how to get the best out of digital learning programs in general.
