Artificial Intelligence and other emerging technologies offer a range of opportunities for improving the wellbeing of school-aged children, but they also pose risks that need to be carefully managed. In determining how best to respond to the opportunities and challenges of emerging technologies, it is important that we navigate a balanced pathway between ‘moral panic’ and naïve ‘techno-enthusiasm’.
Benefits
The ability to work with Artificial Intelligence in almost all aspects of human endeavour will be an essential capability in the future, from designing and developing a new product, to running a business, to solving an engineering problem, to writing a report, and a myriad of other use cases. Equipping students with the generative AI literacies they need for future work and life success will be a fundamental responsibility of any education system. Not having these capabilities could substantially limit a person’s opportunities, adversely affecting their wellbeing.
Artificial Intelligence can fulfil a number of roles for school-aged children. For instance, it has the potential to provide every child with a personalised tutor for every subject that they study, which would be a tremendous boon for child wellbeing. Artificial Intelligence can also assume the role of a guide on the side, or coach, providing students with ideas and advice about how to respond to situations that they may otherwise be uncomfortable discussing with someone else. As generative AI continues to improve, it will be able to provide personalised assistance in almost any area of human activity, for instance sports training, cooking, or relationship advice – the possibilities are almost limitless.
Research indicates that people with lower skill levels can benefit most from generative AI (for example, in a study of law students), meaning that if we can place these powerful tools in the hands of the more disadvantaged segments of our society, they could help to close the equity gap. Note that significant training and education would also be required for this strategy to succeed.
Artificial Intelligence has profound potential to support students with special needs. The recent release of GPT-4o, with the capacity to natively see, hear and speak, means that people with vision, hearing or speech impairments can use AI tools to help them navigate and interact with the world.
Immersive Virtual Reality is another emerging technology with the potential to benefit students, by providing them with access to learning experiences that would otherwise be unavailable. Students can experience what it is like to visit the other side of the world, work in a science lab, or be an archaeologist in Egypt. Immersive Virtual Reality also offers immense opportunity for people to develop empathy for others through life-like, visceral experiences. The range of experiences that may be offered using IVR, and the heightened engagement it affords, can benefit wellbeing by expanding the repository of experiences from which people can draw, potentially enabling them to make better vocational and relational decisions.
In sum, emerging technologies such as Artificial Intelligence have immense potential to benefit the wellbeing of school-aged children, if access is provided and they are used in ways that enable positive educational experiences.
Issues
However, there are a number of ways in which emerging technologies such as Artificial Intelligence may adversely affect the wellbeing of school-aged children.
Students may be exposed to biases, inaccuracies and misleading information through their use of AI platforms. While AI providers continue to improve the performance of their platforms in efforts to remove bias and inaccuracy, fully eliminating these problems is a difficult undertaking. Almost all information could be considered to include biases and inaccuracies, and the best safeguard that we can provide for our children is to help them learn how to identify biases and inaccuracies so that they do not incorporate them into their own thinking and work.
To the extent that Artificial Intelligence and other technologies can be amplifiers of productivity and opportunity, there are significant equity issues relating to access to these tools. If access to the highest quality and most powerful artificial intelligence technologies is only provided to those who can afford it, the socio-economic gap will widen, entrenching disadvantage and disempowering the people who most need support.
There are continual concerns about over-use of screens by children. Safety concerns are often associated with the use of screens, for instance relating to cyberbullying, cyberstalking and so on. To date, these risks have generally arisen through social media at large rather than through the use of Artificial Intelligence. Because generative AI platforms have so far been provided by reputable organisations with the objective of offering a service that is valued by their users, there have not yet been instances of AI providing malicious advice or misusing user data for malicious purposes. However, the onus will fall on adults to evaluate the quality of the AI platforms that children and young people use, to safeguard student wellbeing.
Screen addiction is another issue. Digital games, social media and even self-paced online learning platforms often include interactive elements in their interfaces that are designed to provide a small dopamine reward for use. However, it is important that we disentangle positive and educational screen uses from unproductive and unhealthy screen uses, encouraging students to choose the former over the latter. Adults also need to take responsibility for filtering the sorts of technologies that children and young adults can access, depending on their age.
Distraction while children use screens is an ongoing concern for teachers and parents, and is often associated with a reduced ability to concentrate. Using generative artificial intelligence need not increase distraction, though the extent to which students can maintain control over their focus depends on their ability to self-regulate their attention. Physiologically, research indicates that it is not the use of screens as such that affects people’s eyesight, but rather the amount of close-up reading and the lack of time spent outdoors.
Another key risk is that student reliance on generative AI may constrain their learning. Students may choose to use AI as an answer machine rather than a learning machine. To the extent that students copy and paste responses to their learning tasks from generative AI, they will learn little from their education and be of less value to the workforce and society. Once again, helping students to self-regulate their use of generative AI will be critical.
Learning from the past to prepare for the future
Historically, technologies that enable people to access information more easily and distribute it in rich media forms have been the ones to transform society (for instance, the printing press, digital cameras, and so on). On that basis, Artificial Intelligence, Immersive Virtual Reality, Augmented Reality and Smart Glasses are all technologies that we can expect to play an increasing role in our future.
There are a number of analogies that we can draw from the past to help inform our interpretation of AI and other emerging technologies, including how best to navigate the use of technologies by children and young people. Artificial Intelligence can be likened to earlier technological innovations such as calculators, spellcheckers and Wikipedia. In each case there were concerns that the technology should not be used at all in education, because it was able to perform tasks that students were required to complete. At the same time, concerns were raised about whether the knowledge and skills that the technology could now perform were still relevant to learn. However, over time, we came to realise that:
- the knowledge and skills that technology can provide are still important for humans to learn, to develop their minds and gain a better understanding of the world
- those technologies could be used to help students learn the underlying skills or knowledge (arithmetic, spelling, historical facts)
- it was appropriate to use those technologies when higher-order problem-solving tasks were being completed (for instance, solving real-world mathematical problems or writing a report).
Being successful in a world with increasingly powerful Artificial Intelligence and other emerging technologies will depend on children and young people developing strong AI literacies. These include the ability to understand how AI works, specify tasks accurately, critique the information that AI provides, and understand the limitations of AI technologies.
In a future saturated with AI technologies that can match and often surpass human performance, people will need to concentrate more on being human as they offload much of the cognitive work to machines. In addition to the 4Cs (critical thinking, creativity, communication and collaboration), students will need to focus more on the human and values-oriented aspects of our world, which can be summarised as the four Es:
- Ethics – the value systems that underpin appropriate use and acknowledgement of AI tools in work and life
- Empathy – the ability to interpret the potential responses of other human beings and to act accordingly
- Enterprise – the self-motivation to make the best use of AI technologies to effect positive impact
- Engagement – taking responsibility for their learning and proactively contributing to society.
Most importantly, students will need to have the self-regulation capabilities to monitor and appropriately adjust their use of technologies.
Developing these capabilities will require extensive restructuring and reprioritisation within our education system. We will need to provide access to technology for all young people, along with extensive training that equips them with the knowledge, skills and dispositions to thrive in an AI world.