Following the recent advice issued by Ofsted that leaders must ensure the use of AI does “not have a detrimental effect” on their school’s outcomes, decisions or quality of provision, automated technology is yet again at the forefront of conversation in the EdTech world.

A threat to humanity and a scourge on intellectualism and creativity? Or a futuristic tool with the potential to change society for the better? Whether you love it, hate it, or simply can’t wrap your head around it, AI is here to stay. But just how serious are problems with AI in education? We investigate whether automated technology is more of a help or a hindrance…

Plagiarism

When it comes to using AI in schools and universities, plagiarism is a common concern. The increased popularity of tools such as ChatGPT, which can generate content and essays at the click of a button, has raised worries about cheating.

With access to this software, there may be a heightened inclination to submit unoriginal work without understanding the material - a practice that many people argue undermines the integrity of the education system. 

To combat this risk, schools and colleges must work with EdTech providers to install robust anti-plagiarism software and teach students the value of strong academic ethics - reiterating the difference between artificial intelligence helping them learn and helping them cheat. It’s also worth noting that AI can sharpen students’ fact-checking and accuracy skills: content generated by ChatGPT is prone to factual errors and fabricated details, and therefore cannot be relied upon without verification.

Data privacy concerns 

Data privacy is another prevalent issue that arises from the use of artificial intelligence in the education sector. 

It’s important to note that AI in schools and colleges extends beyond assistive software like ChatGPT - it is used to track student performance and daily attendance, and is increasingly being introduced to tackle the rising problem of absenteeism, which surged as a result of the global pandemic.

That’s not to mention the information that well-intentioned users may voluntarily enter into platforms without realising that the data they submit can then become available to everyone else using the platform too. There are three potentially high-risk problems with this:

  • Safeguarding - Data falling into the wrong hands presents a clear safeguarding issue. Without mitigating processes or control measures, there is a risk that users may inadvertently agree to the sharing of personal information, including material of a sensitive nature. That’s because data sharing is embedded into many AI tools (it’s how providers train their models to improve over time). Unless users explicitly opt out, information is likely being shared simply through use of the platform, under implicit acceptance of the terms of service.
  • Data ownership - Parents need assurance that personal information is not being sold, exposed or misused, in line with the commitments made under GDPR and data security policies.
  • Cybersecurity - The extensive data needed for AI systems to function could also increase the risk of cyber attacks on multi-academy trusts - with hackers using erroneously shared details to gain access to sensitive information and disrupt educational activities.

To help alleviate these worries, it is important that education providers comply with national regulations and take proactive steps to improve data security. This can be achieved by working with trusted EdTech companies that specialise in creating safe and reliable IT solutions for multi-academy trusts, schools and colleges.

Algorithmic bias

With AI, there is an additional risk of bias and discrimination - particularly with software that lacks transparency.

Many people fear that algorithms could end up perpetuating discrimination by reinforcing and amplifying existing stereotypes and prejudices. A recent investigation by the National Education Association (NEA) uncovered numerous problems with AI tools in schools, including detectors falsely flagging essays written by non-native English speakers as AI-generated.

Another study carried out by the Massachusetts Institute of Technology found that some automated language models treat “flight attendant” and “secretary” as feminine jobs, while “fisherman” and “lawyer” are treated as masculine.

Furthermore, there are fears that the automated collection of data could lead to a culture of surveillance and the profiling of marginalised communities. To tackle wider issues of gender and racial bias, it’s important that teachers engage in these conversations with their students, as well as encourage the use of software that is safe, trustworthy and transparent.


Dependency culture

With the growing societal dependence on technology, there is a concern that students may excessively rely on software such as ChatGPT - and that this will carry on into life after higher education. For example, many critics have argued that if students increasingly use AI to write essays or solve complex problems without developing their analytical skills, they may struggle in real-world scenarios that require critical thinking. 

This over-reliance is further regarded as a threat not only to traditional teaching methods but also to the development of critical thinking skills and academic authenticity - creating a dependency culture in the education sector. 

But how do we fix this problem? AI is here to stay for the foreseeable future - so it’s time to embrace the endless possibilities of this technology, and work towards striking a balance between maximising the efficiency of tools like ChatGPT and reinforcing the importance of cognitive development and independent learning.

A force for good?

While it’s important to address legitimate concerns about the pitfalls of automation, it’s also worth noting that the growing use of AI technology in education presents a significant opportunity to unleash creative potential.

Rather than being regarded as a quick fix that strips young minds of their imagination, motivation and hard work, software such as ChatGPT should instead be recognised as an enhancement tool to reinforce these crucial qualities. It’s important to keep in mind that the purpose of ChatGPT is to be assistive. When it comes to essays and assessments, it gives students more time to focus on exploring concepts and honing their ideas by simplifying ‘the boring bits’. This can lead to growth in student performance and instil a sense of confidence in pupils who have bright ideas but may struggle to articulate them. 

Additionally, with the ability to provide targeted feedback and identify areas that warrant improvement, AI can benefit teachers by relieving them of administrative responsibilities and freeing up more time to spend with their pupils.

To maximise these potential benefits, it’s critical that education providers incorporate AI into their wider institutional IT strategy. By learning how to make the most of this new technology, students and teachers can adapt to an evolving world where hybrid human-technology output is becoming the norm.

If you’re concerned about the impact of AI on your school, Novatia can support you. We understand that while ICT is vital to achieving your organisation’s educational vision, so is safeguarding students and protecting data. Specialising in cybersecurity, data audits and strategy, we work hard to keep your schools safe - we can help with:

  • Crafting a tailored AI strategy aligned with your educational objectives
  • Defining a clear vision for integrating AI in the classroom, including practical use cases and curriculum enhancements
  • Providing guidance and training for staff on best practices in AI implementation and educating them on privacy and safeguarding protocols

To discover more about what we do, please get in touch with us today.