Generative AI: the Benefits, Risks and the BSM Response

  • 2023-24

In a recent article Eric Naiman, Professor of Literature at the University of California, Berkeley, says he has noticed a sudden difference in the essays submitted by his students.

In thirty years, he says, he does not recall any student using the word ‘delve’. And yet abruptly many essays now use this term, together with newly prominent words such as ‘intricate’, ‘complex’ and ‘multifaceted’.

The reason for this change? Naiman squarely points the finger at ChatGPT. In just eighteen months, it has changed the digital landscape, affecting working practices and academic research.

AI is nothing new, of course. We have been using such things as spell checkers and calculators for many years. But Generative AI is radically different.

What is Generative AI? According to the European Parliament in 2023, it is ‘the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity’.

By trawling the vast resources of the Internet, GenAI can create new text, images and music. It operates by using prompts. The more specific the prompt, the more precise the response it produces.

The question for many institutions, including the BSM, is how we address it. We cannot pretend to control it. I’m not sure anybody would make that claim. But we can seek to manage its use within the framework of the school.

Our Aims are as follows:

  • To acknowledge that GenAI exists, and to manage rather than prohibit its use
  • To harness the potential benefits for Teaching and Learning
  • To address the risks
  • To promote responsible & ethical use

The main benefits of Generative AI seem to be that it can save time, reduce workload, correct grammar and spelling, help generate resources, and brainstorm ideas.

The key risks are ethical and legal – namely, the risk of student malpractice (simply using GenAI to write essays and produce work for them); the inaccuracy and bias of the answers it produces; the safeguarding of children; and data privacy.

Obviously as a school we want to maximise the benefits and minimise the risks.

This requires us to educate pupils, staff and even parents; to manage and monitor its use; and to ensure that it aligns with the values of academic integrity.

It also involves creating a culture of honesty, with no secrets about how people use it. This will help us model good practice, and arrive at a position where we feel neither overwhelmed by the technology, nor become over-reliant on it.

Ultimately the only justified use of Generative AI in a school is that it improves learning. If it does not do this, then something is going wrong.

The teaching of GenAI very much depends on age-appropriateness. In the Primary School, there are plans to embed elements of it in the curriculum.

In the Senior School, Generative AI material is already taught strategically through Informatics lessons from Year 7 to 9, with key skills developed.

In Years 10–13, dedicated lessons on GenAI are delivered during PSHE and in Core time for the IB Diploma.

It is stressed that students must use approved tools, available through school accounts.

We do not recommend ChatGPT as a school because, until students are 18, they need permission from their parents to use it. Instead, Copilot, as part of the Microsoft suite, is our preferred AI tool.

Students are taught how best to use prompts – for example, asking for a list of 3–5 bullet points on a very specific topic.

Generative AI raises several e-Safety concerns. Risks include the collection of private or sensitive data from children below the appropriate age.

Authoritative-sounding and human-like information delivered to unknowing children can easily deceive. Worse, children can form emotional and harmful attachments to AI avatars.

Our school Academic Integrity policy has been revised so that students are now required to reference any use of Generative AI, especially in formal coursework related to public examinations.

Students are asked for example in Extended Essays to identify the platform, the date accessed, and the prompt used.

We advise strongly against submitting work to GenAI for feedback because the results might not be accurate or trustworthy. Moreover, students may then lose control of the work, risking accusations of plagiarism if GenAI reproduces the content for others answering similar assignments.

Generative AI has driven a growth in self-help chatbots, for example Woebot and Limbic Access, through which people can seek mental health advice. Again, this is potentially beneficial, but it also carries risks.

One of the most spectacular uses of GenAI is mimicking reality, taking people and making them say things they never actually said in a way that seems very convincing.

The ‘photograph’ below, for instance, is not a real person but a composite created by GenAI.

In 2023, the Internet Watch Foundation investigated a record 392,660 reports of suspected child sexual abuse imagery – a significant rise since 2022. Much of this content is 'self-generated', but the ease of altering an image, video or voice places people at risk of reputational damage and emotional abuse.

Issues of safety, ethics, privacy and data protection all pre-date GenAI. What is new, though, is the ease with which images, audio and text can be manipulated, and the ease with which we can be deceived by such content.

A bright light in this murky landscape is the recent partnership forged between OpenAI and Common Sense Media. This partnership is leading to a system of product reviews that can label GenAI platforms with the equivalent of "nutritional" information, rating them for accuracy, transparency and privacy.

Generative AI may change everything. Or perhaps it is just a moment similar to the advent of television, computers, or the Internet – merely the latest in a series of technological developments that at first we find threatening, then we embrace.

Whatever GenAI represents, it is right that we start to have a mature and informed conversation about it, right that we assess its risks and benefits, and right that we establish a framework for its use.

It is important that we ensure learning is enhanced and safety maintained – always a delicate path to navigate, but one we are determined to pursue.

Chris Greenhalgh
Principal and CEO

  • AI
  • Education
  • Innovation