Some schools and universities in other nations have already banned powerful AI systems like ChatGPT, but the UK government has indicated it will adopt a “light touch” approach to regulating the technology.
Leading figures in the United Kingdom’s education sector stated that systems such as OpenAI’s ChatGPT and Google’s Bard were evolving “much too quickly” and that guidance on how classrooms should adapt was falling behind.
They stated that the government alone would be unable to provide the necessary guidance to schools, with ministers previously acknowledging that any attempt to draft AI-related legislation would quickly become obsolete given the rate of technological advancement.
Rishi Sunak stated that while “guardrails” are necessary to minimize AI’s risks to society, the government aims to optimize AI’s benefits to transform the United Kingdom into a “science and technology superpower.”
In a letter to The Times with more than 60 signatures, education figures asserted that ministers have not been “capable or willing” to provide the “guidance and counsel” they require.
They wrote: “We have no faith in the ability of large digital companies to self-regulate in the interest of students, faculty, and institutions.
“Neither in the past nor currently has the government demonstrated the ability or willingness to do so.”
They added, “The reality is that AI is evolving far too quickly for the government or the legislature to provide the schools with real-time guidance.”
The headteachers behind the letter, led by Sir Anthony Seldon of Epsom College, intend to establish their own “cross-sector body” of teachers from their schools, guided by digital and AI experts, to advise on which AI developments are likely to be beneficial or harmful.
They would strive to ensure that systems such as ChatGPT benefit students rather than tech companies.
Some foreign workplaces, colleges, and universities have already banned generative AI such as ChatGPT.
While such systems have impressed observers with their ability to pass exams, fix software bugs, and write speeches, they have also demonstrated the capacity to produce incorrect or offensive responses.
Elon Musk joined a group of AI experts in calling for a pause on the training of large language models, while Sundar Pichai, Google’s chief executive, admitted that the potential dangers “keep me awake at night.”
The letter in The Times follows AI pioneer Professor Stuart Russell’s warning that “the stakes couldn’t be higher” as governments contend with the optimal regulatory approach.
He asked, “How do you maintain power over entities that are more powerful than you – forever?”
“If you do not have an answer, you should cease your investigation. It’s that straightforward.”
This month, fellow British computer scientist Geoffrey Hinton, the “Godfather of AI,” resigned from Google with a warning about the technology’s peril to humanity.