- AI safety summit in UK
- Bletchley Declaration endorsed
- Calls for global collaboration
Over a hundred political and business leaders are attending the two-day AI safety summit in the United Kingdom, including Elon Musk, OpenAI’s Sam Altman, and Google DeepMind’s Demis Hassabis.
Bletchley Declaration on AI Risks
China and the US have signed a declaration on addressing AI’s “catastrophic” threats.
International Collaboration for AI Safety
Global AI collaboration is stressed in the Bletchley Declaration, signed by 28 governments, including the world’s major AI powers.
It was released on the inaugural day of an unprecedented summit on AI safety, coordinated by Rishi Sunak. The summit commenced with a video address from the King.
The agreement takes its name from Bletchley Park, where British codebreakers worked during World War II.
The government asserts that the agreement accomplishes significant goals of the safety summit by guaranteeing the “collective management” of potential AI threats and the “safe and responsible development and deployment” of the technology.
Given their prominence in the field and their jurisdiction over many of the world’s leading developers (including ChatGPT creator OpenAI and the Beijing tech giant Baidu), the participation of the United States and China is deemed crucial.
In inviting China to the event, Mr. Sunak disregarded criticism from within his own party and from his supporters.
According to Rahima Mahmut of Stop Uyghur Genocide, Beijing employed the technology as a means of “repression.”
China is accused of holding one million Uyghurs captive in “re-education” camps in Xinjiang. Members of Parliament have accused Beijing of genocide against the province’s Uyghurs and other minorities.
France, Japan, South Korea, and Saudi Arabia are among the other nations endorsing the declaration.
The declaration lists dangers including privacy, bias, misinformation, and potentially “catastrophic” risks from biotechnology and cybersecurity.
It states that such dangers are “best mitigated through international cooperation.”
Participants have agreed to cooperate on AI safety research and to convene for subsequent summits, beginning with a smaller virtual event co-hosted by the United Kingdom and South Korea in six months.
They will hold another summit in person in France one year from now.
Challenges and Criticisms
Mr. Sunak referred to the declaration as a “landmark achievement,” but it will not appease critics who cautioned him prior to the summit that he was preoccupied with speculative future dangers.
The TUC and dozens of academics and NGOs wrote to him last week accusing him of “marginalising” those most vulnerable to AI’s harms.
Small businesses and creatives, who have been among the most vocal in their concerns about AI, reportedly felt “suffocated” and “squeezed out” by the influence and power of large technology companies.
Calling the summit on AI “a missed opportunity,” the letter stated: “For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now.”
At the meeting, prominent AI experts and civil society organisations called for greater urgency in addressing long-standing issues.
Responsible AI UK and the Ada Lovelace Institute, among others, signed a joint statement stating that regulation should “ensure accountability” and prevent AI companies from “grading their own homework.”
Global Leaders in Attendance
The two-day UK event draws over 100 political and business leaders, including Elon Musk, Sam Altman, and Demis Hassabis.
US Vice President Kamala Harris, European Commission President Ursula von der Leyen, and, controversially, a Chinese technology minister are also in attendance. However, German Chancellor Olaf Scholz, Canadian Prime Minister Justin Trudeau, and French President Emmanuel Macron are not present.
Following the summit’s conclusion on Thursday, Mr. Sunak and Mr. Musk will hold a live discussion on X (formerly Twitter).