The world's governments are racing for AI dominance, but today a few of them came together to say they would rather cooperate when it comes to mitigating risks.
At the AI Safety Summit in Bletchley Park in England, British Technology Secretary Michelle Donelan announced a new policy document called the Bletchley Declaration, which aims to build global consensus on how to address the risks that AI poses now and as it develops in the future. She also said the summit will become a regular event: another meeting will take place in Korea in six months, she said, and another in France six months after that.
Like the tone of the conference itself, the document published today is relatively high-level.
“To achieve this, we affirm that, for the good of all, AI should be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and responsible,” the paper said. It also draws specific attention to the kinds of large language models being developed by companies like OpenAI, Meta and Google, and the particular risks of misuse they could pose.
“Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks — as well as relevant specific narrow AI that could exhibit capabilities that cause harm — which match or exceed the capabilities present in today’s most advanced models,” the document said.
There were also some concrete developments.
Gina Raimondo, the US Secretary of Commerce, announced a new AI Safety Institute that would be housed within the Department of Commerce and specifically under the department’s National Institute of Standards and Technology (NIST).
The aim, she said, would be for this organization to work closely with other AI safety groups set up by other governments, including the safety institute that Britain also plans to establish.
“We have to get to work and work between our institutions to [achieve] policy coordination around the world,” Raimondo said.
The political leaders in today’s opening plenary included not only representatives of the world’s largest economies, but also some speaking on behalf of developing countries, collectively the Global South.
The lineup included Wu Zhaohui, China’s Vice Minister of Science and Technology; Vera Jourova, the European Commission’s Vice-President for Values and Transparency; Rajeev Chandrasekhar, India’s Minister of State for Electronics and Information Technology; Omar Sultan al Olama, the UAE’s Minister of State for Artificial Intelligence; and Bosun Tijani, Nigeria’s Minister of Technology. Collectively, they spoke about inclusivity and accountability, but with so many open questions about how those principles will be implemented, their commitment remains to be proven.
“I worry that a race to create powerful machines will outpace our ability to protect society,” said Ian Hogarth, a founder, investor and engineer who currently chairs the UK government’s task force on foundational AI models and who played a major hand in putting together this conference. “No one in this room knows for sure how or whether these next leaps in computing power will translate into benefits or harms. We tried to ground ourselves [concerns of risks] in empiricism and accuracy [but] our current lack of understanding… is quite striking.
“History will judge our ability to meet this challenge. It will judge us on what we do and say in the next two days.”