LABOUR will take strict steps to regulate big technology firms developing cutting-edge artificial intelligence systems, it said today.
Shadow science secretary Peter Kyle set out requirements that Labour would impose on companies pioneering “frontier AI.”
They would include reporting the intent to train models beyond certain capabilities, safety testing under independent oversight and tough information security protections.
Mr Kyle claimed that PM Rishi Sunak, despite his much-hyped AI summit, was being “left behind by the US and EU, who are moving ahead with real safeguards on the technology.”
He said: “A Labour government would urgently introduce binding regulation of the small group of companies developing the most powerful AI models that could, if left unchecked, spread misinformation, undermine elections and help terrorists build weapons.
“AI has the potential to transform the world and deliver life-changing benefits for working people. From delivering earlier cancer diagnosis, to relieving traffic congestion, AI can be a force for good.”
The Prime Minister, however, told day two of his summit at Bletchley Park that AI could pose risks on a scale comparable to nuclear war or pandemics.
“That is why as leaders we have a responsibility to act, to take the steps to protect people and that is exactly what we are doing,” Mr Sunak said.
Science Secretary Michelle Donelan had earlier warned that the greatest menace was a Terminator-style “loss of control” over AI.
“That is the one that I am most concerned about because it is the one that would result in the gravest ramifications,” she said.