The threat of Artificial General Intelligence

USAF-drone
Reaper drone sits at Kandahar airbase in Afghanistan in 2018. Photograph: Shah Marai/AFP/Getty Images

Artificial Intelligence (AI) scholar Kate Crawford, discussing the Reith Lecture by Stuart Russell in December 2021, was sceptical that Artificial General Intelligence (AGI) could ever be capable of doing everything better than a human, as much of what we do relies upon emotions machines do not possess.

Stephen Hawkin predicted AI could be the best or the worst thing to ever happen to humanity.

Just last June, a US air force AI-controlled drone had used “highly unexpected strategies to achieve its goal”. Reports a drone in an AI simulation had decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission were not quite what happened, as the AI system had been instructed not to kill the operator. So instead, it started destroying the communication tower that the operator used to communicate with the drone to stop it from killing its target.

The risk isn’t that AGI will outperform us in every way, but that it will outperform us in ways that could end our civilisation. It doesn’t have to care enough for another individual to make their packed lunch to go to school, or replicate every altruistic thing we do to be effective; AGI just needs to have or develop the capability to follow an objective to a logical but unintended conclusion.

Living with AI – Rutherford and Fry BBC R4, Dec 2021: 
https://www.bbc.co.uk/sounds/play/m00128xd?partner=uk.co.bbc&origin=share-mobile

US air force denies running simulation in which AI drone ‘killed’ operator, Guardian, 2nd June 2023:
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test

Published 8 April 2024By admin

Categorized as Uncategorized