Stephen Hawking and Elon Musk have signed an open letter warning that more precautions need to be taken around the further development of artificial intelligence (AI). 

The letter, which is backed by dozens of other scientists, entrepreneurs and investors, specifically states that there needs to be a greater focus on the safety and social benefits associated with AI. 

Honda's humanoid robot Asimo is an example of how humans are making increasingly intelligent robots ©Flickr/Ars Technica

The letter, together with an attached research paper from the Future of Life Institute (FLI) recommending how scientists should develop AI, comes amid growing fears that machines are going to surpass the capabilities of humans in the jobs market and many other areas of life. 

It argues that scientists and technologists need to safely and carefully coordinate and communicate advancements in AI to ensure it does not grow beyond humanity's control.

“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls,” the FLI’s letter says. “Our AI systems must do what we want them to do.” 

The FLI was established last year by several volunteers, including Skype co-founder Jaan Tallinn. It was set up to boost research into how AI will shape humanity's future, while also assessing the risks it presents. 


Musk, the co-founder of SpaceX and Tesla, sits on the FLI's scientific advisory board alongside actor Morgan Freeman and Hawking, the world-renowned Cambridge University professor. Musk has previously said that uncontrolled development of AI could be "potentially more dangerous than nukes". 

Experts at some of the world's biggest tech corporations - including from within IBM's Watson supercomputer team, Google, Microsoft Research and Amazon - have also signed the letter.

Other signatories include the entrepreneurs behind artificial intelligence companies DeepMind, which was acquired by Google last year, and Vicarious.

Last month Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race.

"It would take off on its own, and re-design itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

While the letter raises concerns around the development of AI, it also points out that there are many benefits to be reaped if it is developed correctly. 

“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” the letter reads. “The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.”

Indeed, AI technology is already built into devices we use in our everyday lives. For example, Siri, the intelligent personal assistant that sits inside iPhones and iPads, is underpinned by AI developed by Apple, while Google's self-driving vehicles also rely heavily on AI. According to the FT, more than 150 startups in Silicon Valley are working on AI today.

The FLI warns that a greater focus on the social ramifications of AI is necessary as the field attracts more investment and tech firms start to realise the rewards that can be obtained from creating self-thinking computers. 

“Many economists and computer scientists agree that there is valuable research to be done on how to maximise the economic benefits of AI while mitigating adverse effects, which could include increased inequality and unemployment,” writes the FLI in the paper.
