The Future of Life Institute (FLI) has announced it will use a $10 million (£6 million) grant from PayPal cofounder Elon Musk to fund 37 research projects that will aim to ensure artificial intelligence (AI) is kept “beneficial”.
A further $1.5 million will go towards building an AI research centre, which will be run by Oxford and Cambridge universities in the UK. The money for the centre is being provided by the Open Philanthropy Project.
"There are reasons to believe that unregulated and unconstrained development could incur significant dangers, both from 'bad actors' like irresponsible governments and from the unprecedented capability of the technology itself," said Oxford University's Nick Bostrom.
"The centre will focus explicitly on the long-term impacts of AI, the strategic implications of powerful AI systems as they come to exceed human capabilities in most domains of interest, and the policy responses that could best be used to mitigate the potential risks of this technology."
The FLI is funding a plethora of projects that aim to ensure intelligent robots don’t become dangerous.
Some of the studies will look at how ethics and human values can be integrated into artificial intelligence agents.
The FLI received over 300 funding applications from AI researchers globally, the organisation said.
Six of the 37 projects being funded are in the UK.
A number of well-known public figures have spoken out on the possible threats that increasingly sophisticated robots present to humanity, including Microsoft founder Bill Gates and renowned Cambridge physicist Stephen Hawking.
But such figures need to be careful how they communicate the potential risks of AI.
"The danger with the Terminator scenario isn't that it will happen, but that it distracts from the real issues posed by future AI", said FLI president Max Tegmark. "We're staying focused, and the 37 teams supported by today's grants should help solve such real issues."
Musk, who made billions through cofounding companies like Tesla Motors and PayPal, donated to the FLI in January this year.
"Here are all these leading AI researchers saying that AI safety is important", he said. "I agree with them, so I'm committing $10m to support research aimed at keeping AI beneficial for humanity."