Monday 12 January 2015

Artificial intelligence experts sign open letter to protect mankind from machines

The Future of Life Institute wants humanity to tread lightly while on the road to human-level artificial intelligence.
"Charlie" is an ape-like robotic system that walks on four limbs, demonstrated here in March 2014 in Hanover, Germany. The robot could conceivably be used in the kind of rough terrain found on the moon, or it could be a stepping stone toward humanity's destruction.
Getty Images
We're certainly decades away from the technological prowess to develop our very own sociopathic supercomputer that will enslave mankind, but artificial intelligence experts are already preparing for the worst when, not if, the singularity happens.
AI experts around the globe are signing an open letter, put forth Sunday by the Future of Life Institute, that pledges to safely and carefully coordinate and communicate about the progress of the field to ensure it does not grow beyond humanity's control. The signatories already include co-founders of DeepMind, the British AI company purchased by Google in January 2014, MIT professors, and experts at some of technology's biggest corporations, including members of IBM's Watson supercomputer team and Microsoft Research.
"Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls," the letter's summary reads. Attached to the letter is a research document outlining where the pitfalls lie and what needs to be established to continue safely pursuing AI.
The most immediate concerns for the Future of Life Institute are areas like machine ethics and self-driving cars -- will our vehicles be able to minimize risk without killing their drivers in the process? -- and autonomous weapons systems, among other problematic applications of AI. But the long-term plan is to stop treating fictional dystopias as pure fantasy and to begin seriously addressing the possibility that an intelligence greater than our own could one day begin acting against its programming.
The Future of Life Institute is a volunteer-only research organization whose primary goal is mitigating the potential risks of human-level man-made intelligence that may then advance exponentially. In other words, it's an early form of the Resistance from the "Terminator" films, trying to stave off Skynet before it inevitably destroys us. It was founded by scores of decorated mathematicians and computer science experts around the world, chiefly Jaan Tallinn, a co-founder of Skype, and MIT professor Max Tegmark.
SpaceX and Tesla CEO Elon Musk, who sits on the institute's Scientific Advisory Board, has been vocal in the last couple of years about the risks of AI development, which he has said may be more dangerous than nuclear weapons. He called it "summoning the demon" in an MIT talk in October 2014, and he has actively invested in the space in order to keep an eye on it.
"I'm increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish," Musk said at the time. Famed physicist Stephen Hawking, too, is wary of AI. He used last year's Johnny Depp film "Transcendence," which centered on conceptualizing what a post-human intelligence looks like, to talk about the dangers of AI.
"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," Hawking co-wrote in an article for the Independent in May 2014, alongside Future of Life Institute members Tegmark, Stuart Russell and Frank Wilczek.
"Whereas the short-term impact of AI depends on who controls it," they added, "the long-term impact depends on whether it can be controlled at all."