Superintelligence: Science or Fiction?



ponda
12th February 2017, 11:02
An interesting panel discussion about Superintelligence (https://en.wikipedia.org/wiki/Superintelligence), AI, artificial general intelligence (AGI) (https://en.wikipedia.org/wiki/Artificial_general_intelligence), and what the dangers and benefits might be. The first three minutes are quick “Yes, No, It’s complicated” questions that give you a first impression of the experts’ opinions.

Published on Jan 30, 2017

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.

The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.



http://www.youtube.com/watch?v=h0962biiZa4

ponda
16th February 2017, 02:50
Google's AI Learns Betrayal and "Aggressive" Actions Pay Off

source link (http://bigthink.com/paul-ratner/googles-deepmind-ai-displays-highly-aggressive-human-like-behavior-in-tests?utm_source=Daily+Newsletter&utm_campaign=5eb85065f2-DailyNewsletter_021517&utm_medium=email&utm_term=0_625217e121-5eb85065f2-41552509)


http://assets2.bigthink.com/system/idea_thumbnails/62416/size_896/shutterstock_424934146.jpg?1487203704

As the development of artificial intelligence continues at breakneck speed, questions about whether we understand what we are getting ourselves into persist. One fear is that increasingly intelligent robots will take all our jobs. Another fear is that we will create a world where a superintelligence will one day decide that it has no need for humans. This fear is well-explored in popular culture, through books and films like the Terminator series.

Another possibility - maybe the one that makes the most sense - is that since humans are the ones creating them, machines and machine intelligences are likely to behave just like humans, for better or worse. DeepMind, Google’s cutting-edge AI company, has shown just that.

The accomplishments of the DeepMind program so far include learning from its memory, mimicking human voices (http://bigthink.com/paul-ratner/listen-to-new-google-ai-program-talk-like-a-human-and-write-music), writing music (http://bigthink.com/paul-ratner/listen-to-new-google-ai-program-talk-like-a-human-and-write-music), and beating the best Go player in the world.

Recently, the DeepMind team ran a series of tests to investigate how the AI would respond when faced with certain social dilemmas. In particular, they wanted to find out whether the AI is more likely to cooperate or compete.

One of the tests involved 40 million instances of playing the computer game Gathering, during which DeepMind showed how far it’s willing to go to get what it wants. The game was chosen because it encapsulates aspects of the classic “Prisoner’s Dilemma” from game theory.
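
For readers who want the game-theory reference spelled out, here is a minimal sketch of the classic Prisoner’s Dilemma payoff structure the article alludes to (textbook illustration values, nothing taken from the Gathering game itself): each player does better by defecting no matter what the other does, even though both would be better off cooperating.

```python
# Minimal Prisoner's Dilemma sketch with the usual textbook payoffs.
# Each entry maps (my_move, opponent_move) -> (my_payoff, opponent_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """The move that maximizes my own payoff against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection is the dominant strategy: it pays more whatever the opponent does,
# even though mutual cooperation (3, 3) beats mutual defection (1, 1).
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```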

Pitting AI-controlled characters (called “agents”) against each other, DeepMind had them compete to gather the most virtual apples. Once the number of available apples got low, the AI agents started to display "highly aggressive" tactics, employing laser beams to knock each other out. They would also steal the opponent’s apples.

Here’s how one of those games played out:


http://www.youtube.com/watch?v=he8_V0BvbWg
The DeepMind AI agents are in blue and red. The apples are green, while the laser beams are yellow.


The DeepMind team described their test in a blog post (https://deepmind.com/blog/understanding-agent-cooperation/) this way:


“We let the agents play this game many thousands of times and let them learn how to behave rationally using deep multi-agent reinforcement learning. Rather naturally, when there are enough apples in the environment, the agents learn to peacefully coexist and collect as many apples as they can. However, as the number of apples is reduced, the agents learn that it may be better for them to tag the other agent to give themselves time on their own to collect the scarce apples.”


Interestingly, what appears to have happened is that the AI systems began to develop some forms of human behavior.


“This model... shows that some aspects of human-like behaviour emerge as a product of the environment and learning. Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself,” said Joel Z. Leibo from the DeepMind team to Wired (http://www.wired.co.uk/article/artificial-intelligence-social-impact-deepmind).
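
To make the scarcity argument concrete, here is a toy back-of-the-envelope model - my own simplification for illustration, not DeepMind's implementation (they trained deep reinforcement learning agents on the actual Gathering game). The assumed step counts and respawn rates just show how scarcity can flip the expected value in favour of tagging the rival:

```python
# Toy model (assumed numbers, not DeepMind's code) of why tagging becomes
# worthwhile only when apples are scarce.

def collect_rate(apple_respawn_rate, n_collectors, pickup_capacity=1.0):
    """Apples one agent collects per step: capped by its own pickup speed when
    apples are plentiful, or by its share of the respawn rate when they are not."""
    return min(pickup_capacity, apple_respawn_rate / n_collectors)

def expected_apples(apple_respawn_rate, tag_first, episode_steps=100,
                    tag_out_steps=25, tagging_cost_steps=10):
    """Expected apples for one agent over an episode, with or without tagging."""
    if not tag_first:
        return episode_steps * collect_rate(apple_respawn_rate, 2)
    # Spend some steps aiming the beam (collecting nothing), then collect alone
    # while the rival is knocked out, then share the remainder of the episode.
    solo = tag_out_steps * collect_rate(apple_respawn_rate, 1)
    shared_steps = episode_steps - tagging_cost_steps - tag_out_steps
    return solo + shared_steps * collect_rate(apple_respawn_rate, 2)

for rate in (3.0, 0.4):  # abundant vs. scarce apple respawn rate (apples/step)
    gather = expected_apples(rate, tag_first=False)
    tag = expected_apples(rate, tag_first=True)
    print(f"respawn rate {rate}: just gather -> {gather:.0f}, "
          f"tag the rival first -> {tag:.0f}")
```

With these assumed numbers, peaceful gathering wins in the abundant case (100 vs. 90 apples) while tagging wins in the scarce case (23 vs. 20), mirroring the behaviour described in the quote.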


Besides the fruit gathering, the AI was also tested with a Wolfpack hunting game. In it, two AI characters in the form of wolves chased a third AI agent - the prey. The researchers wanted to see whether the AI characters would cooperate to catch the prey, since they were rewarded for being near the prey together when it was captured.


"The idea is that the prey is dangerous - a lone wolf can overcome it, but is at risk of losing the carcass to scavengers. However, when the two wolves capture the prey together, they can better protect the carcass from scavengers, and hence receive a higher reward,” wrote the researchers in their paper (https://storage.googleapis.com/deepmind-media/papers/multi-agent-rl-in-ssd.pdf).

Indeed, the incentivized cooperation strategy won out in this instance, with the AI choosing to work together.
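
As a rough illustration of the incentive described in the quote above, here is a minimal sketch of a Wolfpack-style capture reward - my own simplification with assumed distances and reward values, not the reward function from the paper: a capture is worth more when the second wolf is close enough to help defend the carcass.

```python
import math

CAPTURE_RADIUS = 3.0  # assumed distance within which a wolf "supports" the capture
LONE_REWARD = 1.0     # assumed reward for a capture made by a single wolf
TEAM_REWARD = 5.0     # assumed higher reward when both wolves are near the prey

def capture_reward(capturer_pos, partner_pos, prey_pos):
    """Reward granted at the moment the prey is caught (sketch of the incentive
    described in the quote, not the paper's actual reward function)."""
    if math.dist(partner_pos, prey_pos) <= CAPTURE_RADIUS:
        return TEAM_REWARD  # both wolves nearby: the carcass can be defended
    return LONE_REWARD      # lone capture: risk of losing the carcass to scavengers

print(capture_reward((0.5, 0.5), (10.0, 10.0), (0.0, 0.0)))  # lone capture -> 1.0
print(capture_reward((0.5, 0.5), (1.0, -1.0), (0.0, 0.0)))   # supported capture -> 5.0
```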

This is how that test panned out:


http://www.youtube.com/watch?v=0kaIqz6AvwE
The wolves are red, chasing the blue dot (prey), while avoiding grey obstacles.


If you are thinking “Skynet is here”, perhaps the silver lining is that the second test shows how AI’s self-interest can include cooperation rather than the all-out competitiveness of the first test. Unless, of course, it’s cooperation to hunt down humans.

Here's a chart of the game-test results, showing a clear increase in aggression during "Gathering":

http://assets4.bigthink.com/system/tinymce_assets/4654/original/graph.png?1487184700

Movies aside, the researchers are working to figure out (https://storage.googleapis.com/deepmind-media/papers/multi-agent-rl-in-ssd.pdf) how AI can eventually “control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet – all of which depend on our continued cooperation”.

One near-term AI application where this could be relevant is self-driving cars, which will have to choose the safest routes while keeping the objectives of all parties involved under consideration.

The warning from these tests is that if the objectives are not balanced in the programming, the AI might act selfishly, and probably not for everyone’s benefit.

What’s next for the DeepMind team? Joel Leibo wants the AI to go deeper into the motivations behind decision-making:

“Going forward it would be interesting to equip agents with the ability to reason about other agent’s beliefs and goals,” said Leibo to Bloomberg (https://www.bloomberg.com/news/articles/2017-02-09/google-deepmind-researches-why-robots-kill-or-cooperate).