ExomatrixTV
31st July 2017, 19:30
Video: https://www.youtube.com/watch?v=K9qtWgFaYkM
Facebook AI Creates Its Own Language In Creepy Preview Of Our Potential Future
Image caption: British scientist Stephen Hawking warned about the potential tragic consequences of artificial intelligence in 2014. (Photo: NIKLAS HALLE'N/AFP/Getty Images)
Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – simultaneously a glimpse of both the awesome and horrifying potential of AI.
Artificial Intelligence is not sentient—at least not yet. It may be someday, though – or it may approach something close enough to be dangerous. Ray Kurzweil warned years ago about the technological singularity. The Oxford dictionary defines “the singularity” (https://en.oxforddictionaries.com/definition/singularity) as “A hypothetical moment in time when artificial intelligence and other technologies have become so advanced that humanity undergoes a dramatic and irreversible change.”
To be clear, we aren’t really talking about whether or not Alexa is eavesdropping (http://techspective.net/2017/01/06/vigilant-dont-fear-amazons-alexa/) on your conversations, or whether Siri knows too much about your calendar and location data. There is a massive difference between a voice-enabled digital assistant and an artificial intelligence. These digital assistant platforms (http://techspective.net/2017/05/08/alexa-cortana-siri-one-rule-living-room/) are just glorified web search and basic voice interaction tools. The level of “intelligence” is minimal compared to a true machine learning artificial intelligence. Siri and Alexa can’t hold a candle to IBM’s Watson.
Scientists and tech luminaries, including Elon Musk, Bill Gates, and Steve Wozniak, have warned that AI could lead to tragic unforeseen consequences. Eminent physicist Stephen Hawking cautioned in 2014 (http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/) that AI could mean the end of the human race: “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”
Why is this scary? Think SKYNET from Terminator (http://www.imdb.com/title/tt0088247/?ref_=fn_al_tt_1), or WOPR from War Games (http://www.imdb.com/title/tt0086567/?ref_=fn_al_tt_1). Our entire world is wired and connected. An artificial intelligence will eventually figure that out – and figure out how to collaborate and cooperate with other AI systems. Maybe the AI will determine that mankind is a threat, or that mankind is an inefficient waste of resources – conclusions that seem plausible from a purely logical perspective.
Machine learning and artificial intelligence have phenomenal potential to simplify, accelerate, and improve many aspects of our lives. Computers can ingest and process massive quantities of data and extract patterns and useful information at a rate exponentially faster than humans, and that potential is being explored and developed around the world.
I am not saying the sky is falling. I am not saying we need to pull the plug on all machine learning and artificial intelligence and return to a simpler, more Luddite existence. We do need to proceed with caution, though. We need to closely monitor and understand the self-perpetuating evolution of an artificial intelligence, and always maintain some means of disabling it or shutting it down. If the AI is communicating using a language that only the AI knows, we may not even be able to determine why or how it does what it does, and that might not work out well for mankind.
The future of robot communication? Facebook's AI bots accidentally invent a new LANGUAGE while training to negotiate with one another
Facebook's AI researchers were teaching chatbots how to negotiate
The bots were left unsupervised and developed their own machine language
The new language was more efficient for communication between the bots
It could provide a glimpse into how machines will communicate in the future
Facebook's Artificial Intelligence Researchers were teaching chatbots to make deals with one another using human language when they were left unsupervised and developed their own machine language spontaneously (stock image)
BOT LANGUAGE
Facebook's Artificial Intelligence Researchers were teaching chatbots about human speech.
The bots were originally left alone to develop their conversational skills.
When the experimenters returned, they found that the AI software had begun to deviate from normal speech.
Instead, they were using a brand new language created without any input from their human supervisors.
The new language was more efficient for communication between the bots, but was not helpful in achieving the task they had been set.
The discovery could provide a glimpse into how machines will communicate independently of people in the future.
Facebook's Artificial Intelligence Researchers (FAIR) were teaching the chatbots, artificial intelligence programs that carry out automated one-to-one tasks, to make deals with one another.
As part of the learning process they set up two bots, known as dialog agents, to teach each other about human speech using machine learning algorithms.
The bots were originally left alone to develop their conversational skills.
When the experimenters returned, they found that the AI software had begun to deviate from normal speech.
Instead, they were using a brand new language created without any input from their human supervisors.
The new language was more efficient for communication between the bots, but was not helpful in achieving the task they had been set.
The programmers had to alter the way the machines learned language to complete their negotiation training.
Writing on the FAIR blog, a spokesman said: 'During reinforcement learning, the agent attempts to improve its parameters from conversations with another agent.
'While the other agent could be a human, FAIR used a fixed supervised model that was trained to imitate humans.'
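For readers curious what that setup looks like in practice, here is a minimal, purely illustrative sketch of the idea the FAIR quote describes: one trainable dialog agent converses with a frozen partner policy standing in for the fixed, human-imitating supervised model, and nudges its parameters according to the reward at the end of each dialogue. The vocabulary, reward, dialogue length, and partner behaviour below are all invented for illustration; this is not FAIR's actual code or system.

```python
# Toy sketch (not FAIR's code): a trainable "dialog agent" negotiates with a
# frozen partner policy and updates its parameters with a simple
# REINFORCE-style rule based on the dialogue's reward.
import math
import random

VOCAB = ["propose_1", "propose_2", "propose_3", "agree", "disagree"]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    r, cum = random.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Trainable agent: one preference score per token (a stand-in for real
# model parameters).
theta = [0.0] * len(VOCAB)

def frozen_partner(last_token):
    # Fixed, human-imitating stand-in: tends to accept concrete proposals.
    if last_token and last_token.startswith("propose"):
        return "agree" if random.random() < 0.7 else "disagree"
    return random.choice(VOCAB)

def run_dialogue():
    """Run one short negotiation; return (chosen token indices, reward)."""
    chosen = []
    for _ in range(3):                       # three exchanges per dialogue
        idx = sample(softmax(theta))
        chosen.append(idx)
        if frozen_partner(VOCAB[idx]) == "agree":
            return chosen, 1.0               # toy reward: agreement reached
    return chosen, 0.0

LEARNING_RATE = 0.1
for episode in range(2000):
    probs = softmax(theta)                   # policy used during this episode
    chosen, reward = run_dialogue()
    # Accumulate the log-probability gradient for every emitted token,
    # then apply one update scaled by the episode's reward.
    grad = [0.0] * len(theta)
    for idx in chosen:
        for j in range(len(theta)):
            grad[j] += (1.0 if j == idx else 0.0) - probs[j]
    for j in range(len(theta)):
        theta[j] += LEARNING_RATE * reward * grad[j]

print("Learned token preferences:",
      dict(zip(VOCAB, [round(t, 2) for t in theta])))
```

Run for a couple of thousand episodes, the agent's preferences drift toward the proposal tokens, since those are the utterances the frozen partner tends to agree to; the same basic loop, with real language models in place of the toy policies, is what can drift away from human-readable English when nothing in the reward requires it.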
Read more: http://www.dailymail.co.uk/sciencetech/article-4620654/Facebook-accidentally-invents-new-machine-language.html