Fundy Gemini
6th March 2012, 23:30
I found this article interesting, and couldn't help thinking of Whitley Strieber's book "The Key", in which he begins to wonder whether he may in fact be dealing with an advanced artificial intelligence.
http://www.msnbc.msn.com/id/46590591/ns/technology_and_science-innovation/#.T1ZKInlOVad
**Snip**
Never send a human to guard a machine. Even casual conversation with a human guard could allow an AI to use psychological tricks such as befriending or blackmail. The AI might offer to reward a human with perfect health, immortality, or perhaps even bring back dead family and friends. Alternatively, it could threaten to do terrible things to the human once it "inevitably" escapes.
The safest approach for communication might only allow the AI to respond in a multiple-choice fashion to help solve specific science or technology problems, Yampolskiy explained. That would harness the power of AI as a super-intelligent oracle.

Despite all the safeguards, many researchers think it's impossible to keep a clever AI locked up forever. A past experiment by Eliezer Yudkowsky, a research fellow at the Singularity Institute for Artificial Intelligence, suggested that mere human-level intelligence could escape from an "AI Box" scenario — although Yampolskiy pointed out that the test wasn't done in the most scientific way.
Still, Yampolskiy argues strongly for keeping AI bottled up rather than rushing headlong to free our new machine overlords. But if the AI reaches the point where it rises beyond human scientific understanding to deploy powers such as precognition (knowledge of the future), telepathy or psychokinesis, all bets are off.