
Thread: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

  1. Link to Post #61
    Ecuador Unsubscribed
    Join Date
    3rd February 2011
    Location
    California
    Age
    36
    Posts
    1,584
    Thanks
    3,721
    Thanked 10,195 times in 1,429 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)





    Quote Posted by Jeffrey Sewell-Holloway (here)
    Internet connectivity trends: [chart not preserved]
    Global saturation of internet trends: [chart not preserved]

    Quote Posted by Jeffrey Sewell-Holloway (here)

    [From War in the Age of Intelligent Machines]:
    Similarly, after a certain critical point is reached in the number of computers connected to a network (a threshold of connectivity), the network itself becomes capable of spontaneously generating computational processes not planned by its designers. For instance, in many computer networks (like the ARPANET, discussed in Chapter One), there is not a central computer handling the traffic of messages. Instead, the messages themselves possess enough "local intelligence" to find their way around in the net and reach their destination. In more recent schemes of network control, messages are not only allowed to travel on their own, but also to interact with each other to trade and barter resources (computer memory, processing time).
    De Landa was referring to the ARPANET back in 1991. Now, we have the internet in all of its interconnected totality. De Landa continues:
    In a very concrete sense, the development of a network capable of withstanding the pressures of war involved the creation of a scheme of control that would allow the network to self-organize. That is, in the ARPANET there is no centralized agency directing the traffic of information. Instead, the flows of information are allowed to organize themselves: "The controlling agent in a 'packet-switched' network like the ARPANET was not a central computer somewhere, not even the 'message processor' that mediated between computers, but the packets of information, the messages themselves." What this means is that the messages which circulate through the ARPANET contained enough "local intelligence" to find their own destination without the need of centralized traffic control.

    In short, the efficient management of information traffic in a computer network involved substituting a central source of command embodied in the hardware of some computer, by a form of "collective decision-making" embodied in the software of the machine: the packets of information themselves had to act as "independent software objects" and be allowed to make their own decisions regarding the best way of accomplishing their objectives. Although independent software objects have many functions and names (actors, demons, knowledge sources, etc.), we will call them all "demons," because they are not controlled by a master program or a central computer but rather "invoked" into action by changes in their environment. Demons are, indeed, a means of allowing a computer network to self-organize.

    [...]

    Demons are, indeed, beginning to form "computational societies" that resemble ecological systems such as insect colonies or social systems such as markets. Past a certain threshold of connectivity the membrane which computer networks are creating over the surface of the planet begins to "come to life." Independent software objects will soon begin to constitute even more complex computational societies in which demons trade with one another, bid and compete for resources, seed and spawn processes spontaneously and so on. The biosphere, as we have seen, is pregnant with singularities that spontaneously give rise to processes of self-organization. Similarly, the portion of the "mechanosphere" constituted by computer networks, once it has crossed a certain critical point of connectivity, begins to be inhabited by symmetry-breaking singularities, which give rise to emergent properties in the system. These systems "can encourage the development of intelligent [software] objects, but there is also a sense in which the systems themselves will become intelligent."
    As of late, over 60% of internet traffic is non-human.
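
    To make De Landa's "demons" a little more concrete, here's a toy Python sketch (my own illustration, not code from the book or from any real network): a few independent software objects that aren't driven by any master program, but are invoked by changes in a shared environment and bid against each other for a scarce resource, in the spirit of the passage above.

    Code:
    import random

    class Demon:
        def __init__(self, name, budget):
            self.name = name
            self.budget = budget        # resources the demon can spend (e.g. credits for CPU time)

        def triggered_by(self, event):
            # Each demon is "invoked" by changes in its environment, not by a central controller.
            return event["type"] in ("new_message", "idle_cpu")

        def bid(self, event):
            # Offer part of the budget for the right to handle this event.
            return round(random.uniform(0, self.budget), 2)

    def run(events, demons):
        # No central computer: each event invokes whichever demons notice it,
        # and the highest bidder wins the right to process it.
        for event in events:
            bidders = [(d.bid(event), d) for d in demons if d.triggered_by(event)]
            if not bidders:
                continue
            price, winner = max(bidders, key=lambda pair: pair[0])
            winner.budget -= price
            print(event["type"], "handled by", winner.name, "for a bid of", price)

    demons = [Demon("router-demon", 10.0), Demon("indexer-demon", 10.0), Demon("cache-demon", 10.0)]
    events = [{"type": "new_message"}, {"type": "idle_cpu"}, {"type": "new_message"}]
    run(events, demons)

    The point isn't the code itself; it's that nothing in the loop is "in charge," yet the traffic still gets handled.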



    See also: Internet Bot
    Last edited by Jeffrey; 13th December 2013 at 03:11.

  2. Link to Post #62
    Brazil Avalon Retired Member
    Join Date
    28th June 2011
    Location
    Belo Horizonte, Brazil
    Age
    40
    Posts
    3,857
    Thanks
    18,436
    Thanked 24,123 times in 3,535 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Hey Jeff,

    Check this out, mate.

    Google just bought Boston Dynamics, you know, that crazy robot making company.

    http://www.theverge.com/2013/12/14/5...oston-dynamics

    Raf.

  3. The Following 7 Users Say Thank You to RMorgan For This Post:

    AwakeInADream (17th December 2013), Bill Ryan (14th December 2013), Blake Elder (14th December 2013), Freed Fox (15th December 2013), Jeffrey (14th December 2013), Reinhard (11th March 2014), william r sanford72 (14th December 2013)

  4. Link to Post #63
    Australia Avalon Member Blake Elder's Avatar
    Join Date
    3rd February 2011
    Age
    51
    Posts
    42
    Thanks
    255
    Thanked 101 times in 33 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Yes, Google has been very busy lately, acquiring many robotics companies

    http://singularityhub.com/2013/12/08...even-startups/

    As the article mentions, they wish to take robotics to "the next level", making it far more accessible to everyone.
    Living on the Fringe of this consensus reality

  5. The Following 6 Users Say Thank You to Blake Elder For This Post:

    AwakeInADream (17th December 2013), Freed Fox (15th December 2013), Jeffrey (14th December 2013), Reinhard (11th March 2014), RMorgan (15th December 2013), william r sanford72 (14th December 2013)

  6. Link to Post #64
    Ecuador Unsubscribed
    Join Date
    3rd February 2011
    Location
    California
    Age
    36
    Posts
    1,584
    Thanks
    3,721
    Thanked 10,195 times in 1,429 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Hope I'm not pushing it with the cross-posting ...

    -----------

    Why the future doesn't need us.
    By Bill Joy | April 2000

    From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. I can date the onset of my unease to the day I met Ray Kurzweil, the deservedly famous inventor of the first reading machine for the blind and many other amazing things.

    Ray and I were both speakers at George Gilder's Telecosm conference, and I encountered him by chance in the bar of the hotel after both our sessions were over. I was sitting with John Searle, a Berkeley philosopher who studies consciousness. While we were talking, Ray approached and a conversation began, the subject of which haunts me to this day.

    I had missed Ray's talk and the subsequent panel that Ray and John had been on, and they now picked right up where they'd left off, with Ray saying that the rate of improvement of technology was going to accelerate and that we were going to become robots or fuse with robots or something like that, and John countering that this couldn't happen, because the robots couldn't be conscious.

    While I had heard such talk before, I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray's proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.

    It's easy to get jaded about such breakthroughs. We hear in the news almost every day of some kind of technological or scientific advance. Yet this was no ordinary prediction. In the hotel bar, Ray gave me a partial preprint of his then-forthcoming book The Age of Spiritual Machines, which outlined a utopia he foresaw - one in which humans gained near immortality by becoming one with robotic technology. On reading it, my sense of unease only intensified; I felt sure he had to be understating the dangers, understating the probability of a bad outcome along this path.

    I found myself most troubled by a passage detailing a dystopian scenario:
    THE NEW LUDDITE CHALLENGE

    First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

    If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machine's decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

    On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite - just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone's physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes "treatment" to cure his "problem." Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them "sublimate" their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.
    In the book, you don't discover until you turn the page that the author of this passage is Theodore Kaczynski - the Unabomber. I am no apologist for Kaczynski. His bombs killed three people during a 17-year terror campaign and wounded many others. One of his bombs gravely injured my friend David Gelernter, one of the most brilliant and visionary computer scientists of our time. Like many of my colleagues, I felt that I could easily have been the Unabomber's next target.

    Kaczynski's actions were murderous and, in my view, criminally insane. He is clearly a Luddite, but simply saying this does not dismiss his argument; as difficult as it is for me to acknowledge, I saw some merit in the reasoning in this single passage. I felt compelled to confront it.

    Kaczynski's dystopian vision describes unintended consequences, a well-known problem with the design and use of technology, and one that is clearly related to Murphy's law - "Anything that can go wrong, will." (Actually, this is Finagle's law, which in itself shows that Finagle was right.) Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.

    The cause of many such surprises seems clear: The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved.

    [...]

    At around the same time, I found Hans Moravec's book Robot: Mere Machine to Transcendent Mind. Moravec is one of the leaders in robotics research, and was a founder of the world's largest robotics research program, at Carnegie Mellon University. Robot gave me more material to try out on my friends - material surprisingly supportive of Kaczynski's argument. For example:
    The Short Run (Early 2000s)

    Biological species almost never survive encounters with superior competitors. Ten million years ago, South and North America were separated by a sunken Panama isthmus. South America, like Australia today, was populated by marsupial mammals, including pouched equivalents of rats, deer, and tigers. When the isthmus connecting North and South America rose, it took only a few thousand years for the northern placental species, with slightly more effective metabolisms and reproductive and nervous systems, to displace and eliminate almost all the southern marsupials.

    In a completely free marketplace, superior robots would surely affect humans as North American placentals affected South American marsupials (and as humans have affected countless species). Robotic industries would compete vigorously among themselves for matter, energy, and space, incidentally driving their price beyond human reach. Unable to afford the necessities of life, biological humans would be squeezed out of existence.

    There is probably some breathing room, because we do not live in a completely free marketplace. Government coerces non-market behavior, especially by collecting taxes. Judiciously applied, governmental coercion could support human populations in high style on the fruits of robot labor, perhaps for a long while.
    A textbook dystopia - and Moravec is just getting wound up. He goes on to discuss how our main job in the 21st century will be "ensuring continued cooperation from the robot industries" by passing laws decreeing that they be "nice," and to describe how seriously dangerous a human can be "once transformed into an unbounded super-intelligent robot." Moravec's view is that the robots will eventually succeed us - that humans clearly face extinction.

    [...]

    I was also reminded of the Borg of Star Trek, a hive of partly biological, partly robotic creatures with a strong destructive streak. Borg-like disasters are a staple of science fiction, so why hadn't I been more concerned about such robotic dystopias earlier? Why weren't other people more concerned about these nightmarish scenarios?

    Part of the answer certainly lies in our attitude toward the new - in our bias toward instant familiarity and unquestioning acceptance. Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies - robotics, genetic engineering, and nanotechnology - pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once - but one bot can become many, and quickly get out of control.

    Much of my work over the past 25 years has been on computer networking, where the sending and receiving of messages creates the opportunity for out-of-control replication. But while replication in a computer or a computer network can be a nuisance, at worst it disables a machine or takes down a network or network service. Uncontrolled self-replication in these newer technologies runs a much greater risk: a risk of substantial damage in the physical world.

    [...]

    The 21st-century technologies - genetics, nanotechnology, and robotics (GNR) - are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups.

    [...]

    I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.

    Nothing about the way I got involved with computers suggested to me that I was going to be facing these kinds of issues.

    [...]

    I excelled in mathematics in high school, and when I went to the University of Michigan as an undergraduate engineering student I took the advanced curriculum of the mathematics majors. Solving math problems was an exciting challenge, but when I discovered computers I found something much more interesting: a machine into which you could put a program that attempted to solve a problem, after which the machine quickly checked the solution. The computer had a clear notion of correct and incorrect, true and false. Were my ideas correct? The machine could tell me. This was very seductive.

    [...]

    From all this, I trust it is clear that I am not a Luddite. I have always, rather, had a strong belief in the value of the scientific search for truth and in the ability of great engineering to bring material progress. The Industrial Revolution has immeasurably improved everyone's life over the last couple hundred years, and I always expected my career to involve the building of worthwhile solutions to real problems, one problem at a time.

    I have not been disappointed. My work has had more impact than I had ever hoped for and has been more widely used than I could have reasonably expected. I have spent the last 20 years still trying to figure out how to make computers as reliable as I want them to be (they are not nearly there yet) and how to make them simple to use (a goal that has met with even less relative success). Despite some progress, the problems that remain seem even more daunting.

    But while I was aware of the moral dilemmas surrounding technology's consequences in fields like weapons research, I did not expect that I would confront such issues in my own field, or at least not so soon.

    Perhaps it is always hard to see the bigger impact while you are in the vortex of a change. Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science's quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own.

    I have long realized that the big advances in information technology come not from the work of computer scientists, computer architects, or electrical engineers, but from that of physical scientists. The physicists Stephen Wolfram and Brosl Hasslacher introduced me, in the early 1980s, to chaos theory and nonlinear systems. In the 1990s, I learned about complex systems from conversations with Danny Hillis, the biologist Stuart Kauffman, the Nobel-laureate physicist Murray Gell-Mann, and others. Most recently, Hasslacher and the electrical engineer and device physicist Mark Reed have been giving me insight into the incredible possibilities of molecular electronics.

    [...]

    But because of the recent rapid and radical progress in molecular electronics - where individual atoms and molecules replace lithographically drawn transistors - and related nanoscale technologies, we should be able to meet or exceed the Moore's law rate of progress for another 30 years. By 2030, we are likely to be able to build machines, in quantity, a million times as powerful as the personal computers of today - sufficient to implement the dreams of Kurzweil and Moravec.
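
    [A quick aside, not part of Joy's article: the "million times as powerful" figure is just compounded Moore's-law doubling. A back-of-the-envelope check in Python, assuming an 18-month doubling period, which the essay itself does not state:]

    Code:
    years = 30
    doubling_period_years = 1.5            # assumed 18-month doubling period
    doublings = years / doubling_period_years   # 20 doublings over 30 years
    print(2 ** doublings)                  # 1048576.0, i.e. roughly a million-fold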

    As this enormous computing power is combined with the manipulative advances of the physical sciences and the new, deep understandings in genetics, enormous transformative power is being unleashed. These combinations open up the opportunity to completely redesign the world, for better or worse: The replicating and evolving processes that have been confined to the natural world are about to become realms of human endeavor.

    In designing software and microprocessors, I have never had the feeling that I was designing an intelligent machine. The software and hardware is so fragile and the capabilities of the machine to "think" so clearly absent that, even as a possibility, this has always seemed very far in the future.

    But now, with the prospect of human-level computing power in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of the technology that may replace our species. How do I feel about this? Very uncomfortable. Having struggled my entire career to build reliable software systems, it seems to me more than likely that this future will not work out as well as some people may imagine. My personal experience suggests we tend to overestimate our design abilities.

    Given the incredible power of these new technologies, shouldn't we be asking how we can best coexist with them? And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn't we proceed with great caution?

    The dream of robotics is, first, that intelligent machines can do our work for us, allowing us lives of leisure, restoring us to Eden. Yet in his history of such ideas, Darwin Among the Machines, George Dyson warns: "In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines." As we have seen, Moravec agrees, believing we may well not survive the encounter with the superior robot species.

    How soon could such an intelligent robot be built? The coming advances in computing power seem to make it possible by 2030. And once an intelligent robot exists, it is only a small step to a robot species - to an intelligent robot that can make evolved copies of itself.

    A second dream of robotics is that we will gradually replace ourselves with our robotic technology, achieving near immortality by downloading our consciousnesses; it is this process that Danny Hillis thinks we will gradually get used to and that Ray Kurzweil elegantly details in The Age of Spiritual Machines.

    But if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human? It seems to me far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.

    Genetic engineering promises to revolutionize agriculture by increasing crop yields while reducing the use of pesticides; to create tens of thousands of novel species of bacteria, plants, viruses, and animals; to replace reproduction, or supplement it, with cloning; to create cures for many diseases, increasing our life span and our quality of life; and much, much more. We now know with certainty that these profound changes in the biological sciences are imminent and will challenge all our notions of what life is.

    Technologies such as human cloning have in particular raised our awareness of the profound ethical and moral issues we face. If, for example, we were to reengineer ourselves into several separate and unequal species using the power of genetic engineering, then we would threaten the notion of equality that is the very cornerstone of our democracy.

    [...]

    Awareness of the dangers inherent in genetic engineering is beginning to grow, as reflected in the Lovins' editorial. The general public is aware of, and uneasy about, genetically modified foods, and seems to be rejecting the notion that such foods should be permitted to be unlabeled.

    But genetic engineering technology is already very far along. As the Lovins note, the USDA has already approved about 50 genetically engineered crops for unlimited release; more than half of the world's soybeans and a third of its corn now contain genes spliced in from other forms of life.

    [...]

    Then, last summer, Brosl Hasslacher told me that nanoscale molecular electronics was now practical. This was new news, at least to me, and I think to many people - and it radically changed my opinion about nanotechnology. It sent me back to Engines of Creation. Rereading Drexler's work after more than 10 years, I was dismayed to realize how little I had remembered of its lengthy section called "Dangers and Hopes," including a discussion of how nanotechnologies can become "engines of destruction." Indeed, in my rereading of this cautionary material today, I am struck by how naive some of Drexler's safeguard proposals seem, and how much greater I judge the dangers to be now than even he seemed to then. (Having anticipated and described many technical and political problems with nanotechnology, Drexler started the Foresight Institute in the late 1980s "to help prepare society for anticipated advanced technologies" - most important, nanotechnology.)

    The enabling breakthrough to assemblers seems quite likely within the next 20 years. Molecular electronics - the new subfield of nanotechnology where individual molecules are circuit elements - should mature quickly and become enormously lucrative within this decade, causing a large incremental investment in all nanotechnologies.

    Unfortunately, as with nuclear technology, it is far easier to create destructive uses for nanotechnology than constructive ones. Nanotechnology has clear military and terrorist uses, and you need not be suicidal to release a massively destructive nanotechnological device - such devices can be built to be selectively destructive, affecting, for example, only a certain geographical area or a group of people who are genetically distinct.

    An immediate consequence of the Faustian bargain in obtaining the great power of nanotechnology is that we run a grave risk - the risk that we might destroy the biosphere on which all life depends.

    As Drexler explained:
    "Plants" with "leaves" no more efficient than today's solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous "bacteria" could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop - at least if we make no preparation. We have trouble enough controlling viruses and fruit flies.

    Among the cognoscenti of nanotechnology, this threat has become known as the "gray goo problem." Though masses of uncontrolled replicators need not be gray or gooey, the term "gray goo" emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be superior in an evolutionary sense, but this need not make them valuable.

    The gray goo threat makes one thing perfectly clear: We cannot afford certain kinds of accidents with replicating assemblers.
    Gray goo would surely be a depressing ending to our human adventure on Earth, far worse than mere fire or ice, and one that could stem from a simple laboratory accident. Oops.

    It is most of all the power of destructive self-replication in genetics, nanotechnology, and robotics (GNR) that should give us pause. Self-replication is the modus operandi of genetic engineering, which uses the machinery of the cell to replicate its designs, and the prime danger underlying gray goo in nanotechnology. Stories of run-amok robots like the Borg, replicating or mutating to escape from the ethical constraints imposed on them by their creators, are well established in our science fiction books and movies. It is even possible that self-replication may be more fundamental than we thought, and hence harder - or even impossible - to control. A recent article by Stuart Kauffman in Nature titled "Self-Replication: Even Peptides Do It" discusses the discovery that a 32-amino-acid peptide can "autocatalyse its own synthesis." We don't know how widespread this ability is, but Kauffman notes that it may hint at "a route to self-reproducing molecular systems on a basis far wider than Watson-Crick base-pairing."

    In truth, we have had in hand for years clear warnings of the dangers inherent in widespread knowledge of GNR technologies - of the possibility of knowledge alone enabling mass destruction. But these warnings haven't been widely publicized; the public discussions have been clearly inadequate. There is no profit in publicizing the dangers.

    The nuclear, biological, and chemical (NBC) technologies used in 20th-century weapons of mass destruction were and are largely military, developed in government laboratories. In sharp contrast, the 21st-century GNR technologies have clear commercial uses and are being developed almost exclusively by corporate enterprises. In this age of triumphant commercialism, technology - with science as its handmaiden - is delivering a series of almost magical inventions that are the most phenomenally lucrative ever seen. We are aggressively pursuing the promises of these new technologies within the now-unchallenged system of global capitalism and its manifold financial incentives and competitive pressures.
    This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself - as well as to vast numbers of others.

    It might be a familiar progression, transpiring on many worlds - a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, both on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.
    That is Carl Sagan, writing in 1994, in Pale Blue Dot, a book describing his vision of the human future in space. I am only now realizing how deep his insight was, and how sorely I miss, and will miss, his voice. For all its eloquence, Sagan's contribution was not least that of simple common sense - an attribute that, along with humility, many of the leading advocates of the 21st-century technologies seem to lack.

    I remember from my childhood that my grandmother was strongly against the overuse of antibiotics. She had worked since before the first World War as a nurse and had a commonsense attitude that taking antibiotics, unless they were absolutely necessary, was bad for you.

    It is not that she was an enemy of progress. She saw much progress in an almost 70-year nursing career; my grandfather, a diabetic, benefited greatly from the improved treatments that became available in his lifetime. But she, like many levelheaded people, would probably think it greatly arrogant for us, now, to be designing a robotic "replacement species," when we obviously have so much trouble making relatively simple things work, and so much trouble managing - or even understanding - ourselves.

    I realize now that she had an awareness of the nature of the order of life, and of the necessity of living with and respecting that order. With this respect comes a necessary humility that we, with our early-21st-century chutzpah, lack at our peril. The commonsense view, grounded in this respect, is often right, in advance of the scientific evidence. The clear fragility and inefficiencies of the human-made systems we have built should give us all pause; the fragility of the systems I have worked on certainly humbles me.

    We should have learned a lesson from the making of the first atomic bomb and the resulting arms race. We didn't do well then, and the parallels to our current situation are troubling.

    [...]

    We know that in preparing this first atomic test the physicists proceeded despite a large number of possible dangers. They were initially worried, based on a calculation by Edward Teller, that an atomic explosion might set fire to the atmosphere. A revised calculation reduced the danger of destroying the world to a three-in-a-million chance. (Teller says he was later able to dismiss the prospect of atmospheric ignition entirely.) Oppenheimer, though, was sufficiently concerned about the result of Trinity that he arranged for a possible evacuation of the southwest part of the state of New Mexico. And, of course, there was the clear danger of starting a nuclear arms race.

    Within a month of that first, successful test, two atomic bombs destroyed Hiroshima and Nagasaki [...] Yet the overriding truth was probably very simple: As the physicist Freeman Dyson later said, "The reason that it was dropped was just that nobody had the courage or the foresight to say no."

    It's important to realize how shocked the physicists were in the aftermath of the bombing of Hiroshima, on August 6, 1945. They describe a series of waves of emotion: first, a sense of fulfillment that the bomb worked, then horror at all the people that had been killed, and then a convincing feeling that on no account should another bomb be dropped. Yet of course another bomb was dropped, on Nagasaki, only three days after the bombing of Hiroshima.

    [...]

    Two years later, in 1948, Oppenheimer seemed to have reached another stage in his thinking, saying, "In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge they cannot lose."

    [...]

    Nearly 20 years ago, in the documentary The Day After Trinity, Freeman Dyson summarized the scientific attitudes that brought us to the nuclear precipice:
    "I have felt it myself. The glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it's there in your hands, to release this energy that fuels the stars, to let it do your bidding. To perform these miracles, to lift a million tons of rock into the sky. It is something that gives people an illusion of illimitable power, and it is, in some ways, responsible for all our troubles - this, what you might call technical arrogance, that overcomes people when they see what they can do with their minds."
    Now, as then, we are creators of new technologies and stars of the imagined future, driven - this time by great financial rewards and global competition - despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining.

    [...]

    In our time, how much danger do we face, not just from nuclear weapons, but from all of these technologies? How high are the extinction risks?

    The philosopher John Leslie has studied this question and concluded that the risk of human extinction is at least 30 percent, while Ray Kurzweil believes we have "a better than even chance of making it through," with the caveat that he has "always been accused of being an optimist." Not only are these estimates not encouraging, but they do not include the probability of many horrid outcomes that lie short of extinction.

    Faced with such assessments, some serious people are already suggesting that we simply move beyond Earth as quickly as possible. We would colonize the galaxy using von Neumann probes, which hop from star system to star system, replicating as they go. This step will almost certainly be necessary 5 billion years from now (or sooner if our solar system is disastrously impacted by the impending collision of our galaxy with the Andromeda galaxy within the next 3 billion years), but if we take Kurzweil and Moravec at their word it might be necessary by the middle of this century.

    What are the moral implications here? If we must move beyond Earth this quickly in order for the species to survive, who accepts the responsibility for the fate of those (most of us, after all) who are left behind? And even if we scatter to the stars, isn't it likely that we may take our problems with us or find, later, that they have followed us? The fate of our species on Earth and our fate in the galaxy seem inextricably linked.

    [...]

    Clarke continued: "Looking into my often cloudy crystal ball, I suspect that a total defense might indeed be possible in a century or so. But the technology involved would produce, as a by-product, weapons so terrible that no one would bother with anything as primitive as ballistic missiles."

    In Engines of Creation, Eric Drexler proposed that we build an active nano-technological shield - a form of immune system for the biosphere - to defend against dangerous replicators of all kinds that might escape from laboratories or otherwise be maliciously created. But the shield he proposed would itself be extremely dangerous - nothing could prevent it from developing autoimmune problems and attacking the biosphere itself.

    Similar difficulties apply to the construction of shields against robotics and genetic engineering. These technologies are too powerful to be shielded against in the time frame of interest; even if it were possible to implement defensive shields, the side effects of their development would be at least as dangerous as the technologies we are trying to protect against.

    These possibilities are all thus either undesirable or unachievable or both. The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.

    Yes, I know, knowledge is good, as is the search for new truths. We have been seeking knowledge since ancient times. Aristotle opened his Metaphysics with the simple statement: "All men by nature desire to know." We have, as a bedrock value in our society, long agreed on the value of open access to information, and recognize the problems that arise with attempts to restrict access to and development of knowledge. In recent times, we have come to revere scientific knowledge.

    But despite the strong historical precedents, if open access to and unlimited development of knowledge henceforth puts us all in clear danger of extinction, then common sense demands that we reexamine even these basic, long-held beliefs.

    It was Nietzsche who warned us, at the end of the 19th century, not only that God is dead but that "faith in science, which after all exists undeniably, cannot owe its origin to a calculus of utility; it must have originated in spite of the fact that the disutility and dangerousness of the 'will to truth,' of 'truth at any price' is proved to it constantly." It is this further danger that we now fully face - the consequences of our truth-seeking. The truth that science seeks can certainly be considered a dangerous substitute for God if it is likely to lead to our extinction.

    If we could agree, as a species, what we wanted, where we were headed, and why, then we would make our future much less dangerous - then we might understand what we can and should relinquish. Otherwise, we can easily imagine an arms race developing over GNR technologies, as it did with the NBC technologies in the 20th century. This is perhaps the greatest risk, for once such a race begins, it's very hard to end it. This time - unlike during the Manhattan Project - we aren't in a war, facing an implacable enemy that is threatening our civilization; we are driven, instead, by our habits, our desires, our economic system, and our competitive need to know.

    I believe that we all wish our course could be determined by our collective values, ethics, and morals. If we had gained more collective wisdom over the past few thousand years, then a dialogue to this end would be more practical, and the incredible powers we are about to unleash would not be nearly so troubling.

    One would think we might be driven to such a dialogue by our instinct for self-preservation. Individuals clearly have this desire, yet as a species our behavior seems to be not in our favor. In dealing with the nuclear threat, we often spoke dishonestly to ourselves and to each other, thereby greatly increasing the risks. Whether this was politically motivated, or because we chose not to think ahead, or because when faced with such grave threats we acted irrationally out of fear, I do not know, but it does not bode well.

    The new Pandora's boxes of genetics, nanotechnology, and robotics are almost open, yet we seem hardly to have noticed. Ideas can't be put back in a box; unlike uranium or plutonium, they don't need to be mined and refined, and they can be freely copied. Once they are out, they are out. Churchill remarked, in a famous left-handed compliment, that the American people and their leaders "invariably do the right thing, after they have examined every other alternative." In this case, however, we must act more presciently, as to do the right thing only at last may be to lose the chance to do it at all.

    As Thoreau said, "We do not ride on the railroad; it rides upon us"; and this is what we must fight, in our time. The question is, indeed, Which is to be master? Will we survive our technologies?

    We are being propelled into this new century with no plan, no control, no brakes. Have we already gone too far down the path to alter course? I don't believe so, but we aren't trying yet, and the last chance to assert control - the fail-safe point - is rapidly approaching. We have our first pet robots, as well as commercially available genetic engineering techniques, and our nanoscale techniques are advancing rapidly. While the development of these technologies proceeds through a number of steps, it isn't necessarily the case - as happened in the Manhattan Project and the Trinity test - that the last step in proving a technology is large and hard. The breakthrough to wild self-replication in robotics, genetic engineering, or nanotechnology could come suddenly, reprising the surprise we felt when we learned of the cloning of a mammal.

    [...]

    Verifying relinquishment will be a difficult problem, but not an unsolvable one. We are fortunate to have already done a lot of relevant work in the context of the BWC and other treaties. Our major task will be to apply this to technologies that are naturally much more commercial than military. The substantial need here is for transparency, as difficulty of verification is directly proportional to the difficulty of distinguishing relinquished from legitimate activities.

    I frankly believe that the situation in 1945 was simpler than the one we now face: The nuclear technologies were reasonably separable into commercial and military uses, and monitoring was aided by the nature of atomic tests and the ease with which radioactivity could be measured. Research on military applications could be performed at national laboratories such as Los Alamos, with the results kept secret as long as possible.

    The GNR technologies do not divide clearly into commercial and military uses; given their potential in the market, it's hard to imagine pursuing them only in national laboratories.

    [...]

    Verifying compliance will also require that scientists and engineers adopt a strong code of ethical conduct, resembling the Hippocratic oath, and that they have the courage to whistleblow as necessary, even at high personal cost. This would answer the call - 50 years after Hiroshima - by the Nobel laureate Hans Bethe, one of the most senior of the surviving members of the Manhattan Project, that all scientists "cease and desist from work creating, developing, improving, and manufacturing nuclear weapons and other weapons of potential mass destruction."

    In the 21st century, this requires vigilance and personal responsibility by those who would work on both NBC and GNR technologies to avoid implementing weapons of mass destruction and knowledge-enabled mass destruction.

    Thoreau also said that we will be "rich in proportion to the number of things which we can afford to let alone." We each seek to be happy, but it would seem worthwhile to question whether we need to take such a high risk of total destruction to gain yet more knowledge and yet more things; common sense says that there is a limit to our material needs - and that certain knowledge is too dangerous and is best forgone.

    Neither should we pursue near immortality without considering the costs, without considering the commensurate increase in the risk of extinction. Immortality, while perhaps the original, is certainly not the only possible utopian dream.

    I recently had the good fortune to meet the distinguished author and scholar Jacques Attali, whose book Lignes d'horizons (Millennium, in the English translation) helped inspire the Java and Jini approach to the coming age of pervasive computing, as previously described in this magazine. In his new book Fraternités, Attali describes how our dreams of utopia have changed over time:
    "At the dawn of societies, men saw their passage on Earth as nothing more than a labyrinth of pain, at the end of which stood a door leading, via their death, to the company of gods and to Eternity. With the Hebrews and then the Greeks, some men dared free themselves from theological demands and dream of an ideal City where Liberty would flourish. Others, noting the evolution of the market society, understood that the liberty of some would entail the alienation of others, and they sought Equality."
    Jacques helped me understand how these three different utopian goals exist in tension in our society today. He goes on to describe a fourth utopia, Fraternity, whose foundation is altruism. Fraternity alone associates individual happiness with the happiness of others, affording the promise of self-sustainment.

    This crystallized for me my problem with Kurzweil's dream. A technological approach to Eternity - near immortality through robotics - may not be the most desirable utopia, and its pursuit brings clear dangers. Maybe we should rethink our utopian choices.

    Where can we look for a new ethical basis to set our course? I have found the ideas in the book Ethics for the New Millennium, by the Dalai Lama, to be very helpful. As is perhaps well known but little heeded, the Dalai Lama argues that the most important thing is for us to conduct our lives with love and compassion for others, and that our societies need to develop a stronger notion of universal responsibility and of our interdependency; he proposes a standard of positive ethical conduct for individuals and societies that seems consonant with Attali's Fraternity utopia.

    The Dalai Lama further argues that we must understand what it is that makes people happy, and acknowledge the strong evidence that neither material progress nor the pursuit of the power of knowledge is the key - that there are limits to what science and the scientific pursuit alone can do.

    Our Western notion of happiness seems to come from the Greeks, who defined it as "the exercise of vital powers along lines of excellence in a life affording them scope."

    Clearly, we need to find meaningful challenges and sufficient scope in our lives if we are to be happy in whatever is to come. But I believe we must find alternative outlets for our creative forces, beyond the culture of perpetual economic growth; this growth has largely been a blessing for several hundred years, but it has not brought us unalloyed happiness, and we must now choose between the pursuit of unrestricted and undirected growth through science and technology and the clear accompanying dangers.

    It is now more than a year since my first encounter with Ray Kurzweil and John Searle. I see around me cause for hope in the voices for caution and relinquishment and in those people I have discovered who are as concerned as I am about our current predicament. I feel, too, a deepened sense of personal responsibility - not for the work I have already done, but for the work that I might yet do, at the confluence of the sciences.

    But many other people who know about the dangers still seem strangely silent. When pressed, they trot out the "this is nothing new" riposte - as if awareness of what could happen is response enough. They tell me, There are universities filled with bioethicists who study this stuff all day long. They say, All this has been written about before, and by experts. They complain, Your worries and your arguments are already old hat.

    I don't know where these people hide their fear. As an architect of complex systems I enter this arena as a generalist. But should this diminish my concerns? I am aware of how much has been written about, talked about, and lectured about so authoritatively. But does this mean it has reached people? Does this mean we can discount the dangers before us?
    Knowing is not a rationale for not acting.
    Can we doubt that knowledge has become a weapon we wield against ourselves?

    The experiences of the atomic scientists clearly show the need to take personal responsibility, the danger that things will move too fast, and the way in which a process can take on a life of its own. We can, as they did, create insurmountable problems in almost no time flat. We must do more thinking up front if we are not to be similarly surprised and shocked by the consequences of our inventions.

    [...]

    Each of us has our precious things, and as we care for them we locate the essence of our humanity. In the end, it is because of our great capacity for caring that I remain optimistic we will confront the dangerous issues now before us.

    My immediate hope is to participate in a much larger discussion of the issues raised here, with people from many different backgrounds, in settings not predisposed to fear or favor technology for its own sake.

    As a start, I have twice raised many of these issues at events sponsored by the Aspen Institute and have separately proposed that the American Academy of Arts and Sciences take them up as an extension of its work with the Pugwash Conferences. (These have been held since 1957 to discuss arms control, especially of nuclear weapons, and to formulate workable policies.)

    It's unfortunate that the Pugwash meetings started only well after the nuclear genie was out of the bottle - roughly 15 years too late. We are also getting a belated start on seriously addressing the issues around 21st-century technologies - the prevention of knowledge-enabled mass destruction - and further delay seems unacceptable.

    So I'm still searching; there are many more things to learn. Whether we are to succeed or fail, to survive or fall victim to these technologies, is not yet decided. I'm up late again - it's almost 6 am. I'm trying to imagine some better answers, to break the spell and free them from the stone.

    Source: http://www.wired.com/wired/archive/8...ic=&topic_set=

    -----------

    That article was written over ten years ago, and Manuel De Landa set out his concerns, with considerable foresight, over twenty years ago. Things are accelerating.

    De Landa's views are represented by the excerpts here, from his book War in the Age of Intelligent Machines.

    We must raise the collective awareness of these issues and pause. We can begin again when we understand more about the spiritual nature of reality. Then, maybe we can begin building hearts for machines instead of brains and go from there.

    We need to spread the word. Shedding light on these concerns is exactly what needs to happen because it's this overlooked potential that is lurking in the dark.

    If indeed we have already crossed the threshold, we need to be spiritually prepared for the future and concentrate on healing ourselves and, collectively, the planet.

  7. Link to Post #65
    Ecuador Unsubscribed
    Join Date
    3rd February 2011
    Location
    California
    Age
    36
    Posts
    1,584
    Thanks
    3,721
    Thanked 10,195 times in 1,429 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    -----------

    Ultra High-Speed Robotics [embedded video not preserved]

  8. Link to Post #66
    Ecuador Unsubscribed
    Join Date
    3rd February 2011
    Location
    California
    Age
    36
    Posts
    1,584
    Thanks
    3,721
    Thanked 10,195 times in 1,429 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Quote Posted by Jeffrey Sewell-Holloway (here)
    -----------

    A Roadmap for U.S. Robotics - From Internet to Robotics



    Here are some of the projections from the PDF:

    In ten years ...
    • Human-robot interaction will be made intuitive and transparent, such that the human’s intent is seamlessly embodied by the robotic system. Interfaces should be automatically customized to the specific user to maximize intuitiveness of the interface. Interfaces will estimate the user's intent, rather than simply executing the user's commands that may be subject to human imperfections.
    • Robots will autonomously maintain longer, repeated interactions in a broader set of domains in controlled environments. They will offer a combination of human-led and robot-led interactions using open dialog including speech, gesture, and gaze behaviors in limited domains.
    • Adaptive and learning systems will be extended to operate on multi-modal, long-term data (months and more) and large-scale, Internet-based data of users with similar characteristics to generate user-specific models that highlight differences over time and between users. Robot systems will analyze social media to model user personality and relationships. Learned user models will support comprehensive interaction in extended contexts.
    • Autonomous vehicles will be capable of driving in any city and on unpaved roads, and exhibit limited capability for off-road environment that humans can drive in, and will be as safe as the average human-driven car. Vehicles will be able to safely cope with unanticipated behaviors exhibited by other vehicles (e.g., break down or malfunction). Vehicles will also be able to tow other broken down vehicles. Vehicles will be able to reach a safe state in the event of sensor failures.
    • A robot that interacts with users to acquire sequences of new skills to perform complex assembly or actions. The robot has facilities for recovery from simple errors encountered.
    • Inherently safe (hardware and software) professional and personal mobile robots, with manipulation, operating in cooperation with untrained humans in all professional environments
    • As perceptual capabilities improve, robots can acquire more complex skills and differentiate specific situations in which skills are appropriate. Multiple skills can be combined into more complex skills autonomously. The robot is able to identify and reason about the type of situation in which skills may be applied successfully. The robot has a sufficient understanding of the factors that affect the success so as to direct the planning process in such a way that chances of success are maximized.
    • Robots shall be able to contribute their plans to local knowledge bases shared with other robots, and to efficiently identify plans, or parts thereof, that can be reused in lieu of solving complex problems from scratch. In the long term, large, efficiently searchable repositories of robot plans shall become available on a planetary scale.
    • Robots will have the ability to integrate sensor readings acquired while executing a plan in order to update the underlying statistical model and autonomously decide when and what to re-plan. The goal is the development of systems capable of generating and updating plans enabling uninterrupted, loosely supervised operation for periods of months.

    Here's an excerpt from the Defense and Homeland Security section of the document.
    At the height of the intervention in Iraq and Afghanistan, more than 25,000 robotics systems were deployed with a fairly even divide between ground and aerial systems. Unmanned aerial systems allow for extended missions, and the risk to the pilot is eliminated. Today, more than 50% of the pilots entering the Air Force become operators of remotely piloted systems rather than becoming regular airplane pilots. The opportunity for the deployment in civilian airspace is explored through a new FAA initiative. The dual-use opportunities are tremendous. In a decade, airfreight may be transported coast-to-coast or transoceanic by remotely piloted aircraft.

    [...]

    The use of robotics technology for homeland security and defense continues to grow as innovative technology has improved the functionality and viability of search and rescue efforts, surveillance, explosives countermeasures, fire detection, and other applications.
    Anyway, the document doesn't address the developments of nanotechnology and genetics. Those will grow as well -- exponentially. In ten years, technological advancement will have increased a thousandfold, and in twenty years, about a millionfold. At that rate, the amount of technological progress made in the year 2000 would be made every 30 seconds in 2020. That's exponential.
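
    To make that arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. It assumes the premise above (the rate of progress doubling roughly every year); the numbers are illustrative only and do not come from the roadmap PDF.

    Code:
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # roughly 31.5 million seconds

    def growth_factor(years, doubling_time_years=1.0):
        # factor by which the rate of progress has multiplied after `years`
        return 2 ** (years / doubling_time_years)

    print(growth_factor(10))   # ~1,024 -- "a thousandfold" in ten years
    print(growth_factor(20))   # ~1,048,576 -- "about a millionfold" in twenty

    # If 2020 progress really ran a million times faster than in 2000,
    # one year's worth of 2000-era progress would take about:
    print(SECONDS_PER_YEAR / growth_factor(20))  # ~30 seconds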
    Last edited by Jeffrey; 18th December 2013 at 02:41.

  9. Link to Post #67
    Avalon Member Freed Fox's Avatar
    Join Date
    10th December 2012
    Location
    neither here nor there
    Posts
    807
    Thanks
    4,728
    Thanked 5,819 times in 768 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Post #64, though quite long, is truly excellent.

    This issue could very well be of paramount importance, and more people need to be paying attention to it rather than to hearsay regarding aliens, demons, channeled BS, etc. Heck, if a demonic entity were to actually make an overt move on our mortal physical plane, it would most likely be via these GNR technologies, IMO (or some eventual manifestation/by-product of them... as previously speculated with the 'Ahriman' correlations).

    Quote As Thoreau said, "We do not ride on the railroad; it rides upon us"; and this is what we must fight, in our time. The question is, indeed, Which is to be master? Will we survive our technologies?
    Thank you Jeffrey.
    Last edited by Freed Fox; 18th December 2013 at 04:59.
    Mercy, forgiveness, and compassion are the most virtuous forms of love
    Let your heart not be hardened by injustice and tribulation

  10. The Following 3 Users Say Thank You to Freed Fox For This Post:

    AwakeInADream (19th December 2013), Jeffrey (18th December 2013), william r sanford72 (18th December 2013)

  11. Link to Post #68
    Ecuador Unsubscribed
    Join Date
    3rd February 2011
    Location
    California
    Age
    36
    Posts
    1,584
    Thanks
    3,721
    Thanked 10,195 times in 1,429 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)


  12. Link to Post #69
    Avalon Member Carmody's Avatar
    Join Date
    19th August 2010
    Location
    Winning The Galactic Lottery
    Posts
    11,389
    Thanks
    17,597
    Thanked 82,316 times in 10,234 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Quote Posted by Jeffrey (here)
    Quote Posted by skippy (here)
    Quote Posted by AwakeInADream (here)
    About the AI machine that will enslave us in the future. I keep asking myself 'who is programming it?'. I think the answer to that question is 'we are'. This thing exists now in a dormant state, just silently collecting the data that it will use against us in the future, and we are feeding it every time we use a search engine, YouTube, or shop online. The machine is currently asking itself, 'what do people want?', and we are telling it that we are happy to be slaves, and that we like it when someone (or something) else makes our decisions for us (so long as they are varied enough to keep us entertained).
    These machines are manmade. In a way, they are part of human history and evolution. I am not sure whether it is functional to see this AI as some sort of new outside enemy. Technology is acting like a mirror, reflecting parts of our collective self [...]
    I just hope it doesn't grow to reflect the collective shadow self.



    The internet spawned from a military project. Military robotics are driven by predatory mechanisms and warfare mentalities. Paramilitary technologies focus on data collection and surveillance. I hope that we can turn the tide, as mariposafe posted, and that there is indeed a positive, spiritual environment in which to foster the growth of technology ... unfortunately, it appears that it's developing too fast and in an overall negative way.

    Notice the correlation in the image with the Jungian iceberg. Technology is slowly making people more unconscious and addicted to information -- myself included, sadly (I have to admit that I spend a lot of time on the internet, but I try to balance that out). Infonauts surf the web, and many people are glued to their gadgets, social media websites, televisions, etc ...

    Remember how cell phones revolutionized the world? They connected more people through an electronic device. More and more, we substituted human-to-human interaction with an interaction between two people with technology in the middle. Is technology a facilitator or a barrier? It's an important question to think about, imo.

    We are on the verge of another nexus point, another wave of change that will revolutionize the way we interact with each other. While it will have many unique benefits, I think it will have worse consequences on society.

    Virtual reality is about to explode with the arrival of the Oculus Rift (thanks to Carmody for bringing this to our attention a while back). It has a nasty dark side as well, with the slew of perverted games coming out along with it. Even the regular games, while they seem cool at first, will be more addictive than a smartphone. It will all be connected to the internet, and we will be projecting our consciousness into cyberspace, thereby making us more unconscious.



    This will have a profound effect on human-to-human interaction. And we thought that television and gaming desensitized people! It's going to change the way people do gaming, and eventually change the way television/movies are watched (as in the next 5 years). I can even see how we may be surfing the internet in 3D using this device, our awareness flying through cyberspace. This will just feed the beast -- the beast that will mirror our shadow self if we aren't careful. It lives in the abyss of cyberspace. People aren't being careful. I've been careless at times, and I've been waking up to that fact.

    I sincerely care for the people on this forum, for my family, and for my friends. I don't want society to go down this road, and many will not. For those who do, friction will be created when their actions start affecting the people who want no part of it. That's why we need to voice our concern and think about the implications of such technologies before leaping into them because of their superficial, perceived benefits (or because of their profitability).
    A reddit post that gets into the subject of the Rift fairly deeply. A well-intended and well-done thread, for reddit! Some intelligent commentary is nice to see.

    My Mom's opinion about the Oculus: Fear the future
    Quote Hello /r/Oculus!

    Today, a friend visited me with his Oculus Rift. I was able to try it and was pretty impressed by the experience. When my mom entered the room, she burst out laughing. Reasonable, considering the fact that I was wearing a black box on my head, but when I asked her to try it out for herself, she agreed.

    We played the Tuscany Demo*. My Mom was also impressed, like me, but in a negative way. She took the Oculus off her head with a frightened expression in her eyes. She told us that this was incredible, and that she had never heard about this. After we told her that this is just the beginning (DK1), her face literally changed to an expression I had never seen before. Something between astonishment and fear.

    I was really surprised, because usually she likes new upcoming technology (e.g. smartphones), but apparently this was "too" much.

    On the one hand I am worried, but on the other hand I am happy, because obviously this technology has the power to stun "mankind" already at this early stage of development.

    But this is not the reason why I felt the urge to write a post about it. After my friend left, she wanted to hear more about the Oculus. I told her everything I know. Her reply was: "This is really incredible, the world has changed so much in the last 20 years. Fortunately I will die before technology can get the upper hand on mankind. Smartphones have already changed so much in the social framework."

    Little does she know how fast the Oculus is improving, and that she will witness not only the DK2 but also the CV. **

    Although her opinion is pretty common for "old people" (sorry mom), I think it is reasonable to fear the incredible possibilities of VR.

    Thinking about it, Oculus might change more than we can imagine right now. And this is amazing and scary at the same time. I fear the future, at least a little bit.
    (87 comments and still unfolding)

    * (link is of a 'full' [relatively] body motion version, about 1-1.5 years away from public capacity to easily buy, I suspect)
    ** (DK1='Development Kit one', DK2 = 'Development Kit two' [more advanced unit], CV = consumer version, or CV1, first consumer release [finalized consumer version of DK2]. FYI, approx 60,000 of the DK1 have been sold, so far. Re 'old people', his mother is 45 years old.)


    And a second one:

    Danger and ethics in virtual reality
    Quote I've seen a lot of people excited for VR horror games and I don't think they know what they are getting themselves into. Excuse the crappy mspaint art, I felt it needed some visuals otherwise people would ignore it.

    Imgur

    I'm not trying to say that horror/violence should be banned, but it should have some restrictions. There are other problems that will arise from VR, such as desensitization to realistic violence and danger, and some things we may not even be aware of. I'm posting this here in the hopes that some ratings and regulations can be put in place before people have their minds warped. VR looks like a pretty big rabbit hole to me, and I hope we aren't jumping in without a parachute.
    (60 comments and unfolding)
    Last edited by Carmody; 9th March 2014 at 23:59.
    Interdimensional Civil Servant

  13. The Following 6 Users Say Thank You to Carmody For This Post:

    CdnSirian (7th July 2014), Jeffrey (9th March 2014), kirolak (12th March 2014), Reinhard (11th March 2014), Shikasta (9th March 2014), Sophocles (9th March 2014)

  14. Link to Post #70
    Canada Avalon Member DeDukshyn's Avatar
    Join Date
    22nd January 2011
    Location
    From 100 Mile House ;-)
    Language
    English
    Age
    50
    Posts
    9,394
    Thanks
    29,778
    Thanked 45,445 times in 8,541 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Actually, I tried a full VR simulation with 3D goggles, head/body tracking and everything, when I was a student, over 20 years ago. Yes, over 20 years ago. The processing hardware was a relatively small computer box built by Sun Microsystems (bought by Oracle, I believe, a few years back) that ran dual RISC processors. It was a medical VR demo, and we could explore the human body in full 3D VR. They also had an x86 version, but it was basically useless compared to what the RISCs in the Sun could do. This was before GPU technology got even remotely the floating-point power that it is known for today - in fact, GPU tech was almost nil back in the day -- good ol' CPUs had to do all the work.

    My point: I actually can't believe how long it has taken for technology like the Oculus Rift to be developed; something seems to have stalled this progress ... then again, twenty years ago I figured the world would have a lot better tech by now than it does. We got smartphones and the internet. Besides that, hardly anything. Barely an electric car even ...

    That said, I am sure there is a money milking umbrella that helps determine the speed of growth of technology ... Adding another factor to the mix, we all know military tech is likely hundreds of years in advance of what tech gets released to the public.

    Just a thought, inspired by how amazingly advanced and "high tech" the Oculus seems to some.
    When you are one step ahead of the crowd, you are a genius.
    Two steps ahead, and you are deemed a crackpot.

  15. The Following 4 Users Say Thank You to DeDukshyn For This Post:

    Carmody (10th March 2014), Jeffrey (9th March 2014), Pweeky (9th March 2014), Reinhard (11th March 2014)

  16. Link to Post #71
    Canada Avalon Member taurad's Avatar
    Join Date
    28th January 2011
    Posts
    170
    Thanks
    552
    Thanked 369 times in 132 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Quote Posted by skippy (here)
    Don't know if this one has already been posted on this board. Funny that they have been putting cameras (vision) in the robot's belly and knees...


    the resemblance to LE DIABLE is uncanny, skippy...good find

  17. The Following 2 Users Say Thank You to taurad For This Post:

    Jeffrey (11th March 2014), Reinhard (11th March 2014)

  18. Link to Post #72
    Great Britain Avalon Member AngelArmy's Avatar
    Join Date
    25th February 2014
    Language
    English
    Age
    46
    Posts
    136
    Thanks
    72
    Thanked 478 times in 107 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Quote Posted by Blake Elder (here)
    Yes, Google has been very busy lately, acquiring many robotics companies

    http://singularityhub.com/2013/12/08...even-startups/

    As the article mentions, they wish to take robotics to "the next level", making it far more accessible to everyone.
    Children's programmes are filled with robots, as are films like Wall-E and Robots

  19. The Following 2 Users Say Thank You to AngelArmy For This Post:

    Jeffrey (11th March 2014), taurad (12th March 2014)

  20. Link to Post #73
    Canada Avalon Member taurad's Avatar
    Join Date
    28th January 2011
    Posts
    170
    Thanks
    552
    Thanked 369 times in 132 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Jeffrey
    thanks for this topic

    thanks for these two vidz

    I found it very intriguing that companies like Honda and Toyota, and South Korean, US and European companies that have been working secretly for two decades on their robotic prototypes, now all of a sudden launch a big, big Joint-Venture-Reveal-Your-Patent Programme!!!!

    also they used, AGAIN, a catastrophic event (Fukushima) to push it...



    I'm not sure about fully human-consciousness-cloned humanoids though...

    I personally cannot believe or comprehend myself being compressed into a few gigabytes...

    there's no way any human being can achieve that, no matter how much time they experiment on it...

    THERE'S NO WAY!!!

    We all have a ceiling limitation...

    As a matter of fact, because of ceiling potential limitations, hierarchies exist in nature...

    Dolphins can do everything in the water... no matter how well these animals evolve and perfect their skills, they have a ceiling potential... they cannot jump their limitations/boundaries and fly in the atmosphere...

    How can a group of (excellent) computer script engineers be able to transcend our non-negotiable boundaries and program the hardware with a script that is beyond our realm, when they don't even know how it was created to begin with!!!

    Hell, some of these experts, just like ordinary people, after a hard day of cloning my soul into a silicon chip, have to rush home to shower and catch their marriage-counselling session!!! They cannot even comprehend their own existence!!!

    Yet they are able to program a pile of metal junk to forget a task, which is against computing principles; to feel embarrassment/disappointment for forgetting (?); to rush afterwards to make up for it (??); then to throw the rest of its time off balance, have a breakdown and collapse to the point of needing substances to manage the hallucinations, which makes it join a religious sect (?????)

    How the **** do you program hardware to do these things when you are conflicted yourself???

    Having said that, I do acknowledge the danger of locking up all the big essentials under robotic power... hydro, gas, power lines, transportation, banking, food, communication, etc...

    If we want to override the system, GOOD LUCK!!! We're not talking about resetting a personal computer's user profile anymore!!!

    My understanding is that the system remains locked, running by itself according to the script, but for the reasons I explained, I do not believe the hardware can begin to create scripts that were not foreseen in the original programming...

    What I mean is, if the system is locked, and for some extreme reason we lose the admin privileges to provide input, then why/how would the hardware start scripting its own programming IF it was not required to script while idling...

    It doesn't make any sense from the programming point of view, no matter their capabilities...
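
    To put that distinction in concrete terms, here is a minimal, purely illustrative Python sketch (not from any source in this thread): a system that only dispatches the actions its fixed script already names can never do anything new, whereas a system can "improvise" new behaviour only if generating new handlers was itself part of its original programming.

    Code:
    # Purely illustrative sketch of the "locked script" vs "generative" distinction.

    # 1) A "locked" system: it can only dispatch actions its script already names.
    FIXED_SCRIPT = {
        "open_valve": lambda: print("valve opened"),
        "close_valve": lambda: print("valve closed"),
    }

    def run_locked(command):
        action = FIXED_SCRIPT.get(command)
        if action is None:
            print(f"unknown command: {command!r}")  # nothing new can ever happen here
        else:
            action()

    # 2) A system whose original programming already includes code generation:
    #    it can add behaviours its designers never listed, but only because that
    #    capacity was written into the script from day one.
    def run_generative(command):
        if command not in FIXED_SCRIPT:
            # build a new handler on the fly and remember it
            FIXED_SCRIPT[command] = lambda c=command: print(f"improvised handler for {c}")
        FIXED_SCRIPT[command]()

    run_locked("open_valve")           # valve opened
    run_locked("paint_the_fence")      # unknown command: 'paint_the_fence'
    run_generative("paint_the_fence")  # improvised handler for paint_the_fence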


    Last edited by taurad; 12th March 2014 at 02:03.

  21. Link to Post #74
    Canada Avalon Member DeDukshyn's Avatar
    Join Date
    22nd January 2011
    Location
    From 100 Mile House ;-)
    Language
    English
    Age
    50
    Posts
    9,394
    Thanks
    29,778
    Thanked 45,445 times in 8,541 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Facebook is buying Oculus VR for $2 Billion ....
    When you are one step ahead of the crowd, you are a genius.
    Two steps ahead, and you are deemed a crackpot.

  22. Link to Post #75
    Avalon Member toad's Avatar
    Join Date
    14th November 2011
    Location
    127.0.0.1
    Age
    37
    Posts
    669
    Thanks
    310
    Thanked 1,473 times in 472 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Growth is good.
    The minute you settle for less than you deserve, you get even less than you settled for.
    -- Maureen Dowd --

  23. Link to Post #76
    Canada Avalon Member AjaJane's Avatar
    Join Date
    24th March 2014
    Location
    Vancouver
    Posts
    10
    Thanks
    59
    Thanked 36 times in 8 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Here's a video about a robot police officer prototype. It's not AI, but after they are integrated into the field making them AI would be the next step.

  24. The Following 3 Users Say Thank You to AjaJane For This Post:

    Bill Ryan (29th March 2014), Davidallany (14th April 2014), nomadguy (30th March 2014)

  25. Link to Post #77
    UK Avalon Founder Bill Ryan's Avatar
    Join Date
    7th February 2010
    Location
    Ecuador
    Posts
    34,268
    Thanks
    208,959
    Thanked 457,528 times in 32,788 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Quote Posted by AjaJane (here)
    Here's a video about a robot police officer prototype. It's not AI, but after they are integrated into the field making them AI would be the next step.
    Thank you! (And Aja, a warm welcome to the forum! )

    I assume you've seen Matt Damon's [character's] encounter with robot police in Elysium?


  26. The Following 4 Users Say Thank You to Bill Ryan For This Post:

    AjaJane (5th April 2014), Atlas (29th March 2014), Davidallany (14th April 2014), InTheBackground (31st March 2014)

  27. Link to Post #78
    Malta Avalon Member
    Join Date
    30th July 2011
    Age
    58
    Posts
    90
    Thanks
    343
    Thanked 350 times in 78 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    ....just a little note: Elysium may be a derivative of the so-called Elysian fields that reputedly surrounded Atlantis.



    One interesting take on Elysium :

    Last edited by MalteseKnight; 6th April 2014 at 16:58.

  28. The Following User Says Thank You to MalteseKnight For This Post:

    Davidallany (14th April 2014)

  29. Link to Post #79
    Ecuador Avalon Member Davidallany's Avatar
    Join Date
    21st February 2011
    Location
    Loja
    Language
    English
    Age
    50
    Posts
    1,970
    Thanks
    7,564
    Thanked 6,056 times in 1,577 posts

    Default Re: URGENT! It will be here soon! Please be AWARE (The Growth of the Internet)

    Quote Posted by MalteseKnight (here)
    ....just a little note: Elysium may be a derivative of the so-called Elysian fields that reputedly surrounded Atlantis.



    One interesting take on Elysium :

    Hi MalteseKnight. Thank you for the interesting video clip. Cheers
