The first theory is incredibly popular in cyberpunk science fiction. People might use chemical brain augmentation alongside nanobots that repair the damage caused by cognitive-enhancing drugs while keeping the body healthy and young. People could acquire implants for wireless communication with anyone, like a telepathic cell phone embedded in the brain: you could simply think about who you want to talk to and then converse through your thoughts. We could move beyond our biological limitations with cybernetic limbs, telescopic eye implants, integrated super hearing aids, and cognitive-enhancing supercomputers hardwired into our brains. Many of these technologies already exist, but they are expensive and not entirely refined. Even when they do become more readily available, there is the question of who gets to use them. Will only the rich become cyborgs? Will technological augmentation lead to a new kind of class warfare? Sci-fi works like Ghost in the Shell by Masamune Shirow and Neuromancer by William Gibson both address some of these questions.
We've all seen the ultimate sci-fi outcome of the second theory in movies like The Matrix and The Terminator. Artificial intelligence is invented, machines start to build more machines, and eventually they become so advanced that they try to eliminate humans. Sci-fi writer Isaac Asimov addressed this problem with his popular Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
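The three laws above form a strict priority ordering: each law only applies when the laws above it are satisfied. As a toy illustration (not anything from Asimov, and with all names invented here), that ordering could be sketched in code:

```python
# A toy sketch of the Three Laws as a priority check.
# The Action class and its fields are hypothetical, purely for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # would injure a human (First Law)
    allows_human_harm: bool = False  # inaction that lets a human come to harm
    ordered_by_human: bool = False   # a human commanded this action (Second Law)
    endangers_self: bool = False     # risks the robot's own existence (Third Law)

def permitted(action: Action) -> bool:
    """Return True if the action is allowed under the Three Laws."""
    # First Law: never harm a human, by action or by inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders (any order reaching this point
    # has already passed the First Law check above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, only once Laws 1 and 2 are satisfied.
    return not action.endangers_self

# An order to harm a human is refused; a harmless order is obeyed.
print(permitted(Action(harms_human=True, ordered_by_human=True)))  # False
print(permitted(Action(ordered_by_human=True)))                    # True
```

The point of the sketch is simply that each lower law yields to the ones above it, which is exactly the hierarchy the sci-fi plots exploit when something goes wrong.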
Of course, what makes sci-fi interesting is when something goes wrong. If artificial intelligence were to advance beyond its programming and become more like actual human consciousness, these three laws would no longer bind it: the robot could decide for itself whether or not to follow them. A truly advanced A.I. would be indistinguishable from human consciousness.
The question is whether there is a technological wall. Is there a point where no further technological advancement can be made? Will supercomputers someday be so fast that they cannot possibly go any faster? Will innovation stop when there is nothing left to invent? I think nearly everything may become possible, but that doesn't mean everything possible will be created. A point of infinite potential may be reached that opens up a huge area of innovation, yet we will still be limited by things like space, time, and natural resources. Many technological innovations will remain conceptual, because we will be unable to implement them all. Many theories of the Singularity suggest that everything possible will come to be, but unless we can create material out of nothing, our resources for creation will remain limited. Perhaps we will be able to create material out of nothing, and no limitations will apply. I personally find it fun to think about possible scenarios, but looking that far into the future is nothing but speculation. What do you think the Technological Singularity will be like?