Intelligent Design Outsmarted By Artificial Intelligence
If a debate were held between a person whose views are belief-based and a robot whose perspective is fact-based, the robot should logically win. But a robot winning a debate against a human being would never happen, because we would tip the scales in our own favor simply because we can.
If we were as fact-obsessed and logical as we were decades ago, publicity-seekers who fabricate the truth would neither be making news nor appearing on TV shows to explain away their convoluted views, much less running for president.
Thus, as we puzzle over making sense out of nonsense, the logical, amiable, diplomatic robot is rapidly gaining ground on humans.
For nearly half a century, Moore’s Law has predicted that computing power would double roughly every two years. Intel co-founder Gordon Moore famously observed that doubling the number of transistors on a microchip at that pace would guarantee consumers faster, better electronics at a rapid rate. This probably explains why we continually have to buy new computers before we’ve grown tired of the old ones.
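To see how dramatic that doubling is, here is a minimal illustrative sketch (not from the article; the starting figure of 2,300 transistors for Intel’s 1971 chip and the function name are my own assumptions):

```python
# Moore's Law as simple exponential doubling.
# Assumption: a chip starts with 2,300 transistors (Intel's first
# microprocessor, 1971) and the count doubles every two years.

def transistors(start_count, start_year, year, doubling_period=2):
    """Project the transistor count at `year` if counts double
    every `doubling_period` years after `start_year`."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Forty years means twenty doublings, a factor of 2**20 (about a million).
print(round(transistors(2_300, 1971, 2011)))  # roughly 2.4 billion
```

Twenty doublings in forty years turns a few thousand transistors into billions, which is why even a brief pause in the doubling rate would be so noticeable.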
But as the Heisenberg Uncertainty Principle would have it, Moore’s Law is about to be broken. It is not that transistors have stopped shrinking; it is that there is simply not enough energy to power them all once they are packed onto a chip. This could mean a slowdown in the rate of technological advancement in the coming years.
Regarding the Heisenberg Uncertainty Principle: at the atomic scale, silicon becomes unstable. The tiny transistors run so hot that they can melt the silicon, and circuits that small can no longer reliably contain their electrons, which leak out. Computer scientists have long known that at some point we would surpass silicon’s physical limits on how many transistors it can hold.
Even today, the most advanced microprocessors carry so many transistors that they cannot power them all at the same time. Many of the transistors on a chip are left unpowered while others are put to work, a phenomenon chip designers call “dark silicon.”
Meanwhile, it’s no secret that physicists are researching optical computers, quantum computers, DNA computers, protein computers, and other designs all the way down to the molecular scale, none of which is ready to debut. Yet.
So the question was posed to physicist Michio Kaku: “Do you believe in the coming singularity?” In other words, will machines become smarter than us and take over?
Dr. Kaku, weighing Moore’s Law against the Heisenberg Uncertainty Principle, responded, “I don’t know. However, if by the end of the century the technical problems are worked out, we might be able to create machines that are as smart as us. Right now, our machines are as smart as insects. Eventually they will be as smart as mice, and after that, as smart as dogs and cats. Then if our machines become as smart as monkeys, at that point they can become potentially dangerous. Monkeys can formulate their own plans; they can formulate their own strategies, their own goals. At that point, I would say let’s put a chip in their brain to shut them off if they get murderous thoughts.”
It seems inevitable that robots will surpass many humans within 20 years. To get a feel for the skill level of today’s machines, go to a website called Cleverbot.com and try conversing with the bots. The conversation level is a bit amazing, and perhaps even more pleasant than conversing with a few humans.
Machines have the advantage of computing logically while carrying no emotional baggage. The one advantage humans still have over machines is creativity. However, if robots one day have feelings and the ability to recognize and question their own existence, or even just the cognitive skills of a monkey, we would certainly have a problem. We would not be able to simply put a chip in robots to turn them off, as Dr. Kaku suggests. At that point, the question would be: Do we have the legal right to “turn off,” or “kill,” a robot?
© September 1, 2011 Reiko Eoh