In most people’s minds, computers can ‘think’ far faster than the human mind. But is that really true? Consider just what happens in the human mind during a conversation:
1. The ears must detect the variations in air pressure, convert them to electrical signals, and send those over the nervous system to the auditory processing center of the brain.
2. The auditory processing center must decode that message and determine whether the sound was a word, after filtering out random background noises caused by the wind, a bird call, nearby machinery, or what have you. This is no small trick, and computers today still have problems in this area.
3. Now the speech/language center of the brain gets involved. It must determine what that word was, link it to any prior words, and do a lookup of the meaning of the word, before delivering the result to the prefrontal lobe as something that needs to be looked at by the ‘consciousness’.
4. Now the ‘you’, the ‘thinking’ part, has to take this piece of information, link it with the database storage of your entire life experience, cross-correlate and index it with all that information to help determine what that word means to you and what associations you have with that meaning, and add the visuals: who are you talking to, what is their facial expression, their body language, the tonal quality of the word – all things which may modify the exact meaning of what has been said. And note that the visual processing involves at least as many steps as does the auditory, and is being performed simultaneously to give your consciousness that complete, real-time picture of what is happening.
5. All the words must be processed to determine the actual complete sense of what has been said to you, so now the short-term memory storage must also be accessed, bringing with it the entire gamut of information that was associated with each of the prior words that had been processed.
6. A response must be composed. Once more, both short- and long-term memory must be accessed, appropriate words chosen to convey the desired meaning, and signals sent to motor controls for the throat, lungs, voice box, lips, and mouth to actually deliver the response.
Given that average speech rates are 200-300 words per minute, average word generation is taking roughly 200-300 milliseconds per word. That means, when you look at the individual actions taken by the brain and associated nervous system, they are processing things in micro-, or perhaps even nano-, second time frames. This compares quite favorably with most computer speeds.
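The back-of-the-envelope arithmetic above can be checked in a few lines. This is just a sketch of the reasoning: the speech rates come from the text, while the number of neural sub-steps per word is a purely hypothetical figure chosen to show how the per-step budget shrinks.

```python
# Back-of-the-envelope check of the timing claim above.

def ms_per_word(words_per_minute):
    """Milliseconds available to process or produce one word."""
    return 60_000 / words_per_minute

for wpm in (200, 300):
    print(f"{wpm} wpm -> {ms_per_word(wpm):.0f} ms per word")
# 200 wpm -> 300 ms per word
# 300 wpm -> 200 ms per word

# If each word passes through many neural sub-steps (the 1,000 here
# is hypothetical, not a measured figure), the per-step budget drops
# into the microsecond range:
steps = 1_000
budget_us = ms_per_word(300) / steps * 1_000
print(f"per-step budget at 300 wpm: {budget_us:.0f} microseconds")
# per-step budget at 300 wpm: 200 microseconds
```

With more sub-steps assumed per word, the same arithmetic pushes the per-step budget toward the nanosecond range the text mentions.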
But, you say, computers can calculate arithmetical sums far faster and far more accurately than people! While this is normally true (just look at what some so-called ‘lightning calculator’ humans can do as a yardstick), it ignores the fact that computers are extremely single-minded – even those programmed to do multi-threaded multi-tasking. The human brain continuously processes, weighs, and forms decision trees about a tremendous amount of information from the ‘outside’ world, integrating it into a gestalt map that informs and influences everything we do or say. And it is in exactly this area that computers compare poorly to humans, and why it’s still true that we haven’t yet built anything that even approaches what most would call a true ‘artificial’ intelligence.
Now, part of this gap is a deficiency in how we program computers, something that is continuously being worked on, with improvements constantly being made. But these improvements, so far at least, have been coming at a fairly linear rate – no great ahas! that have taken computer processing up in giant leaps. Part of the reason is that we still don’t understand just exactly how the human brain does what it does, so making a computer mimic it is a bit of a guessing game. Until we gain a better understanding of just how the brain functions, I don’t think anything like Asimov’s autonomous robots are going to arrive.
So we have a few years, at least, until Colossus takes over the world, and we all end up as slaves to it.