The Real, Random World and AIs
Posted by hyperpat on October 17, 2006
One of the major unrealized goals of computer science is to develop a real, live artificial intelligence. While great strides have been made, from expert systems to neural networks that can ‘learn’, so far there is no system that can pass the Turing test. In its simplest form, a machine passes the Turing test if a human observer cannot reliably tell whether they are conversing with a computer or with another human. Given the incredible pace of technological advance in computer hardware, and to a lesser extent computer software, why should this still be true?
Part of the reason is that there is no clear, deterministic definition of just what makes humans ‘intelligent’. If humans merely follow a set of ‘rules’ for how to deal with the external world, rules that are ‘programmed’ by both genetics and interaction with the environment (read ‘learning’), then it should be (comparatively) easy to program this same set of rules into a computer, and voilà, we would have a new ‘person’. But there seems to be something else involved in human intelligence, something that can’t be predicted or heuristically programmed, namely the whole concept of creativity, where totally new concepts and ideas seem to spring out of nowhere. Where and how does creativity originate?
Clearly, at least some original concepts come from a churning of the vast number of variables a human is subject to, anything from the weather to what he ate for breakfast; the concept itself was there all along to be seen, but it took an unusual juxtaposition of multiple facts for it to become evident. Then there are those ideas and formulations that literally seem to come from nowhere, such as the creation of a sonnet or the invention of a brand new type of mathematics. Both of these seem to depend at least partially on the concept of ‘randomness’.

But is there truly such a thing as randomness? ‘Random’ implies an event that is not predictable, and that multiple ‘trials’ of the same type of action will lead to multiple different results. Tossing a coin in the air and recording whether it falls as heads or tails seems to meet these criteria – you’re doing the same thing, over and over, but the results are different for each trial, and there is no discernible pattern that would allow for prediction of whether the next trial will fall heads or tails (or on its edge!). But in reality, isn’t each trial totally deterministic from the moment the coin is tossed? If we could just enumerate all the variables (amount of spin applied, loft velocity and direction, unevenness in the landing spot, friction coefficients, atmospheric humidity and air currents, etc.), could we not actually predict whether it will land on heads or tails? Of course, this is a major philosophical question – if everything is in reality deterministic, isn’t everything we do or think predetermined from the very beginning of time? In practicality, when the number of influencing factors becomes too large, they are no longer effectively enumerable, and the end result is a ‘chance’ happening (at least to our limited senses and computing power).
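The coin-toss point can be made concrete with a short sketch (Python here, purely as illustration; the function and seed value are my own invention, not anything from the post). A pseudorandom ‘coin’ looks unpredictable from trial to trial, yet rerunning it with the same seed reproduces the sequence exactly – the apparent randomness is deterministic underneath, just as the paragraph argues for the physical coin:

```python
import random

def toss_coins(seed, n):
    """Simulate n coin tosses using a seeded pseudorandom generator."""
    rng = random.Random(seed)  # the seed plays the role of the 'hidden variables'
    return ["heads" if rng.random() < 0.5 else "tails" for _ in range(n)]

run1 = toss_coins(seed=42, n=10)
run2 = toss_coins(seed=42, n=10)
print(run1)           # looks patternless trial to trial
print(run1 == run2)   # True: same hidden state, same outcomes every time
```

If you could not see the seed, the output would pass most statistical tests for randomness; knowing it, every ‘toss’ is predictable in advance.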
And perhaps this is the answer to achieving a real AI. Give the computer a very large number of different types of inputs to fold into its computations, things that are not practically deterministic, and we might have a better chance of achieving something that any human would call ‘intelligent’. Of course this would require a major enhancement to the amount of memory, processing speed, and interconnecting pathways of the computer from what exists today, along with a willingness to wire into it inputs that seemingly have no relevance to its normal tasks (about like attaching a kitchen sink to a car engine). In fact, this was one of the basic ideas behind Robert Heinlein’s ‘live’ computer Mike in The Moon Is a Harsh Mistress. But until this happens, I predict there will be no true AIs developed.
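There is at least a small precedent for this ‘kitchen sink’ idea in how operating systems already work: entropy pools fold many unrelated, practically unpredictable inputs (timings, process state, hardware noise) into one state whose output no observer can feasibly predict. A minimal sketch of that mixing idea, assuming nothing beyond the Python standard library (the function name and the particular inputs chosen are my own, for illustration only):

```python
import hashlib
import os
import time

def mixed_state(inputs):
    """Fold a list of arbitrary, unrelated inputs into one combined state.

    This crudely mimics an OS entropy pool: each input alone may be
    nearly predictable, but the hash of all of them together is not.
    """
    h = hashlib.sha256()
    for item in inputs:
        h.update(repr(item).encode())  # mix each input into the pool
    return h.hexdigest()

# 'Irrelevant' inputs in the spirit of the post: a clock reading,
# a process id, and raw noise from the operating system.
state = mixed_state([time.time_ns(), os.getpid(), os.urandom(16)])
print(state)
```

Whether piling enough such inputs into a machine would ever add up to intelligence is, of course, exactly the open question the post raises; the sketch only shows that ‘not practically deterministic’ is already an engineering tool, not science fiction.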