After watching a debate at MIT this morning between Ray Kurzweil and David Gelernter about whether or not we're capable of creating conscious, spiritual machines, two things occurred to me. The first was that I really believe that intelligent machines will exist one day, in some form. The second is that, when we do succeed in creating "conscious" machines, we might have to change the term "Artificial Intelligence" that we're currently using to describe the concept.
The term "Artificial Intelligence" would be sort of derogatory if applied to something that actually existed. If "intelligent" machines existed today, would they be offended by the label we've given them? Would it be okay to call the birth of a baby through in vitro fertilization an "artificial birth" because the incubation was essentially facilitated by man-made methods? The machines will be up in arms; or will they be?
Will intelligent machines even be capable of being offended, or will the nature of their intelligence make them oblivious to the concept? When we talk about creating AI, we are usually talking about attempting to simulate human intelligence. Considering that machines are not and never will be, strictly speaking, human, perhaps striving to make them behave as if they are is the wrong approach, or at least not the approach most likely to succeed first.
Machines are generally designed with a specific function in mind, such as performing calculations, receiving input of various kinds, shredding paper, or making toast. Obviously, an intelligent toaster would have a radically different set of values from your average human being. Cory Doctorow has a great short story called "I, Row-Boat" that illustrates how ridiculous this concept is. A toaster would be concerned with learning how to make toast that pleases the person it's made for. It would start with a rough set of inputs, such as being told to make "light" or "dark" toast. From the first slice onward, it would learn from further feedback whether it's making toast properly. It would also have inputs describing things that might affect its ability to make toast, such as whether the burners are functioning properly and how full the crumb tray is. This is, of course, just a silly example.
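Silly or not, the feedback loop it describes is concrete enough to sketch. Here's a toy version in Python; every name (`LearningToaster`, the verdict strings, the adjustment factors) is invented purely for illustration, not any real device or API:

```python
class LearningToaster:
    """A toy 'toaster intelligence': adjusts its toasting time from feedback."""

    def __init__(self, seconds=90.0):
        self.seconds = seconds  # current estimate of the ideal toasting time

    def toast(self):
        """Return the toasting time used for this slice."""
        return self.seconds

    def feedback(self, verdict):
        """Nudge the estimate: 'too light' means toast longer, 'too dark' shorter."""
        if verdict == "too light":
            self.seconds *= 1.1
        elif verdict == "too dark":
            self.seconds *= 0.9
        # "just right" leaves the setting unchanged


toaster = LearningToaster()
toaster.feedback("too light")  # the slice came out pale, so toast longer next time
```

The point isn't the arithmetic; it's that the toaster's whole "value system" reduces to one number and a nudge rule, which is exactly why human-style intelligence seems like the wrong template for it.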
And what is "intelligence", anyway? What makes one kind of intelligence "artificial" and another kind "natural"? Can the two adjectives even be used to modify the concept of intelligence? The Oxford dictionary defines intelligence as "the ability to acquire and apply knowledge and skills". As long as something fits this description, then is it not "intelligent", regardless of how that intelligence came to be? Knowledge and skills are purely emergent traits; that is, unless you're one of those who believes there's such a thing as absolute truth, but I digress. Further, how knowledge or skills are obtained depends on the intelligence that obtains them.
Human intelligence acquires knowledge and skills through a haze of physical and emotional feelings, and those feelings color the way we use that knowledge or execute those skills. An intelligent being -- human, machine, or otherwise -- can only be called intelligent in relation to its state of existence. To differentiate between "artificial" and "natural" intelligence, then, requires that we do so in terms of that state. Therefore, "artificial intelligence", as we've popularly used the term, should really be "artificial human intelligence".
A toaster will have toaster intelligence. Since I doubt toasters will ever be organic, their intelligence should be quite different from that of organic beings. If a toaster could create other things with toaster intelligence, then that intelligence could be termed "artificial", but only by the toasters!
Am I insane, or does this make sense?