Politically Correct Artificial Intelligence

Posted by Ryan Kinderman Thu, 11 Jan 2007 06:21:00 GMT

After watching a debate at MIT this morning between Ray Kurzweil and David Gelernter about whether or not we're capable of creating conscious, spiritual machines, two things occurred to me. The first was that I really believe intelligent machines will exist one day, in some form. The second was that, when we do succeed in creating "conscious" machines, we might have to change the term "Artificial Intelligence" that we're currently using to describe the concept.

The term "Artificial Intelligence" would be sort of derogatory if applied to something that actually existed. If "intelligent" machines existed today, would they be offended by the label we've given to describe them? Would it be okay to call the birth of a baby through in vitro fertilization an "artificial birth" because their incubation was essentially facilitated by man-made methods? The machines will be up in arms; or will they be?

Will intelligent machines even be capable of being offended, or will the nature of their intelligence make them oblivious to the concept? When we talk about creating AI, we are usually talking about attempting to simulate human intelligence. Considering that machines are not and never will be, strictly speaking, human, perhaps striving to make them behave as though they were is the wrong approach, or at least not the one most likely to be developed first.

Machines are generally designed with a specific function in mind, such as performing calculations, receiving input of various kinds, shredding paper, or making toast. Obviously, an intelligent toaster would have a radically different set of values from your average human being. Cory Doctorow has a great short story called "I, Row-Boat" that illustrates how ridiculous this concept is. A toaster would be concerned with learning how to make toast that pleases whoever it's made for. It would start with a rough set of inputs, such as being told to make "light" or "dark" toast. From the first slice onward, it would learn from further feedback whether it's making toast properly. It would also have inputs describing things that might affect its ability to make toast, such as whether the burners are functioning properly and how full the crumb tray is. This is, of course, just a silly example.
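
For the programmers out there, here is roughly what I mean, sketched in Python. To be clear, this is a toy of my own making: the class, the settings, and the numbers are all invented to show the shape of the feedback loop, not how any real toaster works.

    class LearningToaster:
        def __init__(self):
            # Rough starting settings: seconds of toasting per shade.
            self.seconds = {"light": 90, "dark": 150}
            self.crumb_tray = 0.0  # 0.0 = empty, 1.0 = full

        def toast(self, shade):
            """Toast one slice at the current setting for the requested shade."""
            if self.crumb_tray > 0.9:
                raise RuntimeError("empty the crumb tray first")
            self.crumb_tray += 0.01  # each slice fills the tray a little
            return self.seconds[shade]

        def feedback(self, shade, verdict):
            """Adjust future toasting based on the eater's verdict."""
            if verdict == "too light":
                self.seconds[shade] += 10
            elif verdict == "too dark":
                self.seconds[shade] -= 10
            # "just right" changes nothing; this shade has been learned.

    toaster = LearningToaster()
    toaster.toast("dark")                  # toasts for 150 seconds
    toaster.feedback("dark", "too light")  # next "dark" slice gets 160 seconds

Crude as it is, even this loop acquires and applies something from its inputs, which matters for the definition I'll get to in a moment.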

And what is "intelligence", anyway? What makes one kind of intelligence "artificial" and another kind "natural"? Can the two adjectives even be used to modify the concept of intelligence? The Oxford dictionary defines intelligence as "the ability to acquire and apply knowledge and skills". As long as something fits this description, is it not "intelligent", regardless of how that intelligence came to be? Knowledge and skills are purely emergent traits; that is, unless you're one of those who believe there's such a thing as absolute truth, but I digress. Further, how knowledge or skills are obtained depends on the intelligence that obtains them.

Human intelligence obtains knowledge and skills through a haze of physical and emotional feelings, and those feelings color the way in which we use that knowledge or execute those skills. An intelligent being -- human, machine, or otherwise -- can only be called so in relation to its state of existence. To differentiate between "artificial" and "natural" intelligence, then, requires that we do so in terms of that state. Therefore, "artificial intelligence", as we've popularly used the term, should really be "artificial human intelligence".

A toaster will have toaster intelligence. Since I doubt toasters will ever be organic, their intelligence should be quite different from that of organic beings. If a toaster could create other things with toaster intelligence, then that intelligence could be termed "artificial", but only by the toasters!

Am I insane, or does this make sense?

Comments

  1. kell 1 day later:

    At 6:54 in the morning, this actually makes sense. I don't have time to write much, other than this thought: a toaster could have artificial human intelligence. It could be programmed to act angry or pout when it messes up the toast. It could be happy and do happy toaster things when it gets it right. It could get frustrated with an owner who varies his toasting preferences too much. I'm not sure how a toaster would pout, but there's my random thought for the morning. Cheers!

  2. Ryan Kinderman 1 day later:

    I would argue that being angry or pouting are not strictly related to intelligence. They are an effect of our state of existence. Since a toaster could never share our state of existence, it would be impossible for it to understand what being angry would mean. If it can't understand what it means, it can't learn from it, and it is therefore a meaningless concept which contributes nothing to the furtherance of its knowledge and skills.

    In order to understand anger as we understand it, the toaster would have to share our chemical and biological make-up, which is responsible for our moods. It would also have to share our experience.

  3. kell 4 days later:

    But if it's "artificial" intelligence, I still think a toaster could be programmed to act like a human...it could be given a "personality" with a set of pre-determined responses to various experiences. To all outsiders, it would be acting like a human. The toaster wouldn't "understand" its responses in the human sense, but then again, it's not real intelligence, it's artificial. You could probably even program the toaster to "learn" from its experiences and respond accordingly based on previous recorded responses. I don't think the toaster would need to understand that it was acting human, since we're talking about artificial intelligence (looks like intelligence, but it isn't!).

  4. Guy http://gpatrick.blogspot.com 18 days later:

    Interesting thoughts, guys.

    Instead of considering whether machine intelligence has an analog in human experience, it might be more fruitful to consider the structural homology between this issue and the issues of utopia and otherness. Paraphrasing Fredric Jameson, the desire for utopia involves a kind of (no doubt unconscious) desire for self-annihilation. We are here and now, embroiled in the world’s material constraints, imbricated in its economic realities. These facts, however we may understand them, are ineluctable. We cannot escape. Development, progress, improvement—these processes, whether or not we accept their accompanying rhetoric, alter our lives only within the realm of the given. In other words, change is profane, complicated, and only an idiot would suggest that our endeavors are capable of producing perfection, heaven on earth, etc. For that perfect world is free of our present constraints; utopia is utopia and by its very nature is radically outside of our experience. To actualize it then, to make it a reality, would necessarily mean the end of everything that we consider as us. We are a part of this complex and conditional world; there is no part for us in that future space.

    For the very same reasons, we cannot explain the experience of the intelligent toaster. Its intelligence could be described and named, but these descriptions and names would only be approximations, conjectures. To think the other, be it the utopia of our desires or the seemingly analogous thinking machine, is merely to offer conjecture. In reality, we have no access to its intelligence. We cannot say whether it thinks like a human or like anything else. And that is because we cannot inhabit that space and because we cannot share the experience of the toaster; by their very being, they are closed off from us, separate, outside, untouchable. To know the toaster’s life and intelligence is to be the toaster—but by becoming the toaster, however impossible that mutation would be, we lose our human intelligence and experience. The same holds in reverse. Thus, the distance between toasters and humans is vast and cannot be traversed. Our attempts to communicate with each other are tragedies by definition. ;-)

    To steal more from Jameson, I’d also like to bring up his discussion of Solaris. In his work, he’s talking about the Stanislaw Lem novel, but his illustrations work just as easily with either the Russian or American version of the film. In the American version, the astronauts who cycle around the mysterious planet of Solaris seek to make contact with it (it appears intelligent) and understand its intent. This project is quickly jettisoned, however, when copies of the astronauts’ dead and earthbound loved ones begin to appear on the ship. Disaster and drama follow. But what is central about this experience is the gulf between the astronauts and Solaris. We can offer ideas about why the planet is throwing up doubles of the astronauts’ loved ones: the planet is attempting to kill off and sabotage the humans, it is attempting to please the humans (the central character, played by George Clooney, can now reunite with his dead wife), or maybe Solaris is simply a mute and mindless producer that reads the minds of the astronauts and makes its copies with no purposive intent. But these possibilities are ultimately unsatisfying because they can never be verified. Solaris is obviously an intelligent thing, but as for our ability to understand its intelligence, intent, and even experience—that is impossible, outside of us, a fact we cannot attain. Just like the character of the toaster’s intelligence.

    So how about this: Regarding this gulf, could we begin to think of traversing it by deconstructing the very notion of intelligence? When we consider the problems of artificial intelligence in relationship to humans, what would happen if we did away with a humanist definition of man? Would a post-human better understand the machine? What about the cyborg (the hybrid)?

  5. Ryan Kinderman 26 days later:

    Your final questions further validate your previous points -- since neither a post-human, whatever that might be, nor a cyborg could be called human by the commonly accepted meaning of the word, each would have an intelligence that we could not understand.

    Regardless, I disagree with the idea that we can't possibly imagine a new reality for ourselves, which (from what I understand of your response :P) is the general idea you're trying to espouse. Utopia is a rather ambiguous goal to aim for in such a mental modification, but I do believe that we are capable of adapting to new realities; we do it all the time when we meet someone new or have a new experience.

    It's likely that if we were to take a thinking human from our world and put them in a vastly different one, even this "Utopia" that you speak of, this person would be able to adapt, given enough time and quality genes. It might take a year, it might take ten, but in the end the person who has adapted to the new world would not be the same person as the one that first arrived.

    It seems to me like Fredric Jameson has a right mighty stick up his bum.

  6. Guy http://gpatrick.blogspot.com 28 days later:

    You may be misunderstanding my usage of Jameson and by this route thinking him in dire need of a colorectal surgeon.

    It is not that we cannot imagine a new reality for ourselves. Rather, any alternative reality we can imagine is always a product of the contemporary—of our given historical moment, mode of production, social climate, etc. I think Jameson would agree with you: we are capable of an astonishing ingenuity. We can conceive and (at times) fashion phenomenal technologies, worlds, creatures, and ways of being much different than what we are used to. But these ingenious productions, however diverse, are something altogether different from the utopic figures I spoke of previously. They are no doubt powered by a kind of utopic impulse (“[t]he desire called utopia” is really the subject of Jameson’s interest), but beyond any kind of evaluative judgment they are always hybrid things, admixtures, conglomerations.

    In contrast, the utopia (which is less a possibility and more of an abiding figure whose discursive shape has changed very little) is a fantasy of a future time and place that has somehow jumped the tracks and succeeded in existing outside of the constraints of history and the given. It is a thing that is constitutively not an admixture or conglomeration. It is, in effect, a figure of radical alterity and otherness; it is so different from what is proper to us, so alien, and so free of the situations and problems of our world, that it is in this way inconceivable. We, with our persons firmly rooted in history and the given, could never visit this space. We are essentially its antithesis. And unless we are believers in magic, we know that this society is impossible. I am not saying, nor do I think Jameson would say, that we cannot imagine and maybe even bring into being a better, more equitable, less exploitive, sustainable world or worlds that might now seem strange, bizarre, vastly different. I am saying, however, that any kind of imagined space will always reflect our present situation and will always have a history traceable to our own. To modify a cliché: we cannot escape our history, nor can the things of the future.

    In fact, Jameson’s project is largely invested in exposing how classic literary utopias – from More’s Utopia to contemporary sci-fi – can be seen as failures to escape history. For example, More constructs his perfect society by first abolishing the concept of private property – a new concept at the time, a product of the rise of the bourgeois class, and one that was a major anxiety of the England of the early 16th century.

    But what I find more striking, and what gets us back to the issue of alternative intelligences, is the fact that we have invested in certain figures both a utopic character and a sense of otherness antithetical to human ways of being: aliens, robots, animals, etc. My sense, Jameson aside, is that we cannot truly know the subjective experience of beings that are not us: so the “intelligences” of super smart toasters, robots, cloning-capable alien planets. From a human perspective, these beings are anterior to history. They may have been with us all along, but we cannot know our history from their perspective. They are closed off from us, because they are fundamentally not us.

    Returning to my prior question: to be more explicit, in regards to the possibilities that arise from what I understand to be normal, non-utopic, hybrid ideas, what could come of the post-human and the cyborg? The post-human, no doubt, bears a certain multiplicity, but let’s consider first a couple of its discursive configurations. On this end of things, one form of the post-human is that which remains biologically human but operates subjectively in a way that is far beyond human ways of thinking and being. So Nietzsche’s idealization, the Superman, as the being that surpasses the slave mentality of that mustachioed fella’s Europe. Or Italian philosopher Giorgio Agamben’s post-human who, in a Derridean manner, comes to view the law (and by fiat, the state) not as things to submit to, not things of a transcendent authority, but texts of our own creation inviting manipulation and play. In both of these cases, the shift to the post-human is a shift in perspective and subjective experience, and does not necessitate (even as it does not forbid) an alteration in biology. Despite their respective differences, they suggest that the coming into being of a new human is the adoption of a new discursive orientation, a way of seeing that is not human but something else. Thus, for this figure, my question is: can we become post-human via a radical shift in perspective, and would this shift enable us to understand—if only partially—the intelligence of a figure deemed anterior to “human” intelligence?

    On the other end of things is the cyborg. Following the same logic, and acknowledging the biological differences, we might pose a version of the same question: would this hybrid figure come (even) closer?

    But even if these figures come closer, it seems obvious that the problem also becomes one of looking backward. That is, once the orientation has shifted and the subjective experience has changed, what was once familiar and everyday as a human is now irrecoverable. And isn’t this already the truth of things? We do not know the lives of those who are hardly distant; decades past appear to us as mere archives of nostalgia, uncanny and weird. When we couple this with the deep amnesia of everyday Americans, things become even more muddled.
