AI: The Technological Trickster

By ai-depot | October 26, 2002

AI in the Social Sciences

Some researchers have noticed that advances in thought often result not from asking the correct questions, but rather from learning to formulate questions in such a way that they can be answered with some degree of objectivity and precision. As Thomas Kuhn noted, this process sometimes results in new paradigms replacing older ones.

This paper will examine current knowledge about what the brain is doing when it thinks, compare and contrast that with what machines do when we consider them to be thinking, and see whether the interaction sheds any light on what is happening when meaning is somehow developed from material forms interacting in complex systems. Most present studies have assumed that meaning is derived from matter in some way that is not currently understood. My hypothesis assumes that meaning precedes matter, at least in mental representation, and that the thought process attaches symbol to meaning rather than the reverse.

To avoid confusion, I will use the term “brain” to refer to the evolved organic information processor we find within the bodies of evolved organisms, and the term “computer” to refer to the mechanical information-processing version of the same, usually found outside the body.

In 1979, Hubert L. Dreyfus published the revised edition of his book What Computers Can’t Do. He was hesitant to say computers would never be able to accomplish all the things brains can do, but he did attempt to categorize the differences. He suggested “…no one has any idea how to program needs into a machine…” (Dreyfus, p. 298) but expressed optimism that if it ever becomes possible, we would, and should, do so.

In the past fifteen years, much progress has been made in developing computers that do some of the things Dreyfus predicted, such as playing chess at master level and analyzing large amounts of information in everything from particle accelerators to medical diagnoses. But in many of the things human brains are capable of, what Dreyfus calls Area IV problem solving, where no known algorithms can be made to operate (i.e., solving riddles and open-structured problems, translating a natural language, or recognizing varied and distorted patterns) (Dreyfus, p. 292), apparently little has changed.

It is possible we have reached the limits of what computers are able to do in mimicking brain activity. Increases in quantity and complexity beyond a certain level do not always result in higher degrees of knowledge. Whether computers can think or not, it is reasonable to assume they do what they do quite differently from the way humans and other animals think. I will investigate these differences and see if we might make some generalizations about them.

An important sub-hypothesis concerns how thinking machines (or machines that mimic thinking, for those who prefer the species-centric position) may aid and enhance, or inhibit, human thought and actions. Will our dependence on computers result in the loss of certain human abilities that are important to our survival? Perhaps the Year 2000 glitch will provide a working empirical model.
