Classical and Connectionist Cognitive Models
By ai-depot | September 2, 2002
An introduction to the classical and connectionist classifications, discussing the advantages and disadvantages of each approach, and highlighting the difficulties faced in deciding which approach to adopt during development.
Written by Linden Hutchinson.
Introduction
The philosophical debate between Classicalists and Connectionists has been raging for over a decade. Ever since Fodor and Pylyshyn (1988) discounted Connectionism as a valid alternative to the Language of Thought (LOT), viewing it instead as merely a possible implementation of LOT, philosophers have argued ceaselessly over the merits of their argument. In AI terms, the debate is analogous to that between symbolic and sub-symbolic approaches. From the point of view of AI programmers and developers, the philosophical debate is largely inconsequential, especially when one considers that neither classical nor connectionist approaches have solved many of the issues of cognition. However, understanding the theoretical pros and cons of each approach is essential if AI practitioners are not to be blindly led by proponents of a particular methodology and ultimately fail in designing an appropriate system. This essay is an introduction to the classical and connectionist classifications, discussing the advantages and disadvantages of each, and highlighting the difficulties faced in deciding which approach to adopt.
Representationalism
Both the classical and connectionist views of cognition are rooted in representationalism. This is the idea that there are mental states within the brain that correspond to the physical states of the world. A representation is an idea or a concept of some proposition, for instance the proposition that “The Sky is Blue”. When we are thinking of the concept “The Sky is Blue”, our minds are in a certain state that corresponds to that physical reality. However, the representation is rarely manifested in the simple form of a bare idea; rather, it is attached to an attitude. For example, one may believe, desire or hope that the sky is blue today. Further to this, there are causal relations between attitudes, so perceiving that the sky is blue tends to cause believing that the sky is blue, whilst doubting it is so tends to interact with wishing it were so (Lormand, 1991). In this sense, the idea does not disappear when one's attitude changes, and an idea is typically involved in many different attitudes.
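This structure, in which a single idea participates in many attitudes and attitudes causally influence one another, can be made concrete with a small sketch. The following Python fragment is purely illustrative; the Attitude class and the single causal rule it encodes are assumptions made for this example, not part of any established model:

    # A minimal sketch of the structure just described. The Attitude class and
    # the causal rule below are illustrative assumptions, not an established model.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Attitude:
        kind: str         # e.g. "perceive", "believe", "desire", "doubt"
        proposition: str  # the idea the attitude is directed at

    def apply_causal_relations(mind):
        # Lormand's example: perceiving a proposition tends to cause believing it.
        derived = set(mind)
        for attitude in mind:
            if attitude.kind == "perceive":
                derived.add(Attitude("believe", attitude.proposition))
        return derived

    mind = {Attitude("perceive", "the sky is blue"),
            Attitude("desire", "the sky is blue")}
    mind = apply_causal_relations(mind)

    # The same idea, "the sky is blue", now figures in three attitudes.
    print(sorted(a.kind for a in mind))   # ['believe', 'desire', 'perceive']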
The Physical Symbol System Hypothesis
Representationalism as defined above suffices to explain how the mind understands the world and forms an internal representation of it, but the question remains as to how an entity translates world events into thoughts, and how thoughts are translated back into the physical world through actions. In other words, what is the principle behind thinking and acting? This is known as the mind-body problem (Lormand, 1991). In AI, the Physical Symbol System Hypothesis (Newell and Simon, 1976), or PSSH, has been the basis for many of the intelligent systems designed over the last twenty years. The PSSH treats thinking and acting as a computational process, stating that:
“A physical symbol system has the necessary and sufficient means for general intelligent action.”
The PSSH is rooted in a number of explicit assumptions (French and Thomas, 2001):
- The world can be cut up into discrete objects, each of which can be designated by a symbol;
- Each symbol refers to an object (e.g. cat, mouse, cheese), an action (e.g. catch, bite, eat), or a state of the world (e.g. stealth, dangerous, terminal);
- Each string of symbols (i.e. each expression) has an interpretation in the real world (e.g. cats like to eat mice);
- Rules and an underlying “logic of thought” govern the manipulation of the symbols and expressions in the system;
- Cognitive agents are, basically, digital computers.
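To make these assumptions concrete, here is a minimal illustrative sketch in Python; every name in it (the symbol sets, the expression encoding, the predator_of rule) is a hypothetical choice made for this example rather than anything prescribed by the PSSH itself:

    # An illustrative encoding of the assumptions listed above. The symbol sets,
    # the expression format and the predator_of rule are hypothetical choices.
    OBJECTS = {"cat", "mouse", "cheese"}   # symbols designating discrete objects
    ACTIONS = {"catch", "bite", "eat"}     # symbols designating actions

    # An expression is a structured string of symbols...
    expression = ("eat", "cat", "mouse")

    # ...and each expression has an interpretation in the real world.
    INTERPRETATIONS = {
        ("eat", "cat", "mouse"): "cats like to eat mice",
    }

    # A rule of the underlying "logic of thought" manipulates symbols purely
    # by their form, without ever consulting the world they designate.
    def predator_of(predator, prey):
        assert predator in OBJECTS and prey in OBJECTS
        return ("eat", predator, prey)

    print(INTERPRETATIONS[predator_of("cat", "mouse")])   # cats like to eat mice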
One consequence of this hypothesis is that the human brain is understood as a symbol-processing device. An example of the PSSH in action is the production system: a system that compares inputs (states of the world) against a list of If/Then rules hard-coded into the system in advance. These rules are based on a logic language, such as first-order predicate logic, which uses symbols to build expressions. For example, a hypothetical medical expert system might have the following rule:
IF patient HAS chest pain AND low blood pressure THEN patient HAD heart attack
The production system compares the user's inputs against its list of rules and determines an outcome. Rules of logic can also be used to derive new beliefs about the state of the world. For example, given the rule “When I fall in the lake I get wet” and the fact that I have fallen in the lake, the system can conclude the new belief that I am wet.
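A minimal forward-chaining loop makes this behaviour concrete. The sketch below is a toy illustration under simplifying assumptions (facts are plain strings, each rule is a condition-set/conclusion pair); it is not the design of any actual expert system:

    # A toy forward-chaining production system. The fact and rule encodings
    # are simplifying assumptions made for this sketch.
    RULES = [
        (frozenset({"chest pain", "low blood pressure"}), "heart attack"),
        (frozenset({"fell in lake"}), "wet"),
    ]

    def forward_chain(facts):
        # Repeatedly fire any rule whose conditions all match the known facts,
        # adding its conclusion as a new belief, until nothing more is derived.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fell in lake"}))
    # {'fell in lake', 'wet'}  (set order may vary)
    print(forward_chain({"chest pain", "low blood pressure"}))
    # {'chest pain', 'low blood pressure', 'heart attack'}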
The production system therefore implements a symbolic model that makes theoretical commitments at the level of production rules, and defines a computationally complete system at that level (Lewis, 1999).