Classical and Connectionist Cognitive Models

By ai-depot | September 2, 2002

The Language of Thought

The PSSH is not the only hypothesis about computational architectures, but it is the most prevalent in AI. The Language of Thought (Fodor, 1975), or LOT, can be regarded as a supra-hypothesis to the PSSH. It states that a valid cognitive model must possess certain characteristics in order for an agent to carry out intelligent action. The PSSH is a particular example that ensures a model will have those characteristics (though it must be understood that the two were developed independently, and any relation between them was only interpreted after publication).

A simple way of understanding LOT is to consider natural language. In the same way that ideas and beliefs are communicated via syntactic structures with a semantic meaning, Fodor believes that the brain uses a mental language in order to interpret and manipulate information:

“…[The] emphasis upon the syntactical character of thought suggests a view of cognitive processes in general — including, for example, perception, memory and learning — as occurring in a language-like medium, a sort of ‘language of thought’…”

So what exactly does a LOT consist of? Basically, a LOT uses combinatorial syntax and semantics in forming mental representations (Fodor and Pylyshyn, 1988). Again, it is useful to look at natural language in order to understand what this means. Syntax in natural language concerns the individual parts of a sentence, such as ‘John’, ‘loves’ and ‘Mary’. Each of these parts is a syntactic component, and the components combine to form a sentence. The syntax and grammar are important, because these are essentially the rules of the representation system. So, for example, ‘loves Mary John’ is syntactically incorrect because it disobeys the rules of the model. What is understood by the sentence (i.e. that ‘John loves Mary’) is referred to as its semantic meaning. To make things slightly more complicated, the individual syntactic components, such as ‘John’ and ‘Mary’, also have semantic meanings of their own. This means that the full sentence is a syntactic combination, in that it is well formed according to the rules of the representation structure, and also a semantic one, in that what is understood by ‘John’, ‘loves’ and ‘Mary’ combines to form a whole sentence with a new semantic meaning.
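
To make this compositional picture concrete, here is a minimal sketch in Python, assuming a toy three-word lexicon; the names LEXICON, well_formed and meaning are illustrative assumptions, not part of Fodor’s account. It shows how the truth of ‘John loves Mary’ is computed from the semantic contents of its parts together with their constituent structure:

    # A minimal compositional sketch (hypothetical names throughout).
    # Atomic symbols and their semantic contents: individuals for the
    # names, and a relation (a set of ordered pairs) for the verb.
    LEXICON = {
        "John": "john",
        "Mary": "mary",
        "loves": {("john", "mary")},
    }

    def well_formed(sentence):
        """Syntax: only the Subject-Verb-Object word order is grammatical."""
        if len(sentence) != 3:
            return False
        subject, verb, obj = sentence
        return (subject in ("John", "Mary")
                and verb == "loves"
                and obj in ("John", "Mary"))

    def meaning(sentence):
        """Semantics: the truth of the whole is a function of the parts
        and of their constituent structure (who is subject, who is object)."""
        if not well_formed(sentence):
            raise ValueError("syntactically ill-formed: %r" % (sentence,))
        subject, verb, obj = sentence
        return (LEXICON[subject], LEXICON[obj]) in LEXICON[verb]

    print(meaning(["John", "loves", "Mary"]))   # True
    print(meaning(["Mary", "loves", "John"]))   # False: same parts, new structure
    # meaning(["loves", "Mary", "John"])        # raises: disobeys the syntax rules

Swapping the subject and object changes the truth value even though the same three atomic symbols are used, which is exactly the sense in which structure carries meaning.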

LOT models therefore obey certain principles (Fodor and Pylyshyn, 1988):

  • There is a distinction between structurally atomic and structurally molecular representations. This refers to the fact that some concepts or objects are made up of a number of sub-components, whilst others are not. For instance, a computer is made up of a number of components that combine to give the object commonly found on the office desk. For efficiency, a computer is not usually referred to as the processor-memory-harddisk-graphicscard-soundcard-modem-keyboard-mouse-monitor device. The concept is the same for a LOT, though the exact nature of the atomic and molecular relationship in the brain’s representational system (such as how many atomic symbols there are, and what those atomic symbols are) is not yet fully understood.
  • Structurally molecular representations have syntactic constituents that are themselves either structurally molecular or structurally atomic. In other words, the molecular representation of a computer is itself made up of other molecular representations (such as the CPU, modem and motherboard) as well as atomic representations.
  • The semantic content of a (molecular) representation is a function of the semantic contents of its syntactic parts, together with its constituent structure. As discussed above, the semantic meaning of ‘John loves Mary’ is derived from what is understood by the individual elements ‘John’, ‘loves’ and ‘Mary’. The order in which these are put together also helps to determine the meaning of the expression.
  • Structure sensitivity of process. This means that the structure of the cognitive model defines how mental states are transformed. A symbolic architecture uses the rules of syntax to transform an expression that satisfies a given structural description into another expression that also satisfies that structural description. For example, a cognitive model may infer P from P&Q (see the sketch after this list).
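
To illustrate the last principle, the following is a minimal sketch in Python, assuming expressions are encoded as nested tuples such as ("AND", "P", "Q"); the encoding and the infer function are illustrative assumptions rather than anything prescribed by Fodor and Pylyshyn. The rule matches only on the structural description of an expression, so it applies equally whether the constituents are atomic or molecular:

    # A minimal sketch of a structure-sensitive inference rule
    # (conjunction elimination); the tuple encoding is an assumption.
    def infer(expression):
        """From an expression of the form P&Q, infer P and Q.

        The rule inspects only the structural description of the
        expression: any tuple ("AND", left, right) licenses the
        inference, whatever its constituents happen to be."""
        if isinstance(expression, tuple) and expression[0] == "AND":
            _, left, right = expression
            return [left, right]
        return []  # no structural match, so no inference is licensed

    print(infer(("AND", "P", "Q")))                # ['P', 'Q']
    print(infer(("AND", ("AND", "P", "Q"), "R")))  # molecular constituents match too
    print(infer("P"))                              # []: atoms license nothing here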

The classical view is that the combinatorial structure of mental representations has a corresponding structural relation among the physical properties of the brain – hence the Physical Symbol System Hypothesis. Fodor and Pylyshyn (1988) write that “the physical properties onto which the structure of the symbols are mapped are the very properties that cause the system to behave as it does” (original emphasis). The LOT hypothesis explains how a physical system can implement systematic relations among attitudes, in much the same way that representationalism explains the relations between different attitudes toward the same idea.

Objections to the Classical Model

The classical view of cognition helps us to understand how the mind-body problem can be solved – that is, how inferential processes can be carried out within a physical system such as a human being. It has the advantage of being computational in nature, which makes it somewhat analogous to logic processing and relatively easy to implement in a computer. It is also intuitively attractive: it is well understood that humans like to deal with images and symbols, and constantly attempt to classify and categorise the world around them. There are, however, objections to symbolic models.

The first objection concerns the essentially serial nature of classical symbol systems. For inference to be quick, an extremely fast computer is needed to search through all of its symbolic relationships and expressions. This is significant because of the expert phenomenon: as an expert acquires more experience, and therefore more symbols and causal relationships between those symbols, he also becomes noticeably quicker at a given task. This kind of ability is difficult to explain using symbolic systems. In fact, as more causal links are established, the complexity of the search grows combinatorially, since new symbolic relationships must propagate through an ever-increasing number of combinations. Additionally, one must ask what the exact relationship between the body and mind is – can a symbolic system account for coordination? Attempts to build robots with an inbuilt representation of the world move incredibly slowly, despite their processors switching many times faster than the neurons of the human brain.

Another consideration is how symbolic models account for intuition (the “aha” factor as a leap of deduction is made). If inference is simply a case of computation, then why can conclusions be reached without fully understanding the path from problem to solution? With symbol systems, determining how an outcome was reached should be a matter of tracing back through the path of deduction from the conclusion to the initial problem, but the “aha” phenomenon precludes this method.

Thirdly, symbol systems tend to be very brittle. If a symbol system is provided with only partial information, or with information slightly different from what it considers normal, it usually fails utterly to provide a meaningful answer. This is in stark contrast to the human mind, which can deal with all manner of partial, novel, or even contradictory information with only a gradual decline in effectiveness.
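
A toy example makes the brittleness point, assuming a hypothetical rule base keyed on exact symptom sets (the RULES table and diagnose function are invented for illustration). The system answers correctly on exactly the inputs it was built for and fails utterly on anything partial or novel, where a human would degrade gracefully:

    # A minimal sketch of brittleness: a rule base keyed on exact
    # symptom sets (RULES and diagnose are invented for illustration).
    RULES = {
        frozenset(["fever", "cough", "fatigue"]): "flu",
        frozenset(["sneezing", "runny-nose"]): "cold",
    }

    def diagnose(symptoms):
        """Exact symbolic match: anything short of a perfect match fails."""
        return RULES.get(frozenset(symptoms), "no conclusion")

    print(diagnose(["fever", "cough", "fatigue"]))   # 'flu'
    print(diagnose(["fever", "cough"]))              # 'no conclusion': partial input
    print(diagnose(["fever", "cough", "headache"]))  # 'no conclusion': novel symptom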
