Soar (also spelled SOAR) is a symbolic cognitive architecture created by John Laird, Allen Newell, and Paul Rosenbloom at Carnegie Mellon University. It is both a view of what cognition is and an implementation of that view as a computer programming architecture for Artificial Intelligence (AI). Since its beginnings in 1983 and its presentation in a 1987 paper, it has been widely used by AI researchers to model different aspects of human behavior.

The main goal of the Soar project is to handle the full range of capabilities of an intelligent agent, from highly routine tasks to extremely difficult, open-ended problems. For that to happen, according to the view underlying Soar, the system needs to be able to create representations and use appropriate forms of knowledge (such as procedural, declarative, episodic, and possibly iconic). Soar is therefore intended to address a broad collection of the mechanisms of the mind. Also underlying the Soar architecture is the view that a symbolic system is necessary and sufficient for general intelligence (see the brief comment on neats versus scruffies); this is known as the physical symbol system hypothesis. The view of cognition underlying Soar is tied to the psychological theory expressed in Allen Newell's book, Unified Theories of Cognition.

Although the ultimate goal for Soar is to achieve general intelligence, there is no claim that this goal has already been reached. Advocates of the system recognize that Soar is still missing some important aspects of intelligence. Projects are currently underway to add episodic and semantic memories to Soar, as well as support for emotions. Additional missing capabilities include the ability to automatically create new representations on its own, for example through hierarchical clustering.

Soar is based on a production system, i.e. it uses explicit production rules to govern its behaviour (these are roughly of the form "if... then...", as also used in expert systems). Problem solving can be roughly described as a search through a problem space (the collection of different states that can be reached by the system at a particular time) for a goal state (which represents the solution to the problem). This is implemented by searching for the states that bring the system gradually closer to its goal. Each move consists of a decision cycle, which has an elaboration phase (in which a variety of pieces of knowledge bearing on the problem are brought into Soar's working memory) and a decision procedure (which weighs what was found in the elaboration phase and assigns preferences to ultimately decide the action to be taken).
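As a rough illustration of this decision cycle, the Python sketch below matches a handful of "if... then..." rules against a toy working memory (elaboration) and then selects the proposed operator with the strongest preference (decision). The names (Rule, decision_cycle, the numeric preference scheme) are simplified assumptions for illustration only, not Soar's actual rule language or API.

```python
# Minimal, illustrative sketch of a production-system decision cycle in the
# spirit of Soar. All names and structures here are hypothetical and greatly
# simplified; they do not reproduce Soar's internals.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    """An 'if ... then ...' production: a condition over working memory,
    the operator it proposes, and a numeric preference."""
    name: str
    condition: Callable[[dict], bool]
    proposed_operator: str
    preference: float

def decision_cycle(working_memory: dict, rules: list[Rule]) -> Optional[str]:
    # Elaboration phase: fire every rule whose condition matches the current
    # contents of working memory, collecting the operators they propose.
    proposals = [r for r in rules if r.condition(working_memory)]

    if not proposals:
        return None  # no knowledge applies: an impasse

    # Decision procedure: weigh the collected preferences and pick the
    # operator with the strongest support.
    best = max(proposals, key=lambda r: r.preference)
    return best.proposed_operator

# Example: two rules proposing moves that bring the state closer to the goal.
rules = [
    Rule("move-right", lambda wm: wm["x"] < wm["goal_x"], "right", 0.9),
    Rule("move-left",  lambda wm: wm["x"] > wm["goal_x"], "left",  0.9),
]
print(decision_cycle({"x": 2, "goal_x": 5}, rules))  # -> "right"
```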

If the decision procedure just described is not able to determine a unique course of action, Soar may use different strategies, known as weak methods, to resolve the impasse. These methods are appropriate to situations in which knowledge is not abundant. Some examples are means-ends analysis (which calculates the difference between each available option and the goal state) and a form of hill climbing. When a solution is found by one of these methods, Soar uses a learning technique called chunking to transform the course of action taken into a new rule. The new rule can then be applied whenever Soar encounters the same situation again (that is, there will no longer be an impasse).
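Continuing in the same illustrative spirit, the sketch below shows how an impasse might be resolved by a simple hill-climbing heuristic and how the outcome could then be "chunked" into a new rule for later reuse. The helper names and the rule format are assumptions for the example, not Soar's actual chunking mechanism.

```python
# Hedged sketch: resolve an impasse with a weak method (hill climbing on a
# distance-to-goal heuristic), then summarize the result as a new rule.
# Everything here is an illustrative assumption, not Soar internals.

def hill_climb(state: int, goal: int, moves: dict[str, int]) -> str:
    """Pick the move that most reduces the distance to the goal state."""
    return min(moves, key=lambda m: abs((state + moves[m]) - goal))

def chunk(state: int, goal: int, chosen_move: str) -> dict:
    """Record the sub-problem's outcome as a new if-then rule, so the same
    situation no longer causes an impasse."""
    return {"if": {"state": state, "goal": goal}, "then": chosen_move}

moves = {"right": +1, "left": -1}
learned_rules = []

# Impasse: no existing rule covers state=2, goal=5, so fall back to the
# weak method, then learn a chunk from its outcome.
choice = hill_climb(2, 5, moves)
learned_rules.append(chunk(2, 5, choice))
print(choice, learned_rules)
# right [{'if': {'state': 2, 'goal': 5}, 'then': 'right'}]
```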

ACT-R, a cognitive architecture developed by John R. Anderson, operates on similar principles. Other cognitive architectures include CLARION, ICARUS, DUAL, and Psi.
