The cohort model in psycholinguistics and neurolinguistics is a model of lexical retrieval first proposed by William Marslen-Wilson in the late 1980s.[1] It attempts to describe how visual or auditory input (i.e., hearing or reading a word) is mapped onto a word in a hearer's lexicon. According to the model, when a person hears speech segments in real time, each speech segment "activates" every word in the lexicon that begins with that segment, and as more segments are added, more words are ruled out, until only one word is left that still matches the input.
Background Information
Before discussing the cohort model itself, it is helpful to define a few terms used when talking about lexical retrieval. The lexicon is the store of words in a person's mind;[2] it holds a person's vocabulary and is similar to a mental dictionary. A lexical entry is all the information stored about a particular word, and lexical storage is the way those items are organized for efficient retrieval. Finally, lexical access is the process by which people retrieve information from the mental lexicon.
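These notions can be pictured with a small sketch in code. The representation below is purely illustrative; the field names, the two entries, and the lexical_access helper are assumptions made here for clarity, not part of any cited model:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class LexicalEntry:
    """One entry in the mental lexicon: the information stored about a word."""
    phonemes: tuple[str, ...]   # sound form, e.g. ("k", "ae", "n", "d", "l")
    spelling: str               # written form
    meaning: str                # gloss standing in for semantic content

# The lexicon itself: the store of entries, indexed here by spelling.
LEXICON: dict[str, LexicalEntry] = {
    "candle": LexicalEntry(("k", "ae", "n", "d", "l"), "candle", "wax light"),
    "candy":  LexicalEntry(("k", "ae", "n", "d", "i"), "candy", "sweet food"),
}

def lexical_access(spelling: str) -> LexicalEntry | None:
    """Lexical access: retrieving the stored information for a word."""
    return LEXICON.get(spelling)

print(lexical_access("candle"))
```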
Model
The cohort model is based on the concept that auditory or visual input begins to stimulate the relevant neurons as soon as it enters the brain, rather than only at the end of a word.[3] This was demonstrated in the 1980s through experiments with speech shadowing, in which subjects listened to recordings and were instructed to repeat aloud exactly what they heard, as quickly as possible; Marslen-Wilson found that subjects often started to repeat a word before it had finished playing, which suggested that the word in the hearer's lexicon was activated before the entire word had been heard.[4] Findings such as these led Marslen-Wilson to propose the cohort model in 1987.[5]
There are three main stages in this model. Under this model, auditory lexical retrieval begins when the first one or two speech segments, or phonemes, reach the hearer's ear, at which point the mental lexicon activates every possible word that begins with that speech segment.[6] This occurs during the "access stage," and the set of all possible words is known as the cohort.[7] The words that are activated by the speech signal but are not the intended word are often called "competitors";[8] the more competitors there are, the harder it is to identify the target word.[9] As more speech segments enter the ear and stimulate more neurons, competitors that no longer match the input are "kicked out" or decrease in activation.[6][10] The processes by which words are activated and competitors rejected in the cohort model are frequently called "activation and selection" or "recognition and competition." These processes continue until an instant, called the recognition point,[6] at which only one word remains activated and all competitors have been kicked out. This is also known as the uniqueness point, and it is the point at which the most processing occurs.[7] The selection stage occurs when only one word is left from the set.[7]
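The activation-and-selection cycle described above can be illustrated with a short sketch. The toy phonemic lexicon and the function names below are assumptions made for illustration; they are not part of the model's formal specification:

```python
# Toy lexicon of phoneme sequences (illustrative spellings of sounds, not IPA).
LEXICON = {
    "candle": ["k", "ae", "n", "d", "l"],
    "candy":  ["k", "ae", "n", "d", "i"],
    "can":    ["k", "ae", "n"],
    "cattle": ["k", "ae", "t", "l"],
}

def cohort_of(heard):
    """Access/selection: the words whose beginnings still match the input so far."""
    return [word for word, phones in LEXICON.items()
            if phones[:len(heard)] == heard]

def recognize(segments):
    """Feed in segments one at a time and report the cohort after each one."""
    heard = []
    for segment in segments:
        heard.append(segment)
        cohort = cohort_of(heard)
        print(heard, "->", cohort)
        if len(cohort) == 1:
            return cohort[0]   # recognition (uniqueness) point reached
    return None                # input ended with more than one candidate active

recognize(["k", "ae", "n", "d", "l"])   # narrows to "candle" at the final segment
```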
For example, in the auditory recognition of the word "candle," the following steps take place. When the hearer hears the first two phonemes /k/ and /æ/, he or she activates the word "candle," along with competitors such as "candy," "can," "cattle," and numerous others. Once the phoneme /n/ is added, "cattle" is kicked out; with /d/, "can" is kicked out; and this process continues until the recognition point, the final /l/ of "candle," is reached.[11] The recognition point need not always be the final phoneme of the word: the recognition point of "slander," for example, occurs at the /d/ (since no other English words begin "sland-");[4] all competitors for "spaghetti" are ruled out as early as /spəɡ/;[11] Jerome Packard has demonstrated that the recognition point of the Chinese word huŏchē ("train") occurs before huŏch-;[12] and a landmark study by Pienie Zwitserlood demonstrated that the recognition point of the Dutch word kapitein ("captain") is at the vowel before the final /n/.[13]
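Under the same simplifying assumptions, a word's recognition (uniqueness) point can be located by searching for the shortest prefix that no other word in a given word list shares. The sketch below uses spellings as a rough stand-in for phoneme strings, and the toy word list is an assumption, so the exact prefixes it reports depend entirely on that list:

```python
def uniqueness_point(word, lexicon):
    """Return the shortest prefix of `word` shared by no other word in `lexicon`,
    i.e. the point at which all competitors have been ruled out."""
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        competitors = [w for w in lexicon if w != word and w.startswith(prefix)]
        if not competitors:
            return prefix
    return word  # the whole word is needed (or it is a prefix of another word)

words = ["slander", "slang", "slant", "spaghetti", "spank", "candle", "candy", "candid"]
print(uniqueness_point("slander", words))    # "sland" with this toy list
print(uniqueness_point("spaghetti", words))  # "spag" with this toy list
print(uniqueness_point("candle", words))     # "candl" with this toy list
```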
Since its original proposal, the model has been adjusted to allow for the role that context plays in helping the hearer rule out competitors,[6] and the fact that activation is "tolerant" to minor acoustic mismatches that arise because of coarticulation (a property by which language sounds are slightly changed by the sounds preceding and following them).[14]
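One common way to capture such tolerance in a sketch is to replace all-or-nothing elimination with graded activation, so that a single mismatched segment lowers a candidate's score rather than removing it outright. The scoring scheme below is an assumption chosen for illustration, not the specific mechanism of the revised model:

```python
def activation(candidate, heard):
    """Graded match: each matching segment raises a candidate's activation,
    each mismatching segment lowers it, so one distortion is not fatal."""
    score = 0.0
    for c, h in zip(candidate, heard):
        score += 1.0 if c == h else -0.5
    return score

heard = list("cendle")  # "candle" with a slightly distorted vowel
for word in ["candle", "candy", "cattle"]:
    print(word, activation(word, heard))
# "candle" still scores highest despite the mismatch in its second segment.
```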
Experimental evidence
Much evidence in favor of the cohort model has come from priming studies, in which a "priming word" is presented to a subject, closely followed by a "target word," and the subject is asked to identify whether the target word is a real word; the theory behind the priming paradigm is that if a word is activated in the subject's mental lexicon, the subject will respond more quickly to the target word.[15] If the subject does respond more quickly, the target word is said to be "primed" by the priming word. Several priming studies have found that when a stimulus that does not reach the recognition point is presented, numerous target words are all primed, whereas if a stimulus past the recognition point is presented, only one word is primed. For example, Pienie Zwitserlood's study of Dutch compared the words kapitein ("captain") and kapitaal ("capital" or "money"); in the study, the stem kapit- primed both boot ("boat," semantically related to kapitein) and geld ("money," semantically related to kapitaal), suggesting that both lexical entries were activated; the full word kapitein, on the other hand, primed only boot and not geld.[13]
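The prediction the cohort model makes in this paradigm can be summarized in a few lines: a semantic associate should be primed whenever the word it is related to is still in the cohort of the stimulus heard so far. The two-word lexicon and the association table below are illustrative assumptions based on Zwitserlood's materials as described above:

```python
LEXICON = ["kapitein", "kapitaal"]
ASSOCIATES = {"kapitein": "boot", "kapitaal": "geld"}  # illustrative semantic links

def primed_targets(stimulus):
    """Targets predicted to be primed: associates of every word still in the cohort."""
    cohort = [w for w in LEXICON if w.startswith(stimulus)]
    return {ASSOCIATES[w] for w in cohort}

print(primed_targets("kapit"))     # {'boot', 'geld'}: both entries still active
print(primed_targets("kapitein"))  # {'boot'}: kapitaal has been ruled out
```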
Later experiments refined the model. For example, some studies showed that "shadowers" (subjects who listen to auditory stimuli and repeat them as quickly as possible) could not shadow as quickly when words were jumbled up so that they did not mean anything; those results suggested that sentence structure and speech context also contribute to the process of activation and selection.[4]
Other applications
Text-messaging programs on cellular phones, the "autocomplete" features of Microsoft Word and Google, and other text-input programs (such as commercial GPS systems) use a similar method of narrowing a set of candidate words as input arrives.
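In such systems the candidate set shrinks in much the same way as a cohort: only the entries consistent with what has been typed so far are kept. The sketch below is a minimal, hypothetical illustration; the vocabulary list and function name are assumptions, not how any particular product is implemented:

```python
VOCAB = ["cohort", "cognition", "cognitive", "candle", "candy"]

def autocomplete(typed, limit=3):
    """Suggest vocabulary items consistent with the characters typed so far,
    narrowing the candidate set as each new character arrives."""
    return [w for w in VOCAB if w.startswith(typed)][:limit]

print(autocomplete("co"))    # ['cohort', 'cognition', 'cognitive']
print(autocomplete("cogn"))  # ['cognition', 'cognitive']
```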
References
1. Packard, 287.
2. The Free Dictionary.
3. Altmann, 71.
4. Altmann, 70.
5. Marslen-Wilson, W. (1987). "Functional parallelism in spoken word recognition." Cognition, 25, 71-102.
6. Packard, 288.
7. Harley, T. A. (2009). Psychology of Language: From Data to Theory. New York: Psychology Press.
8. Ibrahim, Raphiq (2008). "Does Visual and Auditory Word Perception have a Language-Selective Input? Evidence from Word Processing in Semitic languages." The Linguistics Journal 3 (2).
9. Goldwater, Sharon (2010).
10. Altmann, 74.
11. Brysbaert, Marc, and Ton Dijkstra (2006). "Changing views on word recognition in bilinguals." In Bilingualism and Second Language Acquisition, eds. Morais, J. & d'Ydewalle, G. Brussels: KVAB.
12. Packard, 289.
13. Altmann, 72.
14. Altmann, 75.
15. Packard, 295.
- Altmann, Gerry T.M. (1997). "Words, and how we (eventually) find them." The Ascent of Babel: An Exploration of Language, Mind, and Understanding. Oxford: Oxford University Press. pp. 65-83.
- Packard, Jerome L (2000). "Chinese words and the lexicon." The Morphology of Chinese: A Linguistic and Cognitive Approach. Cambridge: Cambridge University Press. pp. 284-309.
This page uses Creative Commons Licensed content from Wikipedia.