Psychology Wiki




Risk

Risk perception is the subjective judgment that people make about the characteristics and severity of a risk. The phrase is most commonly used in reference to natural hazards and threats to the environment or health, such as nuclear power. Several theories have been proposed to explain why different people make different estimates of the dangerousness of risks. Three major families of theory have been developed: psychological approaches (heuristics and the psychometric paradigm), anthropological/sociological approaches (Cultural Theory), and interdisciplinary approaches (the Social Amplification of Risk Framework).

Early Theories

The study of risk perception arose out of the observation that experts and lay people often disagreed about how risky various technologies and natural hazards were.

The mid-1960s saw the rapid rise of nuclear technologies and the promise of clean and safe energy. However, public perception shifted against this new technology, driven by fears of both long-term damage to the environment and immediate disasters creating radioactive wastelands. The scientific and governmental communities asked why public perception ran against the use of nuclear energy when the scientific experts were declaring it safe. The problem, from the experts' perspective, was a difference between scientific facts and an exaggerated public perception of the dangers [1].

A key early paper was written in 1969 by Chauncey Starr.[2] Starr used a revealed preference approach to find out what risks are considered acceptable by society. He assumed that society had reached equilibrium in its judgment of risks, so whatever risk levels actually existed in society were acceptable. His major finding was that people will accept risks roughly 1,000 times greater if they are voluntary (e.g. driving a car) than if they are involuntary (e.g. a nuclear disaster).

This early approach assumed that individuals behave rationally, weighing information before making a decision, and that exaggerated fears therefore stem from inadequate or incorrect information. Implied in this assumption is that additional information can help people understand true risk and hence lessen their sense of danger [3]. Although researchers in the engineering school pioneered risk perception research by adapting theories from economics, the approach has proved of little use in practical settings: numerous studies have rejected the belief that additional information, alone, will shift perceptions [4].

Psychology Approach

The psychology approach began with research into how people process information. These early works maintain that people use cognitive heuristics to sort and simplify information, which leads to biases in comprehension. Later work built on this foundation and became the psychometric paradigm. This approach identifies numerous factors that influence individual perceptions of risk, including dread, newness, stigma, and others [5].

Heuristics and Biases

The earliest work on heuristics and biases was done by psychologists Daniel Kahneman and Amos Tversky, who performed a series of gambling experiments to see how people evaluate probabilities. Their major finding was that people use a number of heuristics to evaluate information. These heuristics are usually useful shortcuts for thinking, but they may lead to inaccurate judgments in some situations -- in which case they become cognitive biases.

  • The Availability heuristic: events that can be more easily brought to mind or imagined are judged to be more likely than events that cannot easily be imagined.
  • The Anchoring heuristic: people will often start with one piece of known information and then adjust it to create an estimate of an unknown risk -- but the adjustment will usually not be big enough.
  • Asymmetry between gains and losses: People are risk-averse with respect to gains, preferring a sure thing over a gamble with a higher expected utility but which presents the possibility of getting nothing. On the other hand, people will be risk-seeking about losses, preferring to hope for the chance of losing nothing rather than taking a sure, but smaller, loss (e.g. insurance).
  • Threshold effects: People prefer to move from uncertainty to certainty over making a similar gain in certainty that does not lead to full certainty. For example, most people would choose a vaccine that reduces the incidence of disease A from 10% to 0% over one that reduces the incidence of disease B from 20% to 10%.
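The gain/loss asymmetry above can be illustrated with a small numeric sketch. The value function and its parameters come from Tversky and Kahneman's later (1992) prospect theory estimates; the specific dollar prospects are hypothetical, chosen only to make the asymmetry visible:

```python
# Sketch of the gain/loss asymmetry using a prospect-theory-style value
# function. alpha = 0.88 and loss_aversion = 2.25 are the parameter estimates
# reported by Tversky and Kahneman (1992); the prospects below are invented.

def value(x, alpha=0.88, loss_aversion=2.25):
    """Subjective value of an outcome x: concave for gains,
    convex and steeper for losses."""
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** alpha)

# Gains: a sure $500 vs. a 50% chance of $1,000 (equal expected value).
sure_gain = value(500)
gamble_gain = 0.5 * value(1000)
print(sure_gain > gamble_gain)   # True: the sure gain feels better (risk-averse)

# Losses: a sure $500 loss vs. a 50% chance of losing $1,000.
sure_loss = value(-500)
gamble_loss = 0.5 * value(-1000)
print(gamble_loss > sure_loss)   # True: the gamble feels better (risk-seeking)
```

Because the value function is concave over gains and convex over losses, the same person flips from risk-averse to risk-seeking when the identical prospect is framed as a loss instead of a gain.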

Another key finding was that the experts are not necessarily any better at estimating probabilities than lay people. Experts were often overconfident in the exactness of their estimates, and put too much stock in small samples of data[6].

Psychometric Paradigm

Research within the psychometric paradigm turned to focus on the roles of affect, emotion, and stigma in influencing risk perception. Melissa Finucane and Paul Slovic have been among the key researchers here. These researchers first challenged Starr's article by examining expressed preference -- how much risk people say they are willing to accept. They found that, contrary to Starr's basic assumption, people generally saw most risks in society as being unacceptably high. They also found that the gap between voluntary and involuntary risks was not nearly as great as Starr claimed.

Slovic and colleagues found that perceived risk is quantifiable and predictable. People tend to view current risk levels as unacceptably high for most activities[7]. All else being equal, the greater the perceived benefit, the greater the tolerance for a risk[8]. If people derived pleasure from using a product, they tended to judge its benefits as high and its risks as low; if the activity was disliked, the judgments were the opposite[9]. Research in psychometrics has shown that risk perception is highly dependent on intuition, experiential thinking, and emotions.

Psychometric research identified a broad domain of characteristics that may be condensed into three higher-order factors: 1) the degree to which a risk is understood, 2) the degree to which it evokes a feeling of dread, and 3) the number of people exposed to the risk. A dread risk elicits visceral feelings of terror and is perceived as uncontrollable, catastrophic, and inequitably distributed. An unknown risk is new and unknown to science. The more a person dreads an activity, the higher its perceived risk and the more that person wants the risk reduced [10].
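The three-factor structure can be sketched as a toy calculation. The hazards, the 1-7 ratings, and the equal weighting below are all hypothetical illustrations, not data from the psychometric studies, which derive the factors from large rating surveys via factor analysis:

```python
# Toy sketch of the three higher-order psychometric factors. All hazards and
# ratings are invented for illustration; real studies obtain such scores from
# survey data and factor analysis, not hand-picked numbers.

hazards = {
    #                 (dread, unknown, exposure) on a hypothetical 1-7 scale
    "nuclear power":  (6.5,   5.8,     4.0),
    "driving":        (3.0,   1.5,     6.8),
    "pesticides":     (4.8,   5.2,     5.5),
}

def perceived_risk(ratings):
    """Combine the three factor ratings into one score.
    Equal weighting is an arbitrary modeling choice."""
    return sum(ratings) / len(ratings)

# Rank hazards from highest to lowest perceived risk.
ranked = sorted(hazards, key=lambda h: perceived_risk(hazards[h]), reverse=True)
print(ranked)  # ['nuclear power', 'pesticides', 'driving']
```

With these toy numbers, nuclear power ranks highest despite exposing fewer people than driving, mirroring the finding that dread and unfamiliarity dominate perceived risk.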

Anthropology/Sociology Approach

The anthropology/sociology approach posits risk perceptions as produced by and supporting social institutions [11]. In this view, perceptions are socially constructed by institutions, cultural values, and ways of life.

Cultural Theory

The Cultural Theory of risk (conventionally written with a capital C and T) is based on the work of anthropologist Mary Douglas and political scientist Aaron Wildavsky, first published in 1982[12].

In Cultural Theory, Douglas and Wildavsky outline four “ways of life” in a grid/group arrangement. Each way of life corresponds to a specific social structure and a particular outlook on risk. Grid categorizes the degree to which people are constrained and circumscribed in their social role: the tighter the social constraints, the more limited individual negotiation. Group refers to the extent to which individuals are bound by feelings of belonging or solidarity: the greater the bonds, the less individual choices are subject to personal control[13]. The four ways of life are: Hierarchical, Individualist, Egalitarian, and Fatalist.

Risk perception researchers have not widely accepted Cultural Theory. Even Douglas says that the theory is controversial; it poses the danger of moving out of the favored paradigm of individual rational choice with which many researchers are comfortable[14].

Interdisciplinary Approach

Social Amplification of Risk Framework

The Social Amplification of Risk Framework (SARF) combines research in psychology, sociology, anthropology, and communications theory. SARF outlines how communications of risk events pass from the sender through intermediate stations to a receiver and in the process serve to amplify or attenuate perceptions of risk. All links in the communication chain (individuals, groups, media, etc.) contain filters through which information is sorted and understood.

The theory attempts to explain the process by which risks are amplified, receiving public attention, or attenuated, receiving less public attention. The theory may be used to compare responses from different groups to a single event, or to analyze the same risk issue across multiple events. In a single risk event, some groups may amplify their perception of risks while other groups may attenuate (decrease) their perceptions of risk.

The main thesis of SARF states that risk events interact with individual psychological, social and other cultural factors in ways that either increase or decrease public perceptions of risk. Behaviors of individuals and groups then generate secondary social or economic impacts while also increasing or decreasing the physical risk itself [15].

These ripple effects caused by the amplification of risk include enduring mental perceptions, impacts on business sales, changes in residential property values, changes in training and education, or social disorder. These secondary changes are perceived and reacted to by individuals and groups, resulting in third-order impacts. As each higher-order impact is reacted to, it may ripple to other parties and locations. Traditional risk analyses neglect these ripple effects and thus greatly underestimate the adverse effects of certain risk events. In this view, public distortion of risk signals provides a corrective mechanism by which society arrives at a fuller determination of the risk and its impacts on things not traditionally factored into a risk analysis[16].
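SARF's station-to-station signal flow can be sketched as a chain of multiplicative filters, where each station in the communication chain either amplifies or attenuates the risk signal it passes on. The stations and the numeric factors below are hypothetical, chosen only to illustrate the mechanism:

```python
# Toy sketch of SARF's communication chain: each station applies a filter that
# amplifies (factor > 1) or attenuates (factor < 1) the risk signal.
# All station names and factors are hypothetical.

def propagate(signal, stations):
    """Pass a risk signal through a chain of (name, factor) filters,
    recording the signal strength after each station."""
    trace = []
    for name, factor in stations:
        signal *= factor
        trace.append((name, round(signal, 2)))
    return trace

amplifying = [("scientists", 1.0), ("news media", 2.5), ("social groups", 1.6)]
attenuating = [("scientists", 1.0), ("trade press", 0.6), ("regulators", 0.8)]

print(propagate(1.0, amplifying))   # signal grows: perception amplified
print(propagate(1.0, attenuating))  # signal shrinks: perception attenuated
```

The same initial event thus ends up with very different perceived magnitudes depending on which filters it passes through, which is the framework's core claim.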

See also

References

  1. Douglas, Mary. Risk Acceptability According to the Social Sciences. Russell Sage Foundation, 1985.
  2. Starr, Chauncey. “Social Benefits versus Technological Risks.” Science 165(3899) (September 1969): 1232-1238.
  3. Douglas, Mary. Risk Acceptability According to the Social Sciences. Russell Sage Foundation, 1985.
  4. Freudenburg, William R. “Risk and Recreancy: Weber, the Division of Labor, and the Rationality of Risk Perceptions.” Social Forces 71(4) (June 1993): 909-932.
  5. Tversky, Amos and Daniel Kahneman. “Judgment under Uncertainty: Heuristics and Biases.” Science 185(4157) (September 1974): 1124-1131.
  6. Slovic, Paul, Baruch Fischhoff, Sarah Lichtenstein. “Why Study Risk Perception?” Risk Analysis 2(2) (1982): 83-93.
  7. Slovic, Paul, ed. The Perception of Risk. Earthscan, Virginia. 2000.
  8. Slovic, Paul, Baruch Fischhoff, Sarah Lichtenstein. “Why Study Risk Perception?” Risk Analysis 2(2) (1982): 83-93.
  9. Gregory, Robin & Robert Mendelsohn. “Perceived Risk, Dread, and Benefits.” Risk Analysis 13(3) (1993): 259-264
  10. Slovic, Paul, Baruch Fischhoff, Sarah Lichtenstein. “Why Study Risk Perception?” Risk Analysis 2(2) (1982): 83-93
  11. Wildavsky, Aaron and Karl Dake. “Theories of Risk Perception: Who Fears What and Why?” American Academy of Arts and Sciences (Daedalus) 119(4) (1990): 41-60.
  12. Douglas, Mary and Aaron Wildavsky. Risk and Culture. University of California Press, 1982.
  13. Thompson, Michael, Richard Ellis, Aaron Wildavsky. Cultural theory. Westview Press, Boulder, Colorado, 1990.
  14. Douglas, Mary. Risk and Blame: Essays in Cultural theory. New York: Routledge, 1992.
  15. Kasperson, Roger E., Ortwin Renn, Paul Slovic, Halina Brown, Jacque Emel, Robert Goble, Jeanne Kasperson, Samuel Ratick. “The Social Amplification of Risk: A Conceptual Framework.” Risk Analysis 8(2) (1988): 177-187.
  16. Kasperson, Jeanne X., Roger E. Kasperson. The Social Contours of Risk. Volume I: Publics, Risk Communication & the Social Amplification of Risk. Earthscan, Virginia. 2005.
This page uses Creative Commons Licensed content from Wikipedia.