Opinion polls are surveys of opinion using sampling. They are usually designed to represent the opinions of a population by asking a small number of people a series of questions and then extrapolating the answers to the larger group.

History of opinion polls

The first known example of an opinion poll was a local straw vote conducted by The Harrisburg Pennsylvanian in 1824, showing Andrew Jackson leading John Quincy Adams by 335 votes to 169 in the contest for the United States Presidency. Such straw votes, which were unweighted and unscientific, gradually became more popular, but they remained local, usually city-wide, phenomena. In 1916, the Literary Digest embarked on a national survey (partly as a circulation-raising exercise) and correctly predicted Woodrow Wilson's election as President. Mailing out millions of postcards and simply counting the returns, the Digest correctly called the following four presidential elections.

In 1936, however, the Digest came unstuck. Its 2.3 million "voters" constituted a huge sample, but they were generally more affluent Americans who tended to have Republican sympathies, and the Literary Digest did nothing to correct this bias. The week before election day, it reported that Alf Landon was far more popular than Franklin D. Roosevelt. At the same time, George Gallup conducted a far smaller but more scientifically based survey, in which he polled a demographically representative sample. Gallup correctly predicted Roosevelt's landslide victory. The Literary Digest went out of business soon afterwards, while the polling industry started to take off.

Gallup launched a subsidiary in the United Kingdom, where it correctly predicted Labour's victory in the 1945 general election, in contrast with virtually all other commentators, who expected the Conservative Party, led by Winston Churchill, to win easily.

By the 1950s, polling had spread to most democracies. Nowadays polls reach virtually every country, although in more autocratic societies they tend to avoid sensitive political topics. In Iraq, surveys conducted soon after the 2003 war helped to measure the true feelings of Iraqi citizens towards Saddam Hussein, post-war conditions and the presence of US forces.

For many years, opinion polls were conducted mainly face-to-face, either in the street or in people's homes. This method remains widely used, but in some countries it has been overtaken by telephone polls, which can be conducted faster and more cheaply. In recent years, Internet surveys have become increasingly popular, but most of these draw on whoever wishes to participate rather than on a scientific sample of the population, and their results are therefore not reliably representative. One site, Opinion Republic (http://www.opinionrepublic.com), is an experiment to capture public opinion and then converge on the most broadly accepted opinions. Whereas typical polls have pre-determined response options, Opinion Republic allows respondents to determine the response choices.

Sample and polling methods


Image: Voter polling questionnaire on display at the Smithsonian Institution

Opinion polls were for many years conducted mainly by telephone or in person, and these remain standard, widely accepted methods, although response rates for some surveys have declined. Differences in method can also produce diverging results.[1] Some polling organizations, such as Angus Reid Strategies, YouGov and Zogby, use Internet surveys, where a sample is drawn from a large panel of volunteers and the results are weighted to reflect the demographics of the population of interest. In contrast, popular web polls draw on whoever wishes to participate rather than on a scientific sample of the population, and are therefore not generally considered professional.

Benchmark polls

A benchmark poll is generally the first poll taken in a campaign. It is often taken before a candidate announces a bid for office, but sometimes it is taken immediately after the announcement, once the candidate has had some opportunity to raise funds. It is generally a short and simple survey of likely voters.

A benchmark poll serves a number of purposes for a campaign, whether it is a political campaign or some other type of campaign. First, it gives the candidate a picture of where they stand with the electorate before any campaigning takes place; if the poll is done prior to announcing for office, the candidate may use it to decide whether they should run at all. Second, it shows the campaign its strengths and weaknesses in two main areas. The first is the electorate: a benchmark poll shows which types of voters the candidate is sure to win, which they are sure to lose, and everyone in between those two extremes, letting the campaign know which voters are persuadable so that it can spend its limited resources in the most effective manner. The second is messaging: the poll can indicate which messages, ideas, or slogans resonate most strongly with the electorate.[2]

Brushfire polls

Brushfire polls are polls taken during the period between the benchmark poll and tracking polls. The number of brushfire polls a campaign takes is determined by how competitive the race is and how much money the campaign has to spend. These polls usually focus on likely voters, and the length of the survey varies with the number of messages being tested.

Brushfire polls are used for a number of purposes. First, they let the candidate know whether they have made any progress on the ballot, how much progress has been made, and in which demographics they have been gaining or losing ground. Second, they give the campaign a way to test a variety of messages, both positive and negative, about the candidate and their opponent(s). This lets the campaign know which messages work best with which demographics and which messages should be avoided. Campaigns often use these polls to test possible attack messages that their opponent may use, along with potential responses to those attacks, so that the campaign can prepare an effective response to any likely attack. Third, this kind of poll can be used by candidates or political parties to convince primary challengers to drop out of a race and support a stronger candidate.

Tracking polls

A tracking poll is a poll repeated at intervals, with results generally averaged over a trailing window.[3] For example, a weekly tracking poll uses the data from the past week and discards older data.

A key benefit of tracking polls is that the trend of a tracking poll (the change over time) corrects for constant bias: regardless of whether a poll consistently over- or underestimates opinion, the trend correctly reflects increases or decreases, so long as the bias stays the same from wave to wave [citation needed].

A caution is that estimating the trend is more difficult and more error-prone than estimating the level: intuitively, if one estimates the change as the difference between two numbers X and Y, one must contend with the sampling error in both X and Y, so an apparent change may be nothing more than random noise. For details, see t-test. A rough guide is that a change larger than the margin of error is worth attention.
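
A rough numerical sketch of this caution in Python (the sample sizes and percentages below are invented for illustration): the sampling error of the change combines the error of both weekly estimates, so a swing that looks large can still sit inside the noise.

```python
import math

# Hypothetical weekly tracking numbers: 48% support from n=500 one
# week, 52% from a fresh n=500 sample the next week.
p1, n1 = 0.48, 500
p2, n2 = 0.52, 500

# The change carries the sampling error of BOTH estimates, so the
# standard errors combine in quadrature.
se_change = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
change = p2 - p1

print(f"change = {change:+.1%}, 95% interval = +/-{1.96 * se_change:.1%}")
# Prints roughly: change = +4.0%, 95% interval = +/-6.2%
# The 4-point swing lies inside the +/-6.2-point interval, so it
# could easily be random noise rather than a real shift.
```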


Potential for inaccuracy

Sampling error

All polls based on samples are subject to sampling error, which reflects the effects of chance in the sampling process. The uncertainty is often expressed as a margin of error; the margin of error does not reflect other sources of error, such as measurement error. A poll with a random sample of 1,000 people has a margin of sampling error of about 3% for the estimated percentage of the whole population. A 3% margin of error means that 95% of the time the procedure used would give an estimate within 3% of the percentage being estimated. The margin of error can be reduced by using a larger sample; however, a pollster who wishes to reduce the margin of error to 1% would need a sample of around 10,000 people. In practice, pollsters must balance the cost of a large sample against the reduction in sampling error, and a sample size of around 500 to 1,000 is a typical compromise for political polls. (Note that to get 500 complete responses it may be necessary to make thousands of phone calls.)

Since the margin of error varies slightly with the percentage being estimated, the margin of error in polls is usually reported for a 50-50 split; it is smaller for 40-60, 30-70, 20-80, and other more lopsided splits.
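
Both statements follow from the standard formula for the sampling error of a proportion, in which the margin of error is z * sqrt(p * (1 - p) / n), with z = 1.96 for 95% confidence. A minimal Python sketch reproducing the figures quoted above (the sample sizes are illustrative):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# The margin shrinks only with the square root of the sample size...
for n in (500, 1000, 10000):
    print(f"n={n:>6}: +/-{margin_of_error(0.5, n):.1%} at a 50-50 split")
# n=   500: +/-4.4%    n=  1000: +/-3.1%    n= 10000: +/-1.0%

# ...and is widest at a 50-50 split, narrowing for lopsided results.
for p in (0.5, 0.4, 0.3, 0.2):
    print(f"p={p:.0%}: +/-{margin_of_error(p, 1000):.1%} with n=1000")
# p=50%: +/-3.1%   p=40%: +/-3.0%   p=30%: +/-2.8%   p=20%: +/-2.5%
```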

Nonresponse

Since some people do not answer calls from strangers, or refuse to answer the poll, poll samples may not be representative samples of the population. Because of this selection bias, the characteristics of those who agree to be interviewed may differ markedly from those who decline; the actual sample is then a biased version of the universe the pollster wants to analyze. Such bias introduces errors over and above those caused by sample size, and error due to bias does not become smaller with larger samples. If the people who refuse to answer, or are never reached, have the same characteristics as the people who do answer, the final results will be unbiased; if they hold different opinions, the results are biased. In election polls, studies suggest that these bias effects are small, but each polling firm has its own formulas for adjusting weights to minimize selection bias.
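
As an illustration of the kind of weight adjustment mentioned above, the following Python sketch applies simple post-stratification weighting: each respondent counts in proportion to how under- or over-represented their demographic group is in the sample. The groups, shares, and support figures are all invented; real pollsters use many more weighting cells.

```python
# Population and (biased) sample composition by age group, invented:
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share     = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

# Weight = how much each respondent should count to restore the
# population mix (young respondents count double here).
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical candidate support within each group:
support = {"18-34": 0.60, "35-54": 0.50, "55+": 0.40}

raw      = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"unweighted: {raw:.1%}")       # 46.5%, skewed by older respondents
print(f"weighted:   {weighted:.1%}")  # 49.5%, matches the population mix
```

Note that such weighting only helps if, within each group, those who respond resemble those who do not; no reweighting can repair a difference in opinion between responders and non-responders inside the same cell.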

Response bias

Survey results may be affected by response bias, where the answers given by respondents do not reflect their true beliefs. This may be deliberately engineered by unscrupulous pollsters in a push poll, but more often it results from the detailed wording or ordering of questions (see below). Respondents may deliberately try to manipulate the outcome of a poll, for example by advocating a more extreme position than they actually hold in order to boost their side of the argument, or they may give rapid and ill-considered answers in order to hasten the end of their questioning. Respondents may also feel under social pressure not to give an unpopular answer. If the results of surveys are widely publicised, this effect may be magnified, producing the so-called spiral of silence.

Wording of questions

It is well established that the wording of the questions, the order in which they are asked, and the number and form of the alternative answers offered can all influence the results of polls. Comparisons between polls therefore often boil down to the wording of the question. One way in which pollsters attempt to minimize this effect is to ask the same set of questions over time, in order to track changes in opinion. The most effective controls, used by attitude researchers, are:

  • asking enough questions to cover all aspects of an issue and to control for effects due to the form of the question (such as positive or negative wording), with the adequacy of the number of questions established quantitatively by psychometric measures such as reliability coefficients; and
  • analyzing the results with psychometric techniques that synthesize the answers into a few reliable scores and detect ineffective questions.

These controls are not widely used in the polling industry.
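
As one concrete example, a widely used reliability coefficient is Cronbach's alpha, which approaches 1 when a set of questions consistently measures a single underlying attitude. A minimal Python sketch with invented ratings (6 respondents answering 4 related questions on a 1-5 scale):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)       # variance of summed scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

scores = [[4, 5, 4, 4],   # each row: one respondent's answers
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 2, 3],
          [1, 2, 1, 2],
          [4, 4, 4, 3]]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # ~0.95: a highly consistent scale
```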

Coverage bias

Another source of error is the use of samples that are not representative of the population as a consequence of the methodology used. For example, telephone sampling has a built-in error because in many times and places, those with telephones have generally been richer than those without. Alternately, in some places many people have only mobile telephones. Because pollsters cannot call mobile phones (for technical reasons arising from the way telephone numbers for the poll are generated), these individuals are never included in the polling sample. If the subset of the population that can only be reached by mobile phone differs markedly from the rest of the population, these differences can skew the results of the poll. Polling organizations have developed many weighting techniques to help overcome these deficiencies, with varying degrees of success.
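
One such technique is raking (iterative proportional fitting), which adjusts respondent weights until the sample matches known population margins on several traits at once. A minimal Python sketch follows; the sex and age margins below are invented for illustration.

```python
import numpy as np

# Respondent counts by sex (rows) and age (columns); the sample
# over-represents older people, e.g. because of landline-only dialing.
sample = np.array([[100.0, 180.0],    # male:   under 50, 50+
                   [120.0, 200.0]])   # female: under 50, 50+

n = sample.sum()
target_row = np.array([0.49, 0.51]) * n  # population: 49% male, 51% female
target_col = np.array([0.55, 0.45]) * n  # population: 55% under 50

# Alternately rescale rows and columns until both margins match.
weights = sample.copy()
for _ in range(50):
    weights *= (target_row / weights.sum(axis=1))[:, None]
    weights *= target_col / weights.sum(axis=0)

print(np.round(weights / sample, 2))  # weight applied to each cell's respondents
```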


An oft-quoted example of opinion polls succumbing to error was the UK general election of 1992. Despite the polling organisations using different methodologies, virtually all the polls in the lead-up to the vote (and exit polls taken on voting day) showed a lead for the opposition Labour Party, but the actual vote gave a clear victory to the ruling Conservative Party.

In their deliberations after this embarrassment, the pollsters advanced several ideas to account for their errors, including:

  • Late swing. The Conservatives gained from people who switched to them at the last minute, so the error was not as great as it first appeared.
  • Nonresponse bias. Conservative voters were less likely to participate in the survey than in the past and were thus underrepresented.
  • The spiral of silence. The Conservatives had suffered a sustained period of unpopularity as a result of economic recession and a series of minor scandals. Some Conservative supporters felt under pressure to give a more popular answer.

The relative importance of these factors was, and remains, a matter of controversy, but since then the polling organisations have adjusted their methodologies and have achieved more accurate predictions in subsequent elections.

Polling organizations

There are many polling organizations. The most famous is the Gallup poll, created by George Gallup.

Other major polling organizations in the United States include:

  • Quinnipiac Polls, run by Quinnipiac University in Hamden, Connecticut, and started as a student project.
  • The Pew Research Center, sponsored by The Pew Charitable Trusts, conducts polls concentrating on media and political beliefs.
  • The Harris Poll.
  • Nielsen Ratings, virtually always for television.
  • Zogby International has been tracking public opinion since 1984.

In the United Kingdom, the most notable pollsters are:

  • MORI. This polling organisation is notable for selecting only those who say that they are "likely" to vote. This has tended to favour the Conservative Party in recent years.
  • YouGov, an online pollster.
  • NOP
  • ICR
  • Populus, the official pollster for The Times.

In Australia the most notable companies are:

  • Newspoll
  • Roy Morgan Research

In virtually every country with elections, the major television networks, alone or in conjunction with the largest newspapers or magazines, run their own polling operations.

The best-known failure of opinion polling to date in the United States was the prediction that Thomas Dewey would defeat Harry S. Truman in the 1948 U.S. Presidential election. Major polling organizations, including Gallup and Roper, indicated a landslide victory for Dewey.

In the United Kingdom, most polls failed to predict the Conservative election victories of 1970 and 1992, and Labour's victory in 1974. However, their figures at other elections have been generally accurate.

The influence of opinion polls

By providing information about voting intentions, opinion polls can sometimes influence the behaviour of electors. This phenomenon is known as a bandwagon effect when the poll prompts voters to back the candidate shown to be winning, and as a boomerang effect when likely supporters of the candidate shown to be winning feel that he or she is "home and dry" and that their vote is not required, thus allowing another candidate to win. In the United Kingdom general election of 1997, the Enfield Southgate constituency of the then Cabinet Minister Michael Portillo was believed to be a safe seat, but opinion polls showed the Labour candidate Stephen Twigg steadily gaining support, which may have prompted undecided voters or supporters of other parties to back Twigg in order to remove Portillo, an example of tactical voting.

References

  1. Cite error: no text was provided for the reference named "cantril".
  2. Kenneth F. Warren (2001). In Defense of Public Opinion Polling. Westview Press. pp. 200–201.
  3. About the Tracking Polls.

See also

  • Exit poll
  • Confidence interval
  • Push poll
  • Shy Tory Factor
  • Straw poll
  • Voodoo poll
  • All-Russia Center for the Study of Public Opinion
  • Deliberative opinion poll
