Of media opinion polls in Malawi: The case of Sunday Times

It is now obvious that the media in Malawi has joined the international practice of conducting popularity surveys of presidential candidates.

The latest results of such surveys were released in The Sunday Times newspaper of September 8, 2013.

Never mind what the results showed; what is important is whether such surveys provide a basis for celebration among those said to have won, or distress among those said to have fared badly.

To answer this question, some more fundamental questions must be asked first. How effective and reliable are the methods that the media in Malawi use to conduct such surveys?

Can such methods establish that those sending SMSs or responding online are of voting age? Can they determine that the respondents form a sample that is representative of voting patterns? Can they ensure random selection of respondents to prevent bias? Can they determine that the respondents are registered and will actually vote?

Malawi 2014 presidential candidates: Joyce Banda, Peter Mutharika, Atupele Muluzi and Lazarus Chakwera

Until these questions are answered, celebrating or shuddering at the results of such surveys can at best only be inconsequential and premature. Of course, the results do not simply have to be dismissed; critical analysis of the methodology and of the competence, capacity and credibility of the pollster must be the determinant.

The Sunday Times’ own admission, for example, undermines the credibility of its own poll. The newspaper says: “though too early to draw meanings from this poll, it seems Dr. Chakwera has strong support among the urban population. The majority of our voting population, however, is in rural areas. It’s hard to tell which direction the rural vote will go.”

In simple terms, what the Sunday Times is saying is that its sampling is not representative. Further, the newspaper calls the credibility of its own methods into question and provides every reason to interrogate the motive for publishing results of a poll whose sample, by its own admission, is not adequate to generate a credible result.

But this problem is not peculiar to the Sunday Times. Internationally, the record until now has been flawed by a widespread failure to distinguish among polls that carry varying levels of credibility. No one can expect editors and reporters to be polling experts, but they should have the good sense to consult those who are.

Forecasting election outcomes from poll results is a tricky business. For instance, the famous Literary Digest fiasco of 1936, which predicted a victory for Alf Landon, was based on postcards returned by the magazine’s largely Republican readers – a biased sample if there ever was one.

There were two reasons for the equally famous failure of the 1948 pollsters (memorialized in the photo of a triumphant Harry Truman holding up the front page of the Chicago Tribune with its “Dewey Defeats Truman” headline).

The leading polling organizations (Gallup, Roper and Crossley) relied on the sometimes biased selection of respondents by interviewers using arbitrary quotas for women, blacks and low income people.

And the pollsters cut off their interviewing well before the crucial date, though many undecided people changed their minds in that interval.

In 2000, opinion polls all agreed in predicting a close election, though no one could have predicted the failure of Florida’s “butterfly ballot” and the “hanging chads” – let alone the Supreme Court’s decision to implant George W. Bush in the White House.

The fundamental difficulty in election polling is determining who is actually going to vote. Research organizations try to solve the problem by asking people whether they are registered, whether they have voted in past elections and how they assess their own intentions to vote.

But turnout can be strongly affected by unanticipated circumstances, including the weather, and the strength of inclinations to choose a particular candidate can be swayed by unforeseen news just before the election.

Political choices are volatile, like all expressions of opinion. This is especially true for that large portion of the electorate who do not have an intense commitment to a party or office-seeker.

The projectability of all survey findings depends in good measure on sample size, and sample size is determined by the budget.

The polls run by leading news organizations are generally done well, but none of them can afford to do as many interviews as researchers would like to have, in order to make valid generalizations about sub-groups in the population.

This may sound counter-intuitive, but to produce findings with the same level of statistical confidence, it takes just as many interviews with a sub-group such as women, the physically challenged or young people aged 18-24 as it takes for a sample of the whole population.
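The arithmetic behind this is the standard margin-of-error formula, in which only the number of interviews matters, not the size of the group those interviews are meant to represent. Below is a minimal sketch in Python, given purely for illustration and not drawn from any pollster’s actual method:

import math

def margin_of_error(n, p=0.5, z=1.96):
    # Approximate 95% margin of error for a simple random sample.
    # n: number of respondents; p: assumed proportion (0.5 is the worst
    # case and gives the widest margin); z: 1.96 for 95% confidence.
    return z * math.sqrt(p * (1 - p) / n)

# The margin shrinks with the number of interviews, regardless of whether
# those interviews cover the whole electorate or only a sub-group such as
# women or voters aged 18-24.
for n in (100, 500, 1000, 2000):
    print(f"{n:>5} interviews -> about +/- {margin_of_error(n) * 100:.1f} points")

Roughly 1,000 interviews give a margin of about plus or minus 3 points, while 100 interviews give about plus or minus 10, which is why a small sub-sample cannot support confident claims about that sub-group.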


Polling is not the same as tossing a penny 100 times, 500 times and 2,000 times and observing how close the results come to 50-50 for heads and tails. Polling is a human enterprise, in which errors can creep in from faulty questionnaires, poor interviewer training, and mistakes in coding and keying answers.

The key to good research is transparency – willingness to set forth exactly what was done and leave the books open for anyone to check on it.

Professional pollsters commit themselves to do just that. But many hundreds of firms, including the media in Malawi, do not. The explanation of their methods is sometimes deliberately opaque or not given at all.

This raises the question: can the polls really be trusted? In the 1948 US presidential election, for example, the polls predicted certain victory for Republican Thomas E. Dewey.

Without waiting for the official count of the votes, newspapers throughout the country proclaimed in their headlines, “Dewey Defeats Truman.”

The rest is history; Harry S. Truman, the 33rd President of the United States, won the election. Therefore, in viewing the results of any public opinion poll, it might be useful to ask the following questions:

Who was interviewed?

Generally speaking, the accuracy of a poll depends upon the degree to which the characteristics of the people being interviewed are really similar to those of the group they are supposed to represent. For example, the polling of sixteen-year-olds to predict the outcome of an election would be very questionable since they cannot vote.

Also, as a general rule, the greater the number of people interviewed, the more likely the prediction will be accurate. Everything else being equal, an election poll of 100,000 out of two million voters is more likely to produce accurate results than a poll of 1,000 out of the same number.

It is important to point out, though, that, depending on the efficiency of the pollster, small national samples of fewer than 2,000 can predict quite accurately for the entire electorate.
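The reason such small samples can work is that, once the electorate is many times larger than the sample, the population size has almost no effect on the margin of error. As a rough sketch, again purely illustrative and assuming simple random sampling, a finite population correction can be added to the formula above:

import math

def moe_with_fpc(n, population, p=0.5, z=1.96):
    # 95% margin of error with a finite population correction (FPC).
    fpc = math.sqrt((population - n) / (population - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

electorate = 2_000_000  # the illustrative figure used in the example above
print(round(moe_with_fpc(1_000, electorate), 3))    # ~0.031 -> about +/- 3.1 points
print(round(moe_with_fpc(2_000, electorate), 3))    # ~0.022 -> about +/- 2.2 points
print(round(moe_with_fpc(100_000, electorate), 3))  # ~0.003 -> about +/- 0.3 points

Going from 2,000 to 100,000 interviews buys only about two extra percentage points of precision at fifty times the cost, which is why well-run national polls rarely exceed a few thousand respondents.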

Besides, those interviewed should be selected in a random fashion. This is done to avoid or lessen the possibility that any unaccounted-for bias or characteristic of those being interviewed influences the results.

Under what conditions were the interviews conducted?

Generally speaking, unclear, generalised, vague, biased, or emotionally charged questions will produce misleading answers and weaken the accuracy of the results of a poll.

Polls conducted by telephone or through the mail generally do not tend to be as reliable as personal interviews. This is largely because it is difficult to control who really participates in the poll, how many respond, and how the questions are interpreted.

When was the poll conducted?


It should be noted that the results of a poll, however accurate, represent the preferences, views and feelings of a particular group of people at a particular point in time. As a general rule, the more current the poll, the more likely it is to produce meaningful and useful results.

A poll conducted in September 2013 on who should be elected president, for example, is not likely to be as accurate as a poll taken during the week of the actual election.

Who conducted the poll?

Past reputation and performance can also help determine the validity of the results of a poll. Generally speaking, “novice” pollsters are not likely to be able to compete with professional polling organizations, with their large staffs, seemingly unlimited resources and sophisticated computer equipment. In addition, polls conducted by groups with an obvious interest in the results should be held suspect until proven otherwise.

What was the percentage of error?

Polling organizations should indicate what the potential error of their poll is. Based on the size of the sample, it is statistically possible to state this reliability (the margin of error) for the reader.
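As a purely illustrative sketch (the sample size and candidate shares below are invented, not taken from the Sunday Times poll or any other), this is the kind of check a newsroom can run before declaring a front-runner:

import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical figures, for illustration only.
sample_size = 1_200
candidate_a, candidate_b = 0.42, 0.38  # reported shares of the two leading candidates

moe = margin_of_error(sample_size)
lead = candidate_a - candidate_b
print(f"Margin of error: +/- {moe * 100:.1f} points")
# A common rule of thumb: a lead smaller than roughly twice the margin
# of error should not be reported as decisive.
if lead > 2 * moe:
    print("The lead is larger than the combined margin of error.")
else:
    print("The lead is within the margin of error; the race is too close to call.")

In this made-up case, a four-point lead on a sample of 1,200 falls inside the margin of error, so reporting one candidate as the clear winner would overstate what the data show.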

Based on this analysis, the Sunday Times survey is not conclusive enough to cause either celebration or worry. Let them do better next time.
