By BENJIE OLIVEROS
Poll surveys on the presidential race by Pulse Asia and Social Weather Stations have been generating a lot of controversy. This came to a head when Sen. Richard Gordon sued Pulse Asia and Social Weather Stations (SWS) for damages amounting to P600,000 and filed a petition for a temporary restraining order and writ of preliminary injunction before the Quezon City Regional Trial Court to prevent the release of pre-election surveys until the case is resolved.
Gordon asserted that pre-election surveys tend to create a bandwagon effect, thereby denying voters their right to choose whom to vote for based on platforms and credentials. He also questioned the research methods used by the pollsters. Gilbert Teodoro has likewise been questioning the results of pre-election surveys, citing the very small sample sizes used by pollsters. He even came out with a TV ad criticizing the results of pre-election surveys.
How scientific are surveys, in the first place?
We could gauge the accuracy of surveys according to the following: the sample size, how the random sampling was done, the questionnaire, and how faithfully the implementation followed the design.
Would the 2,100 respondents of SWS and the 1,800 of Pulse Asia be enough to represent the 50,723,733 registered voters as of January 2010? What formula did they use?
Did the respondents accurately represent the heterogeneous character of the voting population? Were their responses truly reflective of the sentiments of more than 50 million voters? If 39 percent of the respondents, 819 for SWS and 702 for Pulse Asia, said they would vote for Sen. Noynoy Aquino, could we then infer that 19,782,255 voters would vote for him?
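For simple random samples, statisticians usually answer the sample-size question with the standard margin-of-error formula. The sketch below applies it to the figures cited above; it assumes a simple random sample at a 95 percent confidence level, which neither Pulse Asia nor SWS has confirmed is what they actually use.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error at ~95% confidence (z = 1.96) for a simple
    random sample of size n; p = 0.5 gives the widest (worst-case) interval."""
    return z * math.sqrt(p * (1 - p) / n)

VOTERS = 50_723_733  # registered voters as of January 2010, per the article

for pollster, n in [("SWS", 2_100), ("Pulse Asia", 1_800)]:
    moe = margin_of_error(n)
    print(f"{pollster}: n = {n:,}, margin of error = ±{moe:.1%}")

# The article's extrapolation: 39 percent of respondents projected
# onto the full voting population
share = 0.39
print(f"Projected voters: {int(share * VOTERS):,}")
```

On these assumptions, samples of 2,100 and 1,800 yield margins of error of roughly ±2 percentage points, which is why pollsters consider such sample sizes adequate; but the formula only holds if the sampling was truly random and representative, which is precisely what the questions above probe.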
Did the manner by which the respondents were chosen not favor any particular segment of the population? Would you have the same chance of being chosen as respondent as the person sitting next to you?
How were the questions formulated? Did they limit the choices or lead respondents’ answers in a particular direction?
How was the actual survey conducted? Were there adjustments or deviations during the implementation? If so, what were these, and did they affect the accuracy of the survey?
Let us leave the answers to these questions on the accuracy and scientific rigor of these surveys to Pulse Asia and SWS and to the scrutiny of statisticians. The problem is that neither pollster is open about the details of its sampling methods.
The more important question is, what purpose do the surveys serve?
Assuming that the surveys were scientifically done, they serve candidates, political parties, and party-list groups as one way of measuring whether a campaign is moving forward and achieving the desired results. If a candidate or party is trailing in the surveys, it is a wake-up call to work harder or to adjust the campaign plan and strategy. If one is shown to be weak in a particular geographical area, then he or she should campaign more in that area.
However, when the survey is made public, then problems arise. What use would it be to the public? What would be its effects on voters?
First, it creates a bandwagon effect, especially among undecided voters. Not only would they tend to go with the tide; there is a probability that they would no longer consider candidates who are trailing in the surveys. Worse, it is not far-fetched that voters who originally chose a candidate trailing in the surveys could drop that candidate and choose among the front-runners so as not to “waste” their votes.
Second, releasing the results of surveys to the public has a trending effect. It conditions the public to believe that the candidate leading in the surveys will win. And in the event that he or she loses, it gives the impression that he or she was cheated, which may not be entirely accurate. In fact, it could cut either way: it could prepare the ground for people to accept a fraudulent election, such as what happened in 2004, or it could fuel false accusations of cheating.
This is no different from the accusations hurled against the quick count conducted by the National Movement for Free Elections (NAMFREL) during the 2004 elections. Some quarters accused NAMFREL of first releasing the canvassing results from precincts where Gloria Macapagal-Arroyo was strong while delaying those from precincts where Fernando Poe Jr. won, to condition the public into believing that Arroyo would win.
Some would contend that, on the contrary, surveys would prevent cheating. But what if the results of the surveys were inaccurate and the results of the elections were accurate? Surely, there is also a probability that this could be the case. In fact, there are other ways of proving fraud without relying on surveys, such as what happened before, during, and after the 2004 elections.
Other than this, the public seems to have no use for pre-election surveys. Surveys cannot take the place of elections or compensate for the latter’s flaws. They could even be used to manipulate and influence the results of elections. What is really needed is to make the elections free and credible. But free, honest, and credible elections can only take place in a truly free and democratic society. Otherwise, the country’s politics will remain dirty, and the elections will continue to be violent, fraudulent, and elite-dominated. (Bulatlat.com)