Data collection was carried out in January and February 2011 by Opinion Matters, an international consumer research agency based in London. All participants contacted directly (i.e., face-to-face) provided written consent; those participating online indicated their consent by clicking a button on the initial web page associated with the study, while those contacted by telephone provided verbal consent, which was noted in the interview records. All data collection protocols and the research design were approved by the Ethics Committee of the London School of Hygiene and Tropical Medicine. All participants consented to scientific use of their responses before completing the survey questionnaire. The original data will be made available to anyone who requests access; requests can be made directly to the second author.
Questions were designed to address each of the factors in the model from Fig 1. The questionnaire comprised 119 questions, plus 11 background questions on the age, gender, educational attainment and occupation of the respondent, as well as the regional location (within country) and the demographic and material composition of their household (see Appendix 1). The questionnaire was written in English and, in non-English-speaking countries, translated into the dominant mother tongue and additional languages where necessary (i.e., French in France; German in Germany; Portuguese in Brazil; Hindi and English in India; Arabic and English in Saudi Arabia and the UAE; Bahasa and English in Malaysia; French and English in Canada; Simplified and Traditional Chinese in China). Each translation was carried out by a professional translator who was a native speaker based in the country of the target language, and then proof-read by a second professional native translator. Translators were also asked to amend the questionnaire where necessary to incorporate local knowledge and customs (e.g., demographic nuances) and to provide English translations of any such changes. At the beginning of the questionnaire, respondents were asked in which language they would like to be interviewed.
Twelve countries and territories were chosen to cover each of the seven continents (UK, USA, Canada, France, Germany, Australia, South Africa, Malaysia, Brazil and the Middle East), with the addition of the two most populous countries in the world (China and India). A sample size of 1000 per country was judged sufficient to establish representative patterns of response: analyses would involve no more than twenty-five variables simultaneously, and 30 responses per variable (i.e., 25 × 30 = 750, comfortably below 1000) is typically more than sufficient to achieve the standard level of statistical power for hypothesis tests at p < 0.05. Within-country samples were collected to reflect splits of gender, age, household income and geographical region, based on WHO data on each country's population profile for these variables, so that each country's sample was representative of the overall population of that country. All households were also required to have ready access to a water source, as a simple precondition for performing the target behaviours.
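As a minimal illustration of how quotas of this kind can be derived from population marginals, the sketch below allocates a 1000-respondent country sample across strata. The category labels and population shares are invented assumptions for illustration; the study used WHO population data, and its actual quota scheme is not reproduced here.

# Hypothetical sketch: allocating a 1000-respondent country quota across
# strata so the sample mirrors the population profile. The marginal shares
# below are invented for illustration; the study used WHO population data.

SAMPLE_SIZE = 1000

population_shares = {               # assumed marginals, not study data
    "gender": {"female": 0.51, "male": 0.49},
    "age":    {"18-34": 0.40, "35-54": 0.35, "55+": 0.25},
}

def allocate_quota(shares: dict[str, float], total: int) -> dict[str, int]:
    """Turn population shares into integer quotas summing to `total`."""
    raw = {k: v * total for k, v in shares.items()}
    quotas = {k: int(v) for k, v in raw.items()}
    # Distribute rounding remainders to the largest fractional parts.
    remainder = total - sum(quotas.values())
    for k in sorted(raw, key=lambda k: raw[k] - int(raw[k]), reverse=True)[:remainder]:
        quotas[k] += 1
    return quotas

for variable, shares in population_shares.items():
    print(variable, allocate_quota(shares, SAMPLE_SIZE))
# gender {'female': 510, 'male': 490}
# age {'18-34': 400, '35-54': 350, '55+': 250}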
For cost-efficiency, identification of respondents through online methods was preferred. However, in some countries not all respondents could be identified in this way, as lower-income respondents often do not have access to the web. To make up a representative sample in these cases, potential participants for telephone or face-to-face interviews were identified at the point of contact according to income. The questionnaire was thus delivered by one of three means: online via an email invitation, via computer-assisted telephone interview (CATI), or face-to-face (see Table 1). In each case, question order was randomized in the actual delivery of the questionnaire (with the exception of blocks of questions with a particular format, such as the personality test and the background data section), so that any biasing effect of one question priming another was minimized. We note that reviews find little influence of the means of administering questionnaires on reporting biases, which tend to be consistent across such methods [57, 58].

[Table 1. Breakdown of data collection methods per country. Table data not reproduced here.]

Potential participants for online responses were identified using both the Opinion Matters online panel and trusted partners that adhere to the same strict codes of conduct and research guidelines. These are actively managed global online panels recruited for market research purposes. All panelists have gone through a double opt-in process and have agreed to participate in paid online surveys and to provide honest opinions for market research studies. A wide range of recruitment processes are used to build the panels, ranging from referral, web advertising and public relations to partner-recruited panels and alliances with heavily trafficked web portals. Potential panelists were sent an email invitation to participate in the survey, on a random basis within the target groups for the research.

A large number of respondents were eliminated by algorithms in the Globalpark software as data collection proceeded, using the following criteria: (i) those whose answers were abnormally patterned (e.g., who used the same response for a majority of questions, or who answered the questionnaire too quickly, relative to the average speed of questionnaire completion); (ii) those missing a single response, whether demographic or in the main questionnaire; and (iii) those screened out by not fitting into demographic quotas (i.e., respondents had to fulfil the demographic eligibility criteria, and, even if eligible, the quota for their profile could not already be full, before they could complete the questionnaire proper). Respondents who fully completed the questionnaire received points worth about US$3.00, which could be redeemed for money or against charitable donations. Safeguards ensured that respondents could not complete the questionnaire inappropriately (e.g., they could not proceed unless they had ticked an answer for every question on a page, and alerts flagged contradictory answers). Online responses took an average of 20 minutes to complete.
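The following sketch illustrates automated screening of the kind described in criteria (i) and (ii) above, assuming simple flatline and speed heuristics; the thresholds and the mean completion time are illustrative assumptions, not Globalpark's actual rules, and quota screening (criterion iii) is omitted for brevity.

# Hypothetical sketch of the automated response screening described above.
# Thresholds are assumptions for illustration; the study relied on
# Globalpark's own algorithms.
from collections import Counter

MEAN_COMPLETION_SECONDS = 1200        # assumed panel average (20 minutes)

def should_eliminate(answers: list[str | None], seconds: float) -> bool:
    """Flag a response for elimination under the stated criteria."""
    # (ii) any missing response, demographic or main questionnaire
    if any(a is None for a in answers):
        return True
    # (i) abnormally patterned: same response for a majority of questions
    most_common_count = Counter(answers).most_common(1)[0][1]
    if most_common_count > len(answers) / 2:
        return True
    # (i) answered too quickly relative to the average completion speed
    if seconds < 0.5 * MEAN_COMPLETION_SECONDS:   # assumed cut-off
        return True
    return False

# Example: a flatlined 119-answer response is flagged; a varied one is kept.
print(should_eliminate(["3"] * 100 + ["2"] * 19, seconds=900))            # True
print(should_eliminate([str(i % 5) for i in range(119)], seconds=1100))   # False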
In some countries it was not possible to rely on panels or online respondents to achieve a sample representative of the country's population. In Brazil, China, India and Malaysia, the datasets were therefore augmented with a component collected via CATI, using random digit dialling. Unlike the online case, no incentive was offered to these respondents. All such interviews were conducted by native-language interviewers in each country using CATI-SPSS Dimensions software.
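As a minimal illustration of random digit dialling, the sketch below samples candidate numbers uniformly within known-valid prefixes. The prefixes and number length are invented assumptions; real RDD frames are built from the in-service exchange codes of each country.

# Hypothetical sketch of random digit dialling (RDD): drawing candidate
# telephone numbers uniformly within assumed valid prefixes. The prefixes
# and digit counts are invented, not the study's actual sampling frame.
import random

VALID_PREFIXES = ["11", "21", "31"]   # assumed in-service area codes

def random_number(subscriber_digits: int = 8) -> str:
    """Draw one candidate telephone number from the RDD frame."""
    prefix = random.choice(VALID_PREFIXES)
    suffix = "".join(random.choice("0123456789") for _ in range(subscriber_digits))
    return prefix + suffix

random.seed(0)                        # reproducible example
print([random_number() for _ in range(5)])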
In the Middle East and South Africa, face-to-face interviews were conducted because some segments of the population were known not to own land-line telephones, and hence could not be reached effectively via the CATI method. The way the personal interviews were conducted varied slightly between these two territories. In South Africa, interviewers targeted low-income areas to fulfil the quota stipulations, and respondents were then randomly selected within each area. People were approached in the street or in a public place and asked screening questions to verify that they met the screening criteria. Respondents were interviewed and their responses recorded on paper questionnaires, which were later entered into the online research platform. Respondents were incentivised with R30 (approx. 4 USD). Interviews collected in this way lasted an average of 15 to 20 minutes. In the Middle East, interviewers were briefed on the project and performed mock interviews before it began. Every interviewer was given a laptop with a USB internet connection, enabling them to conduct the interview from any location. Interviews were therefore carried out at respondents' homes, in coffee shops, malls and internet cafés, or at universities (especially for younger age groups). In principle, no incentives were offered for these interviews; however, in some cases (where interviews were done outside the home), interviewers offered the respondent a cup of coffee, juice or cake. Average interview length varied between 25 and 35 minutes.
Online responses were exported from the integrated research platform (Globalpark) by Opinion Matters personnel into a survey reporting program (SNAP), and from there converted into an Excel spreadsheet. Interviewers recorded telephone and face-to-face responses by hand onto printed data sheets, which were then transcribed into electronic records in an Excel database using codes consistent with the online records.
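A minimal sketch of the code harmonization implied here, assuming a shared codebook between the online export and the transcribed paper records; the variable names and response codes below are invented for illustration and do not reproduce the study's coding scheme.

# Hypothetical sketch of harmonizing transcribed paper responses with the
# codes used in the online export. The codebook is invented for
# illustration; the study's actual coding scheme is not reproduced here.

CODEBOOK = {                      # assumed shared response codes
    "gender": {"male": 1, "female": 2},
    "handwash_freq": {"never": 0, "sometimes": 1, "always": 2},
}

def encode_row(raw: dict[str, str]) -> dict[str, int]:
    """Map one transcribed paper record onto the online coding scheme."""
    return {var: CODEBOOK[var][value.strip().lower()] for var, value in raw.items()}

# Example: a hand-transcribed record becomes a coded record.
paper_record = {"gender": "Female", "handwash_freq": "Always"}
print(encode_row(paper_record))   # {'gender': 2, 'handwash_freq': 2}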