
Article Information

  • Title: Reliability of stated preferences for cholera and typhoid vaccines with time to think in Hue, Vietnam.
  • Authors: Cook, Joseph; Whittington, Dale; Canh, Do Gia
  • Journal: Economic Inquiry
  • Print ISSN: 0095-2583
  • Year: 2007
  • Issue: January
  • Language: English
  • Publisher: Western Economic Association International
  • Abstract: Stated preference surveys--contingent valuation (CV) and stated choice (SC)--are typically administered to respondents in the course of one phone or in-person interview. Though the format differs, these surveys ask respondents about their willingness to trade income for some environmental or health improvement that is not traded in a market. Because one is not often asked this type of question in everyday life, the answers do not necessarily come easily or reflexively. As every salesperson knows, people often change their minds when they are given overnight to think about a decision and discuss it with others. Despite this, only a few stated preference researchers have explored the effect of giving respondents time to consider their answers. Whittington et al. (1992) and Lauria et al. (1999) gave respondents overnight to think about their answers to a CV survey, but no similar research has been done for SC surveys, which are growing in popularity and are typically more cognitively difficult for respondents to complete than CV surveys.
  • Keywords: Diseases; Prices

Reliability of stated preferences for cholera and typhoid vaccines with time to think in Hue, Vietnam.


Cook, Joseph; Whittington, Dale; Canh, Do Gia, et al.


I. INTRODUCTION

Stated preference surveys--contingent valuation (CV) and stated choice (SC)--are typically administered to respondents in the course of one phone or in-person interview. Though the format differs, these surveys ask respondents about their willingness to trade income for some environmental or health improvement that is not traded in a market. Because one is not often asked this type of question in everyday life, the answers do not necessarily come easily or reflexively. As every salesperson knows, people often change their minds when they are given overnight to think about a decision and discuss it with others. Despite this, only a few stated preference researchers have explored the effect of giving respondents time to consider their answers. Whittington et al. (1992) and Lauria et al. (1999) gave respondents overnight to think about their answers to a CV survey, but no similar research has been done for SC surveys, which are growing in popularity and are typically more cognitively difficult for respondents to complete than CV surveys.

We hope to fill this gap in the literature. We use a split-sample experiment to explore the effect of giving respondents time to think about their answers in an in-person SC survey of individuals' demand for cholera and typhoid vaccines in Hue, Vietnam. In addition, we analyze the data using Train and Sonnier's (2003) state-of-the-art mixed logit/hierarchical Bayes (MLHB) estimating procedure. Using Monte Carlo--Markov chain numerical methods, Train and Sonnier's approach avoids several strong assumptions typically employed in estimating qualitative response data. It allows the researcher to model taste parameters that (1) vary among respondents, (2) are correlated, and (3) vary according to distributions other than the normal distribution. This research is one of the first applications of this procedure, and the first using data from a developing country.

We examine two principal questions. First, does giving respondents time to think increase the quality of responses; that is, does it reduce the number of responses that violate utility theory (i.e., internal validity tests)? Second, do respondents who were given time to think give us different answers than those who complete the interview in one session? In particular, does giving respondents extra time affect their willingness to pay (WTP) for vaccines?

We find that respondents who were given time to think failed internal validity tests less frequently, although the number of failures in both subsamples was surprisingly low. Respondents with time to think had lower average WTP for the vaccines than respondents without. We also find that respondents with time to think were more sensitive to the price of the vaccine and to the levels of the two other vaccine attributes (effectiveness in protecting against the disease and the duration of protection), though this difference in taste parameters may be due to differences in variance (scale).

The next section explains why we used stated preference techniques for this application and reviews the literature both on measuring internal validity failures in SC studies and on the effect of giving respondents time to think. The third section discusses our research design, and the fourth introduces our data analysis plan and discusses the advantages of the MLHB estimating procedure. The fifth section briefly describes the study site. The sixth section presents our results, and the final section concludes with a discussion of the results.

II. BACKGROUND

Using Stated Preference Methods to Measure Vaccine Demand

Though not the primary focus of this article, the overall objective of our research in Hue was to estimate demand for new-generation vaccines against cholera and typhoid fever. We used stated preference techniques (CV, SC) because although vaccines for both cholera and typhoid fever exist, they are not widely available to households in Hue. As such, we asked respondents whether they would purchase a hypothetical vaccine if it were available for sale. These techniques have been widely applied in the environmental field for goods that are not sold in a marketplace (see Hanemann 1994 and Carson 2000 for an introduction to this literature, and Whittington 2002 for applications in developing countries). They have also been used in the health field for goods or services that are not widely available, including vaccines (Canh et al. 2006; Cropper et al. 2004; Suraratdecha et al. 2005; Whittington et al. 2002, 2003).

We conducted both CV and SC surveys in Hue during the summers of 2002 and 2003. Our CV scenario presented respondents with one type of vaccine (cholera or typhoid) and asked if they would purchase it at a given price. In contrast, the SC survey asked respondents to complete several choice tasks, each of which involved choosing between a cholera vaccine, a typhoid vaccine or neither (see Figure 1 for an example task; more details on the research design are provided in the following discussion). The SC framework allows us to explore how respondents trade off different vaccine attributes, to directly compare respondents' preferences for cholera and typhoid fever vaccines, and to test each respondent for preference errors. (1)

Internal Validity Tests

SC surveys may be designed to test whether a respondent's preferences conform to the axioms of utility theory, that is, whether they are complete, monotonic, and transitive (Mas-Colell et al. 1995). Completeness requires that, given two vaccines, a person must prefer one or the other or be indifferent between the two. Monotonic preferences require that, other attributes being equal, one prefers a vaccine with a lower price to one with a higher price. Transitive preferences require that if a respondent prefers vaccine X to vaccine Y, and prefers vaccine Y to vaccine Z, he must also prefer vaccine X to vaccine Z. In addition, it may be reasonable to require that preferences be stable within a series of choice questions. For example, if a respondent chooses a typhoid vaccine over a cholera vaccine, he should not reverse his preference if asked the same question a few minutes later. Throughout this article we refer to preferences that are complete, monotonic, transitive, and stable as consistent preferences. If a respondent answers in a way that is inconsistent with utility theory, we call this a preference error. Appendix B discusses the approach for identifying errors in more detail.

[FIGURE 1 OMITTED]

Surprisingly few nonmarket SC studies have been designed to identify (or report) preference errors. Johnson et al. (2000) summarizes results on the consistency of answers from three SC experiments. Though they do not report transitivity errors, the number of stability and consistency errors varies significantly among the three studies. They find that 65% to 90% of respondents made at least one monotonicity error. The authors find that education, ideology, and fatigue are statistically significant predictors of the number of errors, though in one of the three studies no personal or socioeconomic characteristics were significant predictors. Carlsson and Martinsson (2001) also test for transitivity and stability errors statistically, and find only 1 intransitive respondent in a total sample of 35 respondents.

Similarly, Alpizar and Carlsson (2003) found that the order in which the tasks on transportation choices were presented did not affect preferences. This last study is the only one we are aware of that looks for preference errors using respondents from developing countries, though it is worth noting two points. First, the authors identify preference errors using likelihood ratio tests to see if the two subsample populations' patterns of responses are statistically different; we designed choice tasks to enable us to identify individual respondents who make preference errors. Second, they interviewed only car owners in the capital city of Costa Rica, a sample likely to be richer and more educated than many developing country populations.

Time to Think

Several studies have examined the impact of giving respondents more time to think about a CV scenario (Lauria et al. 1999; Whittington et al. 1992, 1993). These studies have generally shown that subsamples of respondents given time to think have lower WTP than equivalent subsamples who respond to the CV question during one interview. In particular, time to think reduced the percentage of respondents agreeing to pay high offered prices. Giving respondents overnight to think about their response may allow them to more carefully consider their budget constraints or consult with family members or friends. In the case of in-person interviews, it may also allow them to reach their decision outside the (perhaps subtle) influence of the interviewer.

No studies have given SC respondents time to think. Based on the evidence from CV studies, we expect a priori that time to think will reduce average WTP. In addition, we predict that giving respondents overnight will decrease the number of preference errors. Because SC respondents complete several tasks, each of which forces them to make difficult trade-offs, fatigue can be a problem. We expect that giving respondents overnight to complete the choice tasks will allow them to evaluate the tasks at their own pace, giving them more time to carefully consider the attributes of each alternative presented to them.

III. RESEARCH DESIGN

SC Design

Our SC survey asked respondents to choose between the status quo (buy no vaccine) and two vaccine alternatives with the following attributes: the type of vaccine (cholera or typhoid); the effectiveness of the vaccine in protecting against the disease (50%, 70%, or 99%); the number of years the vaccine would be effective (3 years or 20 years); and the price of the vaccine (US$0.33, US$3.22, or US$12.90) (see Sur et al. 2006 for details). Although the CV survey asked respondents about their willingness to purchase vaccines for themselves as well as other household members, the SC survey asked only about vaccines for respondents themselves.

Each respondent completed a total of six choice tasks in the SC survey. Four of these tasks were drawn from a main effects, orthogonal task design that maximized statistical efficiency while attempting to minimize cognitive burden for respondents. The design also ensures that each attribute level appears an equal number of times and is uncorrelated with all other attribute levels. Because testing for preference errors (in particular, transitivity errors) requires repeating some alternatives, we added two additional choice tasks to the four from the orthogonal design.
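As an illustration of these design properties (a sketch under our own assumptions, not the authors' design software), one can effects-code any candidate set of vaccine profiles and check level balance and the correlations between coded attribute columns; the full factorial below is hypothetical.

```python
import itertools
import numpy as np

# Hypothetical full factorial over the survey's attribute levels
# (2 types x 3 effectiveness levels x 2 durations x 3 prices = 36 profiles).
types = ["cholera", "typhoid"]
effectiveness = [0.50, 0.70, 0.99]
duration = [3, 20]
price = [0.33, 3.22, 12.90]
profiles = list(itertools.product(types, effectiveness, duration, price))

def effects_code(profile):
    """Effects-code one vaccine profile following the scheme in Table 1."""
    t, e, d, p = profile
    cholera = 1 if t == "cholera" else -1
    eff70 = {0.50: -1, 0.70: 1, 0.99: 0}[e]
    eff99 = {0.50: -1, 0.70: 0, 0.99: 1}[e]
    dur20 = 1 if d == 20 else -1
    return [cholera, eff70, eff99, dur20, p]

X = np.array([effects_code(pr) for pr in profiles])
labels = ["cholera", "70% eff", "99% eff", "20 yr", "price"]

# Balance: each level of each attribute appears an equal number of times.
for name, col in zip(labels[:4], X[:, :4].T):
    levels, counts = np.unique(col, return_counts=True)
    print(name, dict(zip(levels.tolist(), counts.tolist())))

# Orthogonality: columns belonging to *different* attributes are uncorrelated
# in the full factorial (the two effectiveness columns are correlated with
# each other by construction, since they code the same attribute).
print(np.round(np.corrcoef(X, rowvar=False), 2))
```

A fractional design such as the one used in the survey would aim to retain these same balance and orthogonality properties while using far fewer than 36 profiles.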

Each respondent was randomly assigned to complete one of three "blocks" of six tasks. For each of these 18 tasks, we therefore have approximately 66 responses in each subsample (200 respondents per subsample, divided across three blocks). We did not immediately allow respondents to choose both vaccine alternatives on a task. Instead, after respondents had answered all six choice tasks, the interviewer revisited each task for which respondents said they would purchase a vaccine and asked if the respondent would want to purchase both vaccines. (2)

Time to Think Treatment

We split our sample of 400 SC respondents into two equal subsamples. Both subsamples answered the exact same questions, but the structure of the interview differed. The structure of the interview for the no time to think (NTTT) subsample was as follows. First, respondents answered a series of questions on knowledge of and attitudes toward cholera and typhoid fever. Second, interviewers introduced the hypothetical vaccine scenario and the concept of vaccine effectiveness (following Suraratdecha et al. 2005; see also Canh et al. 2006 for more details on the nearly identical structure of the CV survey). Third, interviewers explained the choice tasks and had respondents complete one task as practice. Fourth, respondents completed the six choice tasks, each of which was printed on a laminated card. They marked their answers on the cards with an erasable marker, and the order in which the cards were shown to them was randomized. Finally, respondents answered a series of debriefing questions on the tasks and provided socioeconomic and demographic information.

The time to think (TTT) subsample completed the first three sections of the interview in the same way, including the explanation of the WTP scenario and the practice choice task, but interviewers then stopped. They scheduled a follow-up interview with respondents for the next day, and asked respondents to complete the six choice tasks overnight. Interviewers returned on the next day (one respondent took two days) to record the respondents' answers and complete the remainder of the survey. To avoid confounding, we isolated the two subsamples in time: we completely finished the NTTT surveys before moving on to the TTT surveys. Thus any potential confounding was limited to giving the TTT respondents more time than one day (if, for example, they heard of the survey from a resident of a nearby commune in the NTTT subsample).

In addition to the split-sample comparison, we are also able to observe whether an individual respondent changes his answers when given time to think: we gave the NTTT subsample the opportunity to revise their answers overnight. At the end of the first interview, interviewers left the cards with the six choice tasks with respondents and scheduled a follow-up interview for the next day "to ask a few additional questions about the choices [the respondent] made." Interviewers returned the next day and, if respondents chose to revise any answers overnight, they also recorded the revised answers.

IV. MODELING FRAMEWORK FOR SC DATA

We analyze the data using a random-parameters, or mixed logit, model. Unlike multinomial (conditional) logit models, mixed-logit models eliminate the independence of irrelevant alternatives assumption, accommodate correlations, and account for unobserved taste heterogeneity among respondents by introducing respondent-specific stochastic elements for each coefficient (Revelt and Train 1998; for more detailed treatments of the theoretical link between SC responses and random utility theory, interested readers should see Alpizar et al. 2001; Alvarez-Farizo and Hanley 2002; Hensher et al. 2005; Ruby et al. 1998). Mixed-logit models estimate a distribution of coefficients for each attribute from the full sample. Augmenting this with hierarchical Bayes estimation uses mixed-logit estimates as priors for obtaining individual-specific posterior parameter estimates (Train and Sonnier 2003).

Because MLHB models use simulated maximum likelihood to estimate a distribution for each attribute, the researcher must make a priori distributional assumptions. Until recently, mixed-logit algorithms could only support normal distributions. There are many instances where researchers may have strong theoretical grounds to reject a normal distribution. In our application, for example, economic theory predicts that the coefficient on price should never be positive--increasing price should not provide positive utility and increase the probability of choosing that alternative. Train and Sonnier (2003) adapted their algorithm to incorporate distributions that support such a priori restrictions and that are transformations of the normal (lognormal, truncated normal, triangular, and log-odds normal distributions). They found an 11% improvement in log likelihoods by using MLHB with transformed normal distributions. On the other hand, Sillano and Ortuzar (2005) have argued that the need for such models is overblown, as a very small portion of actual individual-level predictions typically lie in the "wrong" quadrant. They argue that the advantages of using distributions such as the lognormal are outweighed by other well-known problems (e.g., the lognormal produces implausible predictions at the tails).
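To make the mechanics concrete, the following sketch (illustrative parameter values, not the estimates reported later) simulates mixed-logit choice probabilities for a single choice task, drawing the price coefficient as the negative of a lognormal so that no draw is positive, in the spirit of the transformed-normal distributions described above.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 5000  # number of simulation draws over the mixing distributions

# Hypothetical population parameters (means/spreads are illustrative only).
# Price coefficient is drawn as -exp(N(mu, sd)), so every draw is negative.
mu_logprice, sd_logprice = np.log(0.15), 0.5
mean_other = np.array([0.17, -0.10, 1.00, 0.05, -0.40])  # cholera, 70%, 99%, 20yr, ASC
sd_other = np.array([0.30, 0.20, 0.50, 0.20, 0.60])

# One choice task, coded as in Table 1.
# Columns: cholera, 70% effective, 99% effective, 20-yr duration, ASC, price.
alts = np.array([
    [ 1.0, 0.0, 1.0, -1.0, 0.0,  3.22],   # cholera, 99% effective, 3 years
    [-1.0, 1.0, 0.0,  1.0, 0.0, 12.90],   # typhoid, 70% effective, 20 years
    [ 0.0, 0.0, 0.0,  0.0, 1.0,  0.00],   # buy neither vaccine
])

# Draw R taste vectors, compute logit probabilities for each draw, and average.
beta_price = -np.exp(rng.normal(mu_logprice, sd_logprice, size=R))       # (R,)
beta_other = rng.normal(mean_other, sd_other, size=(R, 5))               # (R, 5)
V = beta_other @ alts[:, :5].T + beta_price[:, None] * alts[:, 5]        # (R, 3)
P = np.exp(V) / np.exp(V).sum(axis=1, keepdims=True)
print("Simulated choice probabilities:", P.mean(axis=0).round(3))
```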

Another important issue in interpreting our results is the confounding of utility scale (σ), which is inversely related to the variance of responses, with taste coefficients (β). All models that analyze CV or SC data implicitly model not simply taste coefficients but the product of scale and taste coefficients (σβ). Researchers typically ignore the scale parameter by assuming that it is equal to one (for an in-depth discussion of this issue, see Louviere et al. 2002). However, when there is reason to believe that the variance of responses differs between subsamples of respondents (for example, in combining revealed and stated preference data, or when the elicitation format differed among respondents), this assumption is unsupported, and one must compare taste coefficients across subsamples with care. Identifying ways to separate scale from taste parameters is a challenge and currently the subject of much debate in the field. Rather than enter that debate, we address the issue by using Swait and Louviere's (1993) two-stage test for scale heterogeneity, which we will discuss in more detail in the results section of this paper. It is worth noting from the outset, though, that because WTP estimates are essentially ratios they are unaffected by scale: dividing by the price parameter (σ × β_price) cancels out the scale parameter(s) in the numerator.

We code the data for analysis as follows (see Table 1). Price is coded as a continuous variable (with three levels) and the alternative-specific constant (ASC) as a dummy that is equal to one if respondents purchased neither vaccine and zero otherwise. Vaccine type, effectiveness, and duration are all effects-coded. Unlike dummy-coded variables, effects-coded variables allow the researcher to recover the parameter estimates for every level shown to respondents, including the excluded category. Zero is normalized to the mean effect of all vaccines shown to a respondent, rather than to the combination of omitted categories.

Following Small and Rosen (1981) and Hanemann (1984), we calculate the WTP (Hicksian compensating variation) as:

(1)   WTP_i = (1/μ_i) [ ln ( Σ_{j∈C} exp(V_ij1) ) - ln ( Σ_{j∈C} exp(V_ij0) ) ]

where μ_i is the marginal utility of income (the negative of the price coefficient), V_ij0 is the indirect utility that respondent i derives from alternative j at baseline quality levels, V_ij1 is the corresponding indirect utility after the quality change, and C is the policy-relevant choice set. Because we assume that citizens will make purchase decisions separately for the two vaccines (i.e., if the vaccines against cholera and typhoid become available in Hue at different times), the relevant choice set has two options: purchasing the cholera (or typhoid) vaccine offered or opting out. Note that the same welfare calculation could be applied to another possible policy context--providing free vaccines against cholera or typhoid (but again, not both simultaneously).
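As a worked example of equation (1) (a sketch with illustrative coefficients, not the estimates reported in Table 7), the compensating variation of making a single vaccine available for free, relative to a baseline in which only opting out is possible, can be computed with the log-sum formula:

```python
import math

# Illustrative taste coefficients (not the paper's estimates).
beta = {"cholera": 0.17, "eff70": -0.10, "eff99": 1.00, "dur20": 0.05,
        "asc": -0.40, "price": -0.20}
mu = -beta["price"]  # marginal utility of income

def utility(x):
    """Indirect utility of an alternative coded as in Table 1."""
    return sum(beta[k] * x[k] for k in x)

# Alternative: a 99% effective, 20-year cholera vaccine offered for free.
vaccine = {"cholera": 1, "eff70": 0, "eff99": 1, "dur20": 1, "asc": 0, "price": 0.0}
opt_out = {"cholera": 0, "eff70": 0, "eff99": 0, "dur20": 0, "asc": 1, "price": 0.0}

# Baseline state: vaccine not available (choice set = opt out only).
logsum_0 = math.log(math.exp(utility(opt_out)))
# New state: vaccine available at zero price (choice set = vaccine or opt out).
logsum_1 = math.log(math.exp(utility(opt_out)) + math.exp(utility(vaccine)))

wtp = (logsum_1 - logsum_0) / mu
print(f"Compensating variation: US${wtp:.2f}")
```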

Individual-level covariates (age, gender, education, etc.) were not included directly in the estimating models. Because these characteristics are the same for each choice task that an individual completes, the only way to introduce them is to interact them with one of the attribute variables or with the ASC. This is not only computationally difficult but requires the researcher to make an assumption about how each attitudinal or socioeconomic variable enters the utility function. By interacting income with the ASC variable, for example, one makes the assumption that income only affects the probability of opting out, when it might also affect the price elasticity or the preference for higher vaccine effectiveness. Our approach for analyzing the effects of socioeconomic variables is to take advantage of the fact that the MLHB procedure provides the researcher with a unique set of coefficients for each respondent. After linking those coefficients with each respondent's attitudinal and socioeconomic data, we can compute average WTP for any number of different subgroups (e.g., by age, income, education, etc.).
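A minimal sketch of this post-estimation step, assuming a hypothetical table of individual-level posterior mean coefficients already merged with each respondent's covariates (all column names and values below are invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical individual-level posterior means merged with covariates.
rng = np.random.default_rng(1)
n = 200
coefs = pd.DataFrame({
    "respondent": range(n),
    "education": rng.choice(["none", "primary", "secondary", "university"], n),
    "beta_price": -np.abs(rng.normal(0.20, 0.05, n)),
    "beta_cholera": rng.normal(0.17, 0.10, n),
    "beta_eff99": rng.normal(1.00, 0.30, n),
    "beta_dur20": rng.normal(0.05, 0.10, n),
    "beta_asc": rng.normal(-0.40, 0.30, n),
})

# WTP for a 99% effective, 20-year cholera vaccine, respondent by respondent,
# using the same log-sum formula as equation (1) with a two-alternative choice set.
v_vac = coefs.beta_cholera + coefs.beta_eff99 + coefs.beta_dur20
v_out = coefs.beta_asc
coefs["wtp"] = (np.log(np.exp(v_out) + np.exp(v_vac)) - v_out) / (-coefs.beta_price)

# Median WTP by education subgroup, in the spirit of Table 8.
print(coefs.groupby("education")["wtp"].median().round(2))
```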

V. STUDY SITE

Hue is located in central Vietnam about 20 km from the coast, with a population of approximately 282,000. The city is subdivided into 20 urban and 5 semi-urban communes. We completed 400 SC interviews in Hue during July and August 2003. We drew from a sample frame of six communes in the city, four of which were urban and two semi-urban. We restricted our sample to those households with children under age 18 and where the household head was 65 years or younger. We first randomly drew households from this pool, and then randomly chose whether to interview the household head or his/her spouse within a selected household.

VI. RESULTS

Respondents took advantage of the opportunity to think overnight about their choices: TTT respondents reported spending an average of 37 minutes completing the choice cards overnight (only two respondents reported spending no time on the cards). About half of these respondents (47%) discussed the decision with their spouse, but only 4% of respondents talked with people outside the household. Only four respondents used information besides the materials that the interviewer gave them.

Respondents

The median respondent in our sample is a 45-year-old married woman with three children. She has a secondary school education (Table 2). Her household earns US$97 each month and owns their home, which has electricity, a telephone, and a television. Her household uses a flush toilet and drinks water from a private or shared water connection with 24-hour service. She understands the concept of vaccination and has been vaccinated before (but not necessarily for cholera or typhoid). She has heard of cholera and typhoid fever, and understands the sources of these diseases fairly well. On the other hand, the median respondent has not had a case of cholera or typhoid fever in her family and does not know anyone who has had cholera or typhoid.

Does Giving Respondents Time to Think Reduce Preference Errors?

Twenty-two of 200 respondents who were not given time to think made some type of preference error, compared with 14 of 200 respondents in the TTT subsample (Table 3). Only one respondent (in the NTTT subsample) made a transitivity error, and he corrected the mistake when given the chance to revise overnight. This is much lower than the frequency of errors that Johnson et al. (2000) find, though this may be due to differences in the complexity of the choice tasks.

The low number of errors may also be because about half of respondents answered in ways that did not allow us to test for preference errors (Table 4). For example, if a respondent chose to purchase neither vaccine on all of the tasks, or if respondents always chose the cholera vaccine, it is not possible to test for transitivity, stability, or monotonicity (see Appendix B). Respondents who always make their choice based on only one attribute of a vaccine (e.g., price) are said to exhibit apparent lexicographic preferences. Apparent lexicographic preferences may arise from true lexicographic preferences, where no possible levels of other attributes would induce trading away from a favored attribute (e.g., "I only care about effectiveness; duration is unimportant"); from insufficient range in the levels of other attributes to induce trading away from a heavily favored attribute (e.g., "I care about duration, but even the longest duration was just too short to accept a lower effectiveness"); or from a simplifying heuristic to avoid the effort of evaluating the choice task.

If we assume these respondents were not using a simplifying heuristic but rather had true lexicographic preferences, the percent of respondents with some type of preference error in the NTTT group is significantly higher than the TTT group at only the 10% level (first row of Table 5). If we assume these respondents were using a heuristic and drop them from the sample, the difference becomes significant at the 5% level (second row of Table 5). Furthermore, once respondents in the NTTT subsample have been given a chance to revise their answers, the difference disappears, regardless of whether we include apparently lexicographic responses (last two rows of Table 5).

Using a multivariate probit model that controls for age, education, and region (urban or semi-urban), we find that giving time to think reduces the probability of making an error by 31% ceteris paribus, but the effect is only weakly significant (Table 6). Respondents with secondary school or university education have a lower probability of making an error than respondents with no education (the excluded dummy category). In a second model that interacts time to think with education, giving time to think to someone with primary school education reduces their chance of making an error by 6% (β_time-to-think + β_TTT × Primary School). Respondents with secondary school education would make 35% fewer errors if given time to think.

Does Time to Think Decrease Respondents' WTP for Vaccines?

Raw Results. Before discussing the results from the MLHB estimation models, we can draw some conclusions using only the raw response data. Recall that both subsamples completed exactly the same choice tasks, so that for any given choice task we have approximately 66 responses in each subsample. If giving respondents time to think makes them less likely to purchase a vaccine, we expect that a higher percentage of respondents who were given time to think would choose neither vaccine on any given task. In 16 of the 18 tasks, the percentage of respondents choosing neither vaccine was higher in the subsample given time to think (Figure 2). Similarly, in 17 of 18 tasks, the percent of respondents choosing both vaccines was lower in the subsample given time to think (Figure 3).

MLHB Models. Table 7 compares the results of three MLHB models that differ only in how the distribution of the price coefficient is modeled. Note that all responses were included in these MLHB models: we did not exclude respondents who made a preference error. We tested for scale heterogeneity using the approach of Swait and Louviere (1993). We rejected the first hypothesis (that σ_NTTT β_NTTT = σ_TTT β_TTT) with a high degree of confidence (referring the test statistic of 201 to a chi-squared distribution with 7 degrees of freedom gives a p-value very close to zero). We conclude that giving time to think certainly affects responses, but it may affect responses in one of three ways: by changing the variance of responses, by changing the absolute value of taste coefficients, or by changing both. The difficulty lies in identifying which of the three is actually occurring. As there is currently no consensus in the field on the appropriateness of modeling scale in mixed logits, differentiating among these three explanations is beyond the scope of this article. Comparisons of taste coefficients across subsamples must therefore be made with care.
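The p-value quoted here is easy to verify directly (a quick check using scipy, not part of the authors' code):

```python
from scipy.stats import chi2

# LR test statistic of 201 referred to a chi-squared distribution with 7 df.
print(chi2.sf(201, df=7))   # on the order of 1e-40, i.e., essentially zero
```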

Six results are important and consistent across models.

As expected, the coefficient on vaccine price is both negative and highly significant in all subsamples. In addition, the absolute value of the mean price coefficient is higher in the subsample given time to think; respondents who had overnight to think about their answers were more sensitive to price. Again, though, this comparison is confounded by possible differences in scale/variance. It seems likely that one of the more important ways in which time to think affects responses is that it allows respondents to more carefully think about other things they might spend their income on besides this vaccine. We would expect that this would increase price elasticity and not simply reduce the variance of the price coefficient, though this is certainly an important and interesting hypothesis to test in further research.

The mean coefficient on the cholera vaccine attribute is positive and statistically significant. Recall that effects-coded variables capture the marginal effect away from the mean effect of all vaccines shown to respondents (normalized to zero), so that a positive coefficient indicates a preference for cholera vaccines over typhoid vaccines (one can calculate the coefficient on typhoid vaccines as the negative of the coefficient on cholera vaccines, or -0.1715 in model 1).

In both subsamples the coefficient for 99% effectiveness is significant and positive. The 70% effectiveness variable, however, is not significantly (or only weakly significantly) different from the mean level. The absolute value of the coefficient is higher in the TTT subsample: giving time to think seemed to make respondents more sensitive to the highest level of effectiveness, though this comparison is confounded by possible differences in scale. Here it seems plausible that giving time to think about this relatively unfamiliar concept might reduce "noise" in the responses, increasing the scale parameter.

The 20-year duration coefficient is also not significantly different from the mean level, indicating that respondents did not distinguish between a vaccine with a duration of 3 years and a vaccine with a duration of 20 years. This would indicate that effectiveness, vaccine type, and price are more important attributes than duration for respondents in our sample.

[FIGURE 2 OMITTED]

[FIGURE 3 OMITTED]

The coefficient on the ASC is significant and negative in the NTTT subsample, but not statistically different from zero in the TTT subsample. Respondents with more time to think are more likely to choose neither vaccine, the statistical equivalent of the results presented in Figures 2 and 3.

Increasing price should not increase the probability of choosing an alternative, but when the estimating procedure tries to fit the response data to a normal distribution, some of the probability mass is forced above zero. Using only the estimated means and standard deviations from model 1, we would predict that about 25% of the normal distribution for NTTT respondents, and 13% of the distribution for TTT respondents, would be positive. Because the MLHB procedure delivers individual-level coefficient estimates, we can also calculate the percent of respondents whose predicted (individual-level) price coefficient is positive. Unlike Sillano and Ortuzar (2005), we find a significant fraction of respondents with positive coefficients (28% of the 200 NTTT respondents and 15% of the 200 TTT respondents).
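The 25% and 13% figures follow directly from the normal distribution; the sketch below reproduces the calculation, with standard deviations of the mixing distributions that are our own placeholders (the table reports only the means), chosen so the shares come out near the values quoted in the text.

```python
from scipy.stats import norm

# Estimated means of the price coefficient from model 1 (Table 7), paired with
# hypothetical standard deviations of the mixing distributions used purely to
# illustrate the calculation.
for label, mean, sd in [("NTTT", -0.1622, 0.24), ("TTT", -0.2622, 0.23)]:
    share_positive = norm.sf(0.0, loc=mean, scale=sd)
    print(f"{label}: {share_positive:.0%} of the fitted normal lies above zero")
```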

Train and Sonnier's MLHB algorithm avoids this problem by modeling distributions other than the normal distribution. We run two additional models, one that treats price as log normally distributed (model 2) and one that models price as a normal distribution truncated at zero (model 3). Changing the distribution on price does not have a large effect on the coefficient estimates for effectiveness, duration, and the ASC (Table 7). In the model where price is modeled as log normal (model 2), however, the average price elasticities are smaller in magnitude than in the models where price is modeled as normally distributed. In all three models, the average price elasticity increases with time to think: it increases 62% in model 1, 74% in model 2, and 47% in model 3 (though again this difference may be partially or completely due to changes in scale/variance).

WTP Estimates. WTP for all vaccine bundles is lower in the TTT subsample than in the NTTT subsample (first two columns of Table 8). On average, median WTP among respondents with time to think is half of WTP among those who completed the survey in one interview. This result is statistically robust (the p-value for the t-test of differences in sample means was much less than 0.01 for all vaccines) and is not confounded by potential scale differences. Respondents in our sample, both with and without time to think, have very low or even negative WTP for vaccines with effectiveness less than 70%. This is a somewhat puzzling result and will be discussed in more detail shortly.

We use the fact that MLHB models provide a unique set of coefficient estimates for each respondent to show how WTP changes with gender, education, and income (proxied by quintiles of monthly electricity bill). On average, men have higher WTP than women, with the exception of 50% typhoid vaccines (third and fourth columns of Table 8). WTP is generally higher for more educated and wealthier respondents, though again this effect is clearer for vaccines with greater than 50% effectiveness.

VII. DISCUSSION

Respondents who were given time to think made fewer preference errors, were more likely to purchase neither vaccine, had much higher price elasticities, and had much lower average WTP for vaccines. These results support the findings from prior stated preference work on time to think and strongly suggest that asking respondents to complete SC surveys in one interview (standard practice for stated preference surveys) probably overstates WTP. We also found that respondents in Hue preferred cholera vaccines to typhoid vaccines and were more sensitive to a vaccine's effectiveness than its duration.

There are numerous reasons to be cautious, however, about applying these results to policy. First and foremost, our WTP estimates for 50% effective vaccines were very low or even negative. Although negative WTP values could have plausible economic interpretations (respondents would need to be compensated for taking the free vaccine, perhaps because of a [misplaced] fear of infection, a dislike of medical procedures, or compensation for travel and time costs), it seems more likely that WTP is simply not significantly different from zero for these vaccines.

Though not the primary focus of this article, it may also be useful to compare our SC results with those obtained from the companion CV surveys (Table 9). Note that none of the CV respondents were given time to think. In general, the welfare estimates are similar for vaccines with 70% effectiveness, but quite different for vaccines with 50% or 99% effectiveness. This may be in part because respondents in the CV survey were not very responsive to the level of effectiveness or duration (WTP does not differ much among types of vaccine). This would seem to point out a strength of SC surveys, as one would expect respondents to value a 99% effective vaccine more highly than a 50% vaccine. On the other hand, the results from our SC survey imply that most respondents would be indifferent between taking and not taking a free 50% effective vaccine. However, when we asked a separate sample of respondents an equivalent CV question (i.e., "would you buy a 50% effective, three-year cholera vaccine if it cost X?"), 77% reported that they would purchase the same vaccine at a price of US$0.33.

There seems to be no consensus in the literature on predicting how CV and SC results will differ. Like Foster and Mourato (2003), but unlike Adamowicz et al. (1998) and Boxall et al. (1996), we find that WTP measures are different using CV and SC data. Unlike all three other studies, however, we do not find that one method produces consistently higher or lower estimates. Rather, the difference between the CV and SC welfare estimates varies with attribute levels--WTP from SC is higher than CV for vaccines with high effectiveness but lower for the "worst" vaccines. It is not possible to say which of these elicitation formats is more believable or robust in our context--both have strengths and weaknesses. It is also worth noting the real-world policy context. In our SC experiment, respondents had a choice between vaccines with different levels of effectiveness and, given such a choice, they told us that they strongly preferred more effective vaccines. Though this is certainly useful information for policy makers, in reality this choice will not be available to them; respondents will have to choose between a less than perfect vaccine and no vaccine at all (similar to the questions posed in the CV surveys).

There has also been interest recently in understanding how the context and complexity of choice experiments affect results (Swait and Adamowicz 2001; Swait et al. 2002). In our study the fact that a TTT respondent could see all the choice tasks before answering any of them was both a strength and a weakness of the research design. Because choice tasks are cognitively difficult, one might expect that giving people the chance to study several tasks before answering any would better familiarize them with making trade-offs between attributes (note that the NTTT subsample also completed two practice tasks before beginning the actual choice tasks).

On the other hand, we had little control over the context in which the TTT respondents answered, so that some respondents may not have treated each choice task as an isolated question. For example, a respondent might have looked through all 12 vaccine alternatives on all 6 tasks and identified the alternative that had the best combination of effectiveness, duration, and price for him. He then might have chosen that vaccine on the task in which it was offered, and chosen "neither" on all of the other five tasks. Respondents may also have grouped tasks in any number of different ways that made sense to them (e.g., grouping all the 99% cholera vaccines together, or grouping all the typhoid vaccines that cost less than X). In fact, a simple task to help identify such qualitative patterns would be to ask respondents to sort the choice tasks into whichever grouping or order makes sense to them. This could be done either before or after actually answering the tasks.

Substitutes matter, of course. Even without time to think, respondents may be locking in on high-quality vaccines and opting out more frequently once they've seen a vaccine with the "best" attributes. If this were true, it would bias upward the coefficients on the effect-coded variables for high attribute levels. For example, suppose many respondents in our study "locked in" on vaccines with 99% effectiveness. Once they saw a task with a 99% effective vaccine, they might have developed a bias against vaccines with 50% or 70% effectiveness and chose neither on any task that did not contain a vaccine alternative that was 99% effective.

APPENDIX A: WTP SCENARIO

Initial Question

"I will now show you six new cards similar to the one I have just shown you. For each card, I would like you to choose which of the vaccine alternatives you would prefer if these alternatives were available to you. The first two vaccine alternatives (A or B) are to buy either a typhoid vaccine or a cholera vaccine with the characteristics (effectiveness, duration, and price) that are written on the card. If neither vaccine is attractive to you, you may choose not to purchase any vaccine at all (Alternative C). For each card, you will be asked about your choice for yourself, not for other members of your household."

See Figure 1 for an example of the choice card.

Follow-up for Respondents Who Purchased at Least One Vaccine

"Now let's suppose that you had the opportunity to buy both vaccines if you wanted to. If you could buy both vaccine alternatives and the total price for both vaccines is --. would you be able to afford and want to buy both vaccines for yourself?"

APPENDIX B: TESTING FOR PREFERENCE ERRORS

Stability is often the easiest characteristic of respondents' preferences to test. Suppose one choice task asks you to choose between three options: you can purchase either of two vaccine alternatives (vaccines X and Y) or you can purchase neither vaccine and choose "opt out." If you choose vaccine X, you are revealing that you prefer X to Y, and prefer X to no vaccine. Now imagine a new choice task that asks you to choose between vaccine X, vaccine Z, or neither vaccine. If you choose neither vaccine, you are revealing that you prefer no vaccine to vaccine X, which is inconsistent with your first choice.

One tests monotonicity with either "dominant-pair" comparisons or by comparing answers across choice tasks. A choice task with a dominant pair presents one alternative that is unambiguously better in all attributes than the other alternative (i.e., higher effectiveness, longer duration, lower price). Though such a pair of alternatives can tell us if respondents' preferences are monotonic, they decrease the statistical efficiency of the experimental design because they do not reveal important information about the willingness to trade off attributes.

A second approach to observing monotonicity is to observe responses across choice tasks when at least one alternative is repeated. For example, suppose two vaccine alternatives X and Y are equivalent in all attributes, except that vaccine X has a lower price. Suppose you are asked to compare X with some other vaccine Z, and also to compare Y with Z. If you prefer vaccine Z to vaccine X, then you should not prefer Y to Z, because Y is equivalent to X but has a higher price. Finally, testing for transitivity requires repeating two bundles, such that respondents compare X with Y on one choice task, Y with Z in another choice task, and X with Z on a third task.

The detailed algorithms for calculating preference errors are available from the authors.
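A minimal sketch of how such checks might be coded (our own simplified illustration, not the authors' algorithm): record each task as the set of alternatives offered and the one chosen, build the revealed pairwise preferences, and scan them for stability, monotonicity, and transitivity violations.

```python
from itertools import combinations

# An alternative is (type, effectiveness, duration in years, price in US$).
OPT_OUT = ("none", 0.0, 0, 0.0)   # the "buy neither vaccine" alternative

def dominates(a, b):
    """True if vaccine a is unambiguously better than b: same type,
    effectiveness and duration at least as high, price no higher, not identical."""
    if a[0] != b[0] or OPT_OUT in (a, b):
        return False
    at_least_as_good = a[1] >= b[1] and a[2] >= b[2] and a[3] <= b[3]
    return at_least_as_good and a != b

def revealed_preferences(tasks):
    """Each task is (alternatives, chosen); the chosen alternative is revealed
    preferred to every other alternative offered in that task."""
    prefs = set()
    for alternatives, chosen in tasks:
        for other in alternatives:
            if other != chosen:
                prefs.add((chosen, other))
    return prefs

def preference_errors(tasks):
    prefs = revealed_preferences(tasks)
    errors = []
    for a, b in prefs:
        if (b, a) in prefs and a < b:          # report each unstable pair once
            errors.append(("stability", a, b))
        if dominates(b, a):                    # a preferred to b although b dominates a
            errors.append(("monotonicity", a, b))
    for (a, b), (c, d) in combinations(prefs, 2):
        if b == c and a != d and (d, a) in prefs:   # cycle a > b > d > a
            errors.append(("transitivity", a, b, d))
    return errors

# Example: a respondent prefers vaccine X to opting out in one task, then
# opts out when X is offered again in another task -- a stability error.
X = ("cholera", 0.99, 20, 3.22)
Y = ("typhoid", 0.70, 3, 0.33)
tasks = [([X, Y, OPT_OUT], X), ([X, Y, OPT_OUT], OPT_OUT)]
print(preference_errors(tasks))
```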

ABBREVIATIONS

ASC: Alternative-Specific Constant

CV: Contingent Valuation

MLHB: Mixed Logit/Hierarchical Bayes

NTTT: No Time to Think

SC: Stated Choice

TTT: Time to Think

WTP: Willingness to Pay

doi: 10.1093/ei/cb1005

REFERENCES

Adamowicz, W., P. Boxall, M. Williams, and J. Louviere. "Stated Preference Approaches for Measuring Passive Use Values: Choice Experiments and Contingent Valuation." American Journal of Agricultural Economics, 80, 1998, 64-75.

Alpizar, F., and F. Carlsson. "Policy Implications and Analysis of the Determinants of Travel Mode Choice: An Application of Choice Experiments to Metropolitan Costa Rica." Environment and Development Economics, 8, 2003, 603-19.

Alpizar, F., F. Carlsson, and P. Martinsson. "Using Choice Experiments for Non-Market Valuation." Department of Economics, Goteborg University. 2001.

Alvarez-Farizo, B., and N. Hanley. "Using Conjoint Analysis to Quantify Public Preferences over the Environmental Impacts of Wind Farms. An Example from Spain." Energy Policy, 30, 2002, 107-16.

Boxall, P., W. Adamowicz, J. Swait, M. Williams, and J. Louviere. "A Comparison of Stated Preference Methods for Environmental Valuation." Ecological Economics, 18, 1996, 243-53.

Canh, D. G., D. Whittington, L. T. K. Thoa, N. Utomo, N. T. Hoa, C. Poulos, D. T. D. Thuy, D. Kim, and A. Nyamete. "Household Demand for Typhoid Fever Vaccines in Hue City, Vietnam: Implications for Immunization Programs." Health Policy and Planning, 21(3), 2006, 241-55.

Carlsson, F., and P. Martinsson. "Do Hypothetical and Actual Marginal Willingness to Pay Differ in Choice Experiments?" Journal of Environmental Economics and Management, 41, 2001, 179-92.

Carson, R. "Contingent Valuation: A User's Guide." Environmental Science and Technology, 34, 2000, 1413-18.

Cropper, M. L., M. Haile, J. Lampietti, C. Poulos, and D. Whittington. "The Demand for a Malaria Vaccine: Evidence from Ethiopia." Journal of Development Economics, 75, 2004, 303-18.

Foster, V., and S. Mourato. "Elicitation Format and Sensitivity to Scope." Environmental and Resource Economics, 24, 2003, 141-60.

Hensher, D. A., J. M. Rose, and W. H. Greene. Applied Choice Analysis: A Primer. New York: Cambridge University Press, 2005.

Hanemann, M. "Applied Welfare Analysis with Qualitative Response Models". Working Paper No. 241, University of California, Berkeley, 1984.

Hanemann, W. M. "Valuing the Environment through Contingent Valuation." Journal of Economic Perspectives, 8(4), 1994, 19-43.

Johnson, F. R., K. E. Mathews, and M. F. Bingham. "Evaluating Welfare-Theoretic Consistency In Multiple-Response, Stated-Preference Surveys." TER Technical Working Paper No. T-0003. Triangle Economic Research, Durham, NC, 2000.

Kim, D., D. G. Canh, C. Poulos, L. T. K. Thoa, J. Cook, N. T. Hoa, A. Nyamete, D. T. D. Thuy, J. Deen, N. D. Son, J. Clemens, D. D. Trach, V. D. Thiem, D. D. Anh, and D. Whittington. "Private Demand for Cholera Vaccines in Hue, Vietnam." Draft Report to the International Vaccine Institute. Seoul, Korea, 2005.

Lauria, D. T., D. Whittington, K. Choe, C. Turingan, and V. Abiad. "Household Demand for Improved Sanitation Services: A Case Study of Calamba, Philippines," in Valuing Environmental Preferences: Theory and Practice of the Contingent Valuation Method, edited by K. Willis and I. Bateman. Oxford: Oxford University Press, 1999, 540-84.

Louviere, J., D. Street, R. Carson, A. Ainslie, J. R. DeShazo, T. Cameron, D. Hensher, R. Kohn, and T. Marley. "Dissecting the Random Component of Utility." Marketing Letters, 13(3), 2002, 177-93.

Mas-Colell, A., M. Whinston, and J. Green. Microeconomic Theory. New York: Oxford University Press, 1995.

McFadden, D. "Conditional Logit Analysis of Qualitative Choice Behavior," in Frontiers of Econometrics, edited by P. Zarembka. New York: Academic Press, 1974.

Revelt, D., and K. Train. "Mixed Logit with Repeated Choices: Households' Choices of Appliance Efficiency Level." Review of Economics and Statistics, 80(4), 1998, 647-57.

Ruby, M., F. R. Johnson, and K. Matthews. "Just Say No: Opt-Out Alternatives and Anglers' Stated Preferences." TER Technical Working Paper T-9801R, Triangle Economic Research, Durham, NC, 1998.

Sillano, M., and J. de Dios Ortuzar. "Willingness-to-Pay Estimation with Mixed-Logit Models: Some New Evidence." Environment and Planning A, 37, 2005, 525-50.

Small, K. A., and H. S. Rosen. "Applied Welfare Economics with Discrete Choice Models." Econometrica, 49, 1981, 105-30.

Sur, D., J. Cook, S. Chatterjee, J. Deen, and D. Whittington. "Increasing the Transparency of Stated Choice Studies for Policy Analysis: Designing Experiments to Produce Raw Response Graphs." Journal of Policy Analysis and Management, forthcoming.

Suraratdecha, C., M. Ainsworth, V. Tangcharoensathien, and D. Whittington. "The Private Demand for an AIDS Vaccine in Thailand." Health Policy, 71, 2005, 271-87.

Swait, J., and W. Adamowicz. "The Influence of Task Complexity on Consumer Choice: A Latent Class Model of Decision Strategy Switching." Journal of Consumer Research, 28, 2001, 135-48.

Swait, J., and J. Louviere. "The Role of the Scale Parameter in the Estimation and Comparison of Multinomial Logit Models." Journal of Marketing Research, 30, 1993, 305-13.

Swait, J., W. Adamowicz, M. Hanemann, A. Diederich, J. Krosnick, D. Layton, W. Provencher, D. Schkade, and R. Tourangeau. "Context Dependence and Aggregation in Disaggregate Choice Analysis." Marketing Letters, 13(3), 2002, 195-205.

Train, K., and G. Sonnier. "Mixed Logit with Bounded Distributions of Partworths," in Applications of Simulation Methods in Environmental Resource Economics, edited by A. Alberini and R. Scarpa. New York: Kluwer Academic, 2003.

Whittington, D. "Improving the Performance of Contingent Valuation Studies in Developing Countries." Environmental and Resource Economics. 22, 2002, 323-67.

Whittington, D., D. T. Lauria, A. M. Wright, K. Choe, J. A. Hughes, and V. Swarna. "Household Demand for Improved Sanitation Services in Kumasi, Ghana: A Contingent Valuation Study." Water Resources Research, 29(6), 1993, 1539-60.

Whittington, D., O. Matsui, J. Frieberger, G. V. Houtven, and S. Pattanayak. "Private Demand for an HIV/AIDS Vaccine: Evidence from Guadalajara, Mexico." Vaccine, 20, 2002, 2585-91.

Whittington, D., A.C. Pinheiro, and M. Cropper. "The Economic Benefits of Malaria Prevention: A Contingent Valuation Study in Marracuene, Mozambique." Journal of Health and Population in Developing Countries, 2003, 1-27.

Whittington, D., V. K. Smith, A. Okorafor, A. Okore, J. L. Liu, and A. McPhail. "Giving Respondents Time to Think in Contingent Valuation Studies: A Developing Country Application." Journal of Environmental Economics and Management, 22, 1992, 205-25.

(1.) Note that our research design also allows us to compare the results between the CV and SC surveys and examine the policy implications of any differences. This is not, however, the focus of this article. The results of the CV surveys are reported in Canh et al. (2006) and Kim et al. (2005), and we will briefly mention differences between the two methods in the conclusions.

(2.) We did this to maximize the opportunities to observe respondents making trade-offs among vaccine attributes. If a respondent always chooses both vaccines, it is difficult to determine whether, for example, the vaccine's effectiveness is more important than its duration to him.

JOSEPH COOK, DALE WHITTINGTON, DO GIA CANH, F. REED JOHNSON, and ANDREW NYAMETE *

* We thank Edward Norton, Donald Lauria, Vic Adamowicz, Richard Thorsten, Semra Ozdemir, and an anonymous reviewer for helpful comments. This research is part of the Diseases of the Most Impoverished Program (DOMI), administered by the International Vaccine Institute with support from the Bill and Melinda Gates Foundation. The DOMI program works to accelerate the development and introduction of new generation vaccines against cholera, typhoid fever, and shigellosis. The program involves a number of parallel activities including epidemiological studies, social science studies, and vaccine technology transfer. The results will support public decision-making regarding immunization programs for cholera and typhoid fever.

Cook: Doctoral Student, Department of Environmental Sciences and Engineering, School of Public Health, Rosenau Hall, Campus Box 7431, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599. Phone 1-919-360-6476, Fax 1-919-966-7911, [email protected]

Whittington: Professor, Department of Environmental Sciences and Engineering, School of Public Health, University of North Carolina at Chapel Hill, NC 27599. Phone 1-919-966-7645, Fax 1-919-966-7911, E-mail [email protected]

Canh: Chief, Diarrheal Diseases Epidemiology and Field Research Section, National Institute of Hygiene and Epidemiology (NIHE), Hanoi, Vietnam. E-mail [email protected]

Johnson: Senior Fellow, Research Triangle Institute, Research Triangle Park, NC 27709. Phone 1-919-541-5958, Fax 1-919-541-7222, E-mail [email protected]

Nyamete: Coordinator, DOMI Social Science Task Force, International Vaccine Institute, Seoul, South Korea. Phone 301-856-4084, E-mail [email protected]
TABLE 1
Data Coding for MLHB Models

Variable Description Coding

Cholera vaccine Vaccine Type -1 if typhoid, 1 if
 cholera, 0 if neither
70% Effective Effectiveness -1 if 50%, 1 if 70%, 0
 if 99%, 0 if neither
99% Effective Effectiveness -1 if 50%, 0 if 70%, 1
 if 99%, 0 if neither
20yrDuration Duration -1 if 3 yrs, 1 if 20
 yrs, 0 if neither
Price Price Continuous (US$0.33,
 3.22, 12.90)
ASC Alternative-specific constant Dummy; 1 = neither
 vaccine (status quo),
 0 = typhoid vaccine or
 cholera vaccine

TABLE 2
Subsample Socioeconomic Characteristics

 NTTT TTT

N 200 200
Age
Mean age (years) 45 45
Less than 35 years old 9% 14%
Age 35-39 17% 16%
Age 40-44 24% 20%
Age 45-49 18% 20%
Age 50 or older 33% 32%
Education
% Never Attended School 4% 8%
% Primary School (1-5 yrs) 19% 23%
% Secondary School (6-12 yrs) 52% 59%
% University and Postgrad 26% 11%
Urban (= 1 if respondent in 80% 80%
urban commune; semi-urban is
excluded category)

TABLE 3
The Number of Preference Errors in Each Subsample

 NTTT

Type of Error Original Revised TTT

Stability 15 14 10
Monotonicity 12 8 9
Transitivity 1 0 0
Any error 22 18 14
N 200 200 200

Note: Table includes full sample; respondents who
failed effectiveness test twice included.

TABLE 4
The Number of Response Patterns in Both
Subsamples that Show Apparently
Lexicographic Preferences

 NTTT

Response Patterns Original Revised TTT

Always chose cholera vaccine 23 20 20
Always chose typhoid vaccine 44 37 25
Always chose neither vaccine 37 34 56
Total 104 91 101

TABLE 5
Percent of Respondents with Any Type of Preference Error

                                          NTTT %      TTT %       Probability that Difference
                                     N    Any Error   Any Error   in Means > 0 (a)

Original NTTT answers
  All Responses                     400   10.5%       6.5%        0.08
  Excluding apparent                195   21.8%       13.1%       0.05
  lexicographic responses

Revised NTTT answers
  All Responses                     400   9.0%        6.5%        0.23
  Excluding apparent                208   16.5%       13.1%       0.25
  lexicographic responses

TABLE 6
Probit Model Specifications Predicting the
Probability of Any Type of Preference Error

 Model 1 Model 2

TTT (a) -0.31 * 1.66 **
 (0.18) (0.69)
Age (b)
Age 35-39 -0.10 -0.16
 (0.38) (0.39)
Age 40-44 -0.18 -0.16
 (0.37) (0.38)
Age 45-49 0.24 0.23
 (0.35) (0.35)
Age 50 or older 0.23 0.23
 (0.33) (0.33)
Education (b)
Primary School (1-5 yrs) -0.47 -1.27 **
 (0.36) (0.54)
Secondary School (6-12 yrs) -0.70 ** -1.35 ***
 (0.35) (0.52)
University and Postgraduate -0.86 ** -1.65 ***
 (0.41) (0.56)
TTT x Primary School 1.60 **
 (0.78)
TTT x Secondary School 1.31
 (0.74)
TTT x University and 1.89 **
Postgraduate (0.84)
Urban 0.23
 (0.23)
Constant -0.85 * 0.00
 (0.45) (0.57)
Likelihood ratio test (χ²) 13.51 18.37
N 400 400

Notes: SEs are in parentheses. * indicates significance
at the 10% level, ** at the 5% level, and *** at the 1% level.

(a) 0 = Respondents with no time to think, 1 = respondents
with time to think.

(b) Excluded categories: Age < 35 and "Never Attended
School."

TABLE 7
Results of MLHB Multivariate Models

 MLHB Model 1

 All Coefficients
 Normally Distributed

 NTTT TTT

Price -0.1622 *** -0.2622 ***
 (0.014) (0.079)
Cholera vaccine 0.1715 *** 0.1423 **
 (0.045) (0.054)
70% Effective -0.144 * 0.0934
 (0.074) (0.075)
99% Effective 1.014 *** 1.0713 ***
 (0.077) (0.066)
20-year duration -0.0291 0.1009
 (0.061) (0.058)
ASC -0.4837 *** -0.0421
 (0.082) (0.079)
N 200 200
Likelihood ratio χ² 1110 1092
McFadden (1974) pseudo-R² 0.42 0.43

 MLHB Model 2

 All Normally Distributed
 Except Price (log normal)

 NTTT TTT

Price -0.0849 *** -0.1475 ***
 (0.009) (0.011)
Cholera vaccine 0.1520 *** 0.1681 ***
 (0.042) (0.049)
70% Effective -0.069 0.0929
 (0.063) (0.074)
99% Effective 0.8432 *** 1.007 ***
 (0.064) (0.074)
20-year duration 0.0477 0.0627
 (0.046) (0.051)
ASC -0.3829 *** 0.0656
 (0.073) (0.064)
N 200 200
Likelihood ratio χ² 573 650
McFadden (1974) pseudo-R² 0.22 0.25

 MLHB Model 3

 All Normal Except Price
 (normal, truncated at zero)

 NTTT TTT

Price -0.1968 *** -0.2888 ***
 (0.013) (0.016)
Cholera vaccine 0.1909 *** 0.1654 **
 (0.047) (0.051)
70% Effective -0.1508 * 0.0278
 (0.067) (0.064)
99% Effective 1.035 *** 1.118 ***
 (0.080) (0.076)
20-year duration 0.0406 0.1222 *
 (0.056) (0.056)
ASC -0.626 *** -0.152 *
 (0.072) (0.075)
N 200 200
Likelihood ratio χ² 989 1013
McFadden (1974) pseudo-R² 0.38 0.39

Notes: Each subsample consisted of 200 respondents who completed a total
of six choice tasks, for a total of 1,200 task responses. * indicates
significance at the 10% level, ** at the 5% level, and *** at the
1% level.

TABLE 8
Estimates of Median Expected WTP for Vaccines (US$), by TTT and
Socioeconomic Characteristics

 By TTT Gender (a)

Vaccine NTTT TTT Male Female

Cholera, 50% 3 yr 1.92 -0.09 -0.08 -0.09
Cholera 50% 20 yr 2.27 0.40 0.42 0.40
Cholera 70% 3 yr 5.92 2.65 3.10 2.52
Cholera 70% 20 yr 6.62 3.50 4.38 3.18
Cholera 99% 3 yr 13.3 5.89 6.87 5.41
Cholera 99% 20 yr 14.4 7.02 8.18 5.92
Typhoid 50% 3 yr 0.39 -0.46 -0.55 -0.45
Typhoid 50% 20 yr 0.69 -0.21 -0.23 -0.20
Typhoid 70% 3 yr 4.36 1.74 1.97 1.67
Typhoid 70% 20 yr 4.54 2.63 3.21 2.07
Typhoid 99% 3 yr 11.25 4.65 5.56 4.44
Typhoid 99% 20 yr 11.82 5.69 7.50 5.30
N 200 200 75 125

 Education (a,b)

 No Primary Second.
Vaccine Educ School School Postgrad

Cholera, 50% 3 yr -0.08 -0.09 -0.08 -0.09
Cholera 50% 20 yr 0.39 0.42 0.40 0.47
Cholera 70% 3 yr 1.75 2.40 2.92 3.03
Cholera 70% 20 yr 2.68 2.96 3.89 4.09
Cholera 99% 3 yr 4.21 5.23 6.66 7.05
Cholera 99% 20 yr 5.29 5.67 7.60 8.77
Typhoid 50% 3 yr -0.35 -0.43 -0.50 -0.58
Typhoid 50% 20 yr -0.14 -0.17 -0.22 -0.17
Typhoid 70% 3 yr 1.02 1.68 1.78 1.91
Typhoid 70% 20 yr 1.71 2.00 2.93 3.13
Typhoid 99% 3 yr 3.33 4.37 5.31 5.64
Typhoid 99% 20 yr 4.23 4.75 6.46 7.28
N 15 45 118 22

 Quintile of Monthly
 Household Electricity Bill (a)

 Lowest Highest
Vaccine Quintile Quintile

Cholera, 50% 3 yr -0.08 -0.11
Cholera 50% 20 yr 0.22 0.55
Cholera 70% 3 yr 2.31 5.34
Cholera 70% 20 yr 2.82 6.36
Cholera 99% 3 yr 4.75 11.04
Cholera 99% 20 yr 5.45 12.16
Typhoid 50% 3 yr -0.43 -0.70
Typhoid 50% 20 yr -0.21 -0.20
Typhoid 70% 3 yr 1.38 4.22
Typhoid 70% 20 yr 1.97 5.43
Typhoid 99% 3 yr 3.71 9.81
Typhoid 99% 20 yr 4.68 11.96
N 43 20

Notes: Based on results from MLHB model 3 (price as truncated normal
distribution).

(a) Drawn only from the TTT subsample.

(b) Primary school (1-5 years), secondary school (6-12 years),
postgraduate (university or postgraduate).

TABLE 9
Comparison of Median WTP (US$) using CV and SC Data

 CV (a) SC MLHB
 Model 3 (d)

 Turnbull Lower
 Bound Mean
Vaccine [Median Range] (b) Probit (c) NTTT TTT

Cholera, 50%, 3 yr 4.50 [1.67-3.33] 4.95-4.98 1.92 -0.09
Cholera 50% 20 yr -- -- 2.27 0.40
Cholera 70% 3 yr 4.43 [0.33-1.67] 4.96-4.98 5.92 2.65
Cholera 70% 20 yr 2.97 [0.33-1.67] 5.14-5.16 6.62 3.50
Cholera 99% 3 yr -- -- 13.3 5.89
Cholera 99% 20 yr 4.96 [0.33-3.33] 6.44-6.45 14.4 7.02
Typhoid 50% 3 yr -- -- 0.39 -0.46
Typhoid 50% 20 yr -- -- 0.69 -0.21
Typhoid 70% 3 yr 3.47 [1.67-3.33] 3.72-4.74 4.36 1.74
Typhoid 70% 20 yr 3.01 [1.67-3.33] 3.35-4.77 4.54 2.63
Typhoid 99% 3 yr 2.52 [1.67-3.33] 2.27-4.77 11.25 4.65
Typhoid 99% 20 yr 4.20 [1.67-3.33] 4.74-4.77 11.82 5.69

(a) All CV respondents completed the survey in one interview (i.e.,
NTTT). It was not possible to show all combinations of the vaccine
types in the CV survey.

(b) From Canh et al. (2006). The median range of WTP from the Turnbull
estimator is the range of initial bids to which 50% of respondents said
yes and 50% said no.

(c) From Kim et al. (2005). Median WTP; range reflects different probit
modeling approaches.

(d) Median of predicted WTP for each of the 200 respondents in each
subsample.