I recently announced in this blog that my first dissertation paper has been published in Environmental and Resource Economics. In this blog post I summarize this paper and test its hypothesis in a second dataset focusing on oil spill prevention in the aftermath of the Deepwater Horizon oil spill.
Our paper focuses on perhaps the biggest criticism of the contingent valuation method: scope insensitivity. We would expect respondents to be willing to pay more to obtain more of the same good, or a good of larger dimensions. However, this does not always happen.
For example, Boyle et al. (1994) reported the results of a contingent valuation study that elicited people’s willingness to pay to prevent bird deaths. We would expect people to be willing to pay more to prevent a larger number of bird deaths. However, Boyle et al. (1994) did not find this result, as seen in the figure below. The authors found that the mean WTP to prevent the death of 2,000 birds was around $80, while the mean WTP to prevent the death of 20,000 birds was lower (around $78). Moreover, none of these mean WTP estimates are statistically different from each other. This is problematic: the WTP to prevent a medium number of bird deaths is lower than the WTP to prevent a smaller number, which runs counter to the expectation of scope sensitivity.
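To make the idea of a statistical scope test concrete, here is a minimal sketch in R with simulated WTP data. The sample sizes and the standard deviation are entirely hypothetical; only the two means mimic the Boyle et al. magnitudes.

```r
# Hypothetical scope test: compare mean WTP across two scope treatments.
# These WTP vectors are simulated, not Boyle et al.'s actual data.
set.seed(42)
wtp_2000  <- rnorm(100, mean = 80, sd = 40)  # WTP to save 2,000 birds
wtp_20000 <- rnorm(100, mean = 78, sd = 40)  # WTP to save 20,000 birds

# One-sided test: is mean WTP higher for the larger scope?
scope_test <- t.test(wtp_20000, wtp_2000, alternative = "greater")
scope_test$p.value  # a large p-value means the sample fails the scope test
```

A survey passes this kind of scope test only when the larger-scope treatment yields a statistically higher mean WTP.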
The inability to pass this statistical scope test has led some researchers to question the validity of their results and of the contingent valuation method itself.
In our recent paper, we contribute to this discussion in two steps: we first document thirteen explanations proposed by previous research for why scope insensitivity may occur, and we then test some of these explanations using our own data on oil spill prevention. We find that four of the thirteen reasons we documented actually help to uncover clearer scope findings. Some of these reasons are perfectly compatible with neoclassical economic theory or general intuition, and they can arise even in the most well-designed surveys. The important point is to be aware that controlling for some of these reasons can matter for finding scope sensitivity.
We only used one original dataset in our study, so it would be useful to know whether these conclusions hold elsewhere. In other words, does controlling for these scope insensitivity explanations improve scope findings in other studies?
The list of explanations for scope insensitivity is the following:
- Related to microeconomic theory: 1) diminishing marginal utility, 2) different utility functions, 3) substitutability between market and non-market environmental goods, and 4) incomplete multi-stage budgeting;
- Related to how people relate to environmental goods: 5) experience, familiarity, knowledge and/or use of the environmental good, and 6) preference heterogeneity;
- Related to survey design and model estimation: 7) survey design, 8) amenity misspecification, 9) data cleaning, 10) statistical distribution assumption, and 11) sample size;
- Related to behavioural economics: 12) preference reversal theory, and 13) warm glow.
I will use the Deepwater Horizon CV study dataset that I used in previous blog posts. In the Deepwater Horizon study, a random sample of American citizens was presented with a program to prevent oil spill damages over the next 15 years. Respondents were then asked to vote for or against this program, given a mandatory increase in taxes. By analyzing people’s choices, the researchers were able to estimate a lower-bound WTP per household of $136 for a smaller set of damages and $153 for a larger set of damages.
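Lower-bound WTP figures like these typically come from a nonparametric (Turnbull-type) estimator applied to the shares of yes votes at each bid. As a rough illustration of how such a lower bound is computed, here is a sketch with hypothetical bids and yes-shares, not the study’s actual numbers:

```r
# Turnbull-style lower-bound mean WTP (hypothetical bids and yes-shares).
bids <- c(45, 65, 135, 265, 435)         # tax amounts offered
yes  <- c(0.55, 0.50, 0.45, 0.35, 0.30)  # share voting "For" at each bid

# Pool so the implied survival curve is non-increasing (here it already is).
surv <- cummin(yes)

# Probability mass falling between two bids is valued at the lower bid;
# mass above the highest bid at that bid; mass below the lowest bid at zero.
lb_wtp <- sum(bids[-length(bids)] * -diff(surv)) +
  bids[length(bids)] * surv[length(surv)]
lb_wtp
```

Because each slice of probability mass is valued at the lowest bid consistent with it, this estimate is conservative by construction, which is why it is reported as a lower bound.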
If you want to follow this tutorial using the same data, feel free to download it by clicking “Main and Non-Response Follow-Up Survey Data“. This is a zip file that includes a Stata datafile and an Excel file. Please save one of these as a csv file, so we can use R to analyze it. I named the data frame with the study results “deepwater”. I first convert the damages variable (i.e. the scope variable), the yes/no vote on the referendum question, and the flagged-respondent variable into dummy variables.
# Scope dummy: version B received the larger set of damages
deepwater$damages <- 0
deepwater$damages[deepwater$version=="B"] <- 1
deepwater$damages[is.na(deepwater$damages)] <- 0
# Referendum vote dummy
deepwater$Vote[deepwater$q24=="Against"] <- 0
deepwater$Vote[deepwater$q24=="For"] <- 1
# Drop flagged respondents
deepwater$flag[deepwater$flag=="Yes"] <- 1
deepwater$flag[deepwater$flag=="No"] <- 0
DATA <- deepwater[deepwater$flag==0,]
I can run a simple logit model to understand how the probability of voting favourably in the referendum yes/no question is affected by the bidvalue and the damages treatment (a smaller versus a larger set of damages).
LOGIT <- glm(Vote ~ bidvalue + damages, data=DATA, family=binomial(link="logit"))
The regression output is shown below. The coefficient associated with the bid value is negative, meaning the probability of saying yes to paying for a program to decrease oil spill incidence goes down with the amount people would have to pay. The coefficient associated with the damage variable is positive, meaning the probability of saying yes goes up if respondents are presented with a larger set of damages. This result confirms that respondents are sensitive to the scope of the good.
> summary(LOGIT)

Call:
glm(formula = Vote ~ bidvalue + damages, family = binomial(link = "logit"),
    data = DATA)

Deviance Residuals:
    Min      1Q  Median      3Q     Max
-1.2613 -1.1216 -0.8079  1.2319  1.6828

Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept)  0.0503551  0.0628591   0.801  0.42309
bidvalue    -0.0027318  0.0002384 -11.460  < 2e-16 ***
damages      0.1857294  0.0688349   2.698  0.00697 **
My goal is to understand how the scope coefficient (0.1857294) changes when we add interactions to this model. We will investigate three of the thirteen reasons we identified in our study to see how controlling for them changes the scope coefficient.
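As a side note, under a utility that is linear in the bid, the logit coefficients imply a mean WTP of (intercept + scope effect) divided by the negative of the bid coefficient. Here is a back-of-the-envelope calculation plugging in the estimates from the baseline model; this parametric shortcut is not the Turnbull lower bound the study itself reports, so the numbers will differ.

```r
# Implied mean WTP from the baseline logit, assuming linear utility in the bid.
# Coefficients copied from the summary output above.
b0    <- 0.0503551    # intercept
b_bid <- -0.0027318   # bid coefficient
b_dam <- 0.1857294    # damages (scope) coefficient

wtp_small <- b0 / -b_bid            # smaller set of damages
wtp_large <- (b0 + b_dam) / -b_bid  # larger set of damages
round(c(wtp_small, wtp_large), 2)
```

The gap between the two implied WTP figures is another way of reading the size of the scope effect.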
The first explanation we explore is whether budget restrictions affect scope. Perhaps controlling for respondents who have short-run budget restrictions improves scope findings. In this survey, question 44 asked respondents how difficult it would be for them to pay the amount requested. We first create a budget dummy variable and then interact it with the damages dummy, calling the interaction damages_budget. The regression output is reported below.
DATA$budget <- 0
DATA$budget[DATA$q44=="Very difficult"|DATA$q44=="Extremely difficult"] <- 1
DATA$damages_budget <- DATA$damages*DATA$budget
LOGIT <- glm(Vote ~ bidvalue + damages + damages_budget, data=DATA, family=binomial(link="logit"))
summary(LOGIT)
> summary(LOGIT)

Call:
glm(formula = Vote ~ bidvalue + damages + damages_budget, family = binomial(link = "logit"),
    data = DATA)

Deviance Residuals:
   Min     1Q Median     3Q    Max
-1.352 -1.105 -0.662  1.202  2.169

Coefficients:
                 Estimate Std. Error z value Pr(>|z|)
(Intercept)    -0.0245506  0.0637321  -0.385      0.7
bidvalue       -0.0022827  0.0002443  -9.343  < 2e-16 ***
damages         0.4612958  0.0725269   6.360 2.01e-10 ***
damages_budget -1.6949044  0.1594272 -10.631  < 2e-16 ***
It seems that controlling for respondents with short-term budget restrictions improves our scope findings. The damages_budget coefficient is negative and significant, meaning that respondents who face budget restrictions and receive the larger set of damages are less likely to be willing to pay. The coefficient associated with the scope variable (0.4612958) is about 2.5 times larger than the baseline coefficient (0.1857294).
We then investigate a second explanation for scope insensitivity: familiarity with the affected site. I hypothesize that visitors are more familiar with the site and should therefore be more sensitive to scope than non-visitors. The information about visitors is in question 33 of the survey. To test this hypothesis, I create a dummy for visitors and the corresponding interaction with the damages variable. The regression output is reported below.
DATA$visitors <- 0
DATA$visitors[DATA$q33=="I have been there"] <- 1
DATA$damages_visitors <- DATA$damages*DATA$visitors
LOGIT <- glm(Vote ~ bidvalue + damages + damages_visitors, data=DATA, family=binomial(link="logit"))
summary(LOGIT)
> summary(LOGIT)

Call:
glm(formula = Vote ~ bidvalue + damages + damages_visitors, family = binomial(link = "logit"),
    data = DATA)

Deviance Residuals:
    Min      1Q  Median      3Q     Max
-1.2617 -1.1206 -0.8071  1.2319  1.6828

Coefficients:
                  Estimate Std. Error z value Pr(>|z|)
(Intercept)      0.0503362  0.0628618   0.801   0.4233
bidvalue        -0.0027316  0.0002384 -11.458   <2e-16 ***
damages          0.1833541  0.1004489   1.825   0.0679 .
damages_visitors 0.0034138  0.1051308   0.032   0.9741
Contrary to my expectations, being a visitor does not lead to clearer scope findings. The interaction (damages_visitors) is not statistically different from zero, and the scope (damages) coefficient (0.1833541) barely changes relative to the initial model (0.1857294). Moreover, it is no longer significant at the 5% level.
Finally, Carson and Groves (2007) suggest that a consequential survey, that is, one whose results respondents perceive as potentially influencing an agency’s actions, implies a higher likelihood of finding scope sensitivity. This means that accounting for consequentiality may improve scope findings.
We interact the damages variable with a dummy variable about people’s perceived effectiveness of the program that would decrease oil spill incidence. This is question 28 in the dataset. If people perceived the program not to be effective, then scope should be harder to find. Our dummy takes the value 1 if the respondent considered the program not to be effective at all, and zero otherwise.
DATA$consequentiality <- 0
DATA$consequentiality[DATA$q28=="Not very effective at all"] <- 1
DATA$damages_consequentiality <- DATA$damages*DATA$consequentiality
LOGIT <- glm(Vote ~ bidvalue + damages + damages_consequentiality, data=DATA, family=binomial(link="logit"))
summary(LOGIT)
> summary(LOGIT)

Call:
glm(formula = Vote ~ bidvalue + damages + damages_consequentiality,
    family = binomial(link = "logit"), data = DATA)

Deviance Residuals:
    Min      1Q  Median      3Q     Max
-1.2854 -1.0459 -0.7432  1.2119  2.2064

Coefficients:
                           Estimate Std. Error z value Pr(>|z|)
(Intercept)               0.0549680  0.0629671   0.873  0.38268
bidvalue                 -0.0027597  0.0002395 -11.520  < 2e-16 ***
damages                   0.2367116  0.0693328   3.414  0.00064 ***
damages_consequentiality -2.4546487  0.5233980  -4.690 2.73e-06 ***
The interaction between the scope variable (damages) and the consequentiality dummy is negative and statistically significant. In other words, respondents who were randomly assigned to the larger damages treatment and perceived the program not to be consequential were less likely to be willing to pay for oil spill prevention. Moreover, the scope coefficient (damages) is slightly higher (0.2367116) than when we do not control for this interaction (0.1857294). Note, however, that the two coefficients are not statistically different from each other.
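The claim that the two scope coefficients are not statistically different can be checked with a simple approximate z-test on the estimates and standard errors copied from the two regression outputs. It is only approximate because both models were fit on the same data, so the covariance between the two estimates is ignored.

```r
# Approximate z-test for the difference between two scope coefficients.
b1 <- 0.1857294; se1 <- 0.0688349  # damages, baseline model
b2 <- 0.2367116; se2 <- 0.0693328  # damages, consequentiality model

z <- (b2 - b1) / sqrt(se1^2 + se2^2)  # treat estimates as independent
p <- 2 * pnorm(-abs(z))               # two-sided p-value
round(c(z = z, p = p), 3)
```

The p-value is far above conventional significance levels, consistent with the statement that the two coefficients are not statistically different.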
In our original paper, we also tested consequentiality and budget restrictions, but neither of these explanations yielded clearer scope findings there. In the case of the Deepwater Horizon CV study, both of them yield clearer scope. However, visitors do not appear to be more sensitive to scope than non-visitors in either survey. These results on a different dataset confirm that determining which of the thirteen explanations drive scope insensitivity is context-specific and depends on each survey.