# School of Economics and Political Science, University of St. Gallen, Department of Economics (Editor: Martina Flockerzi)

(i) Why is the homicide rate consistently lower in those U.S. states which do not apply the death penalty than in those states applying it?

(ii) Why does the number of executions not have an impact on the difference in the development of the homicide rates between those states which do not apply the death penalty and those states applying it?

(iii) Why is the homicide rate in Canada consistently lower than in the United States despite the fact that Canada does not apply the death penalty but the United States does?

(iv) Why does the number of executions not have an impact on the difference in the development of the homicide rates between Canada and the United States despite the fact that Canada does not apply the death penalty but the United States does?

So far, no convincing answers have been given to these four questions.

## 3 A Critical Survey of the Recent Literature

[23] The main participants in the new debate are three groups of authors.20) H. DEZHBAKHSH, P. RUBIN and J.M. SHEPHERD (2003)21) as well as H.N. MOCAN and R.K. GITTINGS (2003) defend the deterrence hypothesis. J.J. DONOHUE and J. WOLFERS (2005, 2009) as well as L. KATZ, S.D. LEVITT and E. SHUSTOROVICH (2003), J. FAGAN (2005, 2006), and J. FAGAN, F.E. ZIMRING and A. GELLER (2006) question it. Finally, J.M. SHEPHERD (2005) presents results in favour of both the deterrence and the brutalisation effects and tries to explain why these different results occur. When discussing these papers, we concentrate on the mean of the estimated t-statistics of the execution variable in order to make the results comparable.23)

[24] H. DEZHBAKHSH, P. RUBIN and J.M. SHEPHERD (2003) use county-level data from 3054 US counties over the period from 1977 to 1996. They use a linear model for the murder rate with county fixed effects and different specifications of the deterrence variable.24) They include the aggravated assault rate as well as the robbery rate as explanatory variables and use Two Stage Least Squares (TSLS) to instrument these two variables. They present altogether 48 estimates of the effect of the conditional probability of execution on the murder rate, with a mean t-statistic of -5.47 and a standard deviation of 4.83.25) They conclude “that each execution has resulted, on average, in eighteen fewer murders” (p. 369).
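As an illustration of the caveat in footnote 25, the following sketch shows why a naive t-test on the mean of these 48 t-statistics would be misleading. The count, mean, and standard deviation are the summary figures just quoted; the pairwise correlation `rho` between the estimates is purely hypothetical, since the papers do not report it:

```python
import math

# Summary figures from the text; rho is a hypothetical pairwise correlation.
n = 48          # number of reported estimates
mean_t = -5.47  # mean of the estimated t-statistics
sd_t = 4.83     # standard deviation of the estimated t-statistics

def naive_test(rho):
    """t-value for H0: mean = 0 when the estimates are equicorrelated with rho."""
    # Var(mean) = sd^2 / n * (1 + (n - 1) * rho) for n equicorrelated variables
    se = sd_t * math.sqrt((1 + (n - 1) * rho) / n)
    return mean_t / se

print(naive_test(0.0))  # assuming independence: about -7.8
print(naive_test(0.8))  # strong dependence: about -1.3
```

Under independence the test statistic is large in magnitude; with a strong pairwise correlation it shrinks towards zero, which is exactly why footnote 25 argues that such a test cannot be applied without an assumption about the correlation between the estimated t-values.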

These results were criticised by J.J. DONOHUE and J. WOLFERS (2005). Their main arguments concern the instruments used and the influence on the results of Texas and California, the two states which have by far the highest numbers of executions. Their re-estimation leads to quite different results: the mean t-statistic is -1.43 with a standard deviation of 11.17.26)

[25] A second paper from this group of authors is by J.M. SHEPHERD (2004). She uses monthly state-level data for the period from 1977 to 1999 and, including state fixed effects, estimates linear least squares models as well as negative binomial regressions for the murder rate. The mean of her 32 estimated t-statistics is 3.17 with a standard deviation of 2.08. The linear model tells her that in 1999 each execution prevented about three homicides, and that an additional 4.5 homicides were prevented in this year by each death penalty sentence.27)

[26] Similar evidence is presented by H. DEZHBAKHSH and J.M. SHEPHERD (2006). Aside from some ‘exploratory’ time series regressions for the period from 1960 to 2000 they use

21. See also H. DEZHBAKHSH and J.M. SHEPHERD (2006), H. DEZHBAKHSH and P. RUBIN (2007), P.R. ZIMMERMANN (2004, 2006) as well as J.M. SHEPHERD (2004).

22. Similar results, which are, however, not discussed by the authors, are presented in J.R. LOTT and J. WHITLEY (2007).

23. Because statistical significance does not always imply economic significance as well, an obvious alternative would be to ask how many lives are saved by each execution. However, these data are only provided in some of the studies, and we would not be able to distinguish between statistically significant and insignificant effects.

24. They report that the results are robust with respect to changes of the functional form.

25. Given the number of observations, a t-test of whether this mean is significantly different from zero would lead to a highly significant result. Such a test assumes, however, that the observations are independent, an assumption which is, of course, strongly violated in this situation. Thus, such a test cannot be applied here; a meaningful test would require an assumption about the correlation between the estimated t-values.

26. In calculating the average t-statistics, those estimates exactly reproducing the results of other authors are always excluded.

27. See J.M. SHEPHERD (2004, p. 308).

state-level data from 1960 to 2000. They use a linear model with the current and lagged numbers of executions as deterrence variables and include state fixed effects. The mean t-statistic of their 9 time series and 17 panel estimates is -5.81 with a standard deviation of 2.46. According to these estimates, each execution saves about eight lives. These results are also criticised by J.J. DONOHUE and J. WOLFERS (2005). Using the same data set, they show that with other measures of the execution risk the deterrence variable is no longer significant. The mean t-statistic reported is -0.767 with a standard deviation of 1.01. This criticism is contradicted by H. DEZHBAKHSH and P. RUBIN (2007). Taking up the suggestions of J.J. DONOHUE and J. WOLFERS (2005), they present 47 different estimates with a mean t-statistic of -3.62 and a standard deviation of 2.94. They accuse J.J. DONOHUE and J. WOLFERS (2005) of data mining, i.e. of emphasising only those few results which do not indicate a significant deterrence effect.

[27] Another paper is by H.N. MOCAN and R.K. GITTINGS (2003). They use state data from 1977 to 1999 and get an average t-statistic across their 11 coefficients of -1.98 with a standard deviation of 0.24. Their conclusion is that every execution prevents about five homicides. This paper is again criticised by J.J. DONOHUE and J. WOLFERS (2005). Aside from correcting some programming errors, the main dispute between the two groups is about the construction of the deterrence variable. H.N. MOCAN and R.K. GITTINGS (2003) construct the probability of execution as the ratio of the number of last year’s executions to the number of death sentences seven years ago. The justification they provide for this procedure is that the average time spent on death row is six years.

J.J. DONOHUE and J. WOLFERS (2005) question this because it is highly implausible that a potential offender is informed about the number of death sentences seven years ago and performs this calculation. Therefore, citing the argument of P.R. ZIMMERMANN (2003, p. 170) “that any truly meaningful assessment a potential murderer makes” of the probability of an execution “is likely to be based on the most recent information available to him/her”, they use the ratio of last year’s executions to last year’s death sentences. Applying this measure, their re-estimation leads to a mean t-statistic of -0.64 with a standard deviation of 1.21. These results are more or less confirmed by H.N. MOCAN and R.K. GITTINGS (2006), who replicate many models using the specification of J.J. DONOHUE and J. WOLFERS (2005) and find insignificant results in most cases. However, they insist that their measure is correct, and performing another 195 regressions they show that it is robust insofar as a delay of four or five years does not alter the results qualitatively; excluding the specifications of J.J. DONOHUE and J. WOLFERS (2005), their mean t-statistic is -1.96 with a standard deviation of 0.83. The argument for a four- or five-year lag is, however, hardly more convincing than the one for a six-year lag; it remains implausible that potential offenders use this information for their calculations.28)

[28] The data of H.N. MOCAN and R.K. GITTINGS (2003) are also used by J. FAGAN (2006).

However, he also estimates the model with other specifications of the deterrence variable, with other data sources, and with Texas eliminated from the data. This results in 13 estimates with a mean t-statistic of -1.16 and a standard deviation of 1.17. In his paper he criticises the same studies that are also discussed in the current paper and concludes that “this cohort of studies and researchers, like Ehrlich before them, has created unjustified confidence in the minds of legislators, death penalty advocates, and a small group of legal scholars about the capacity of death sentences and executions to deter murder” (p. 319).

28. See also A.K. DILLS, J.A. MIRON and G. SUMMERS (2008, p. 10, FN 13): “their case for the assumed lag is not persuasive.”
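The disputed variable construction in [27] is easy to state in code. The counts below are made up purely for illustration; only the two formulas reflect the verbal descriptions in the text:

```python
# Hypothetical state-level counts (illustrative only; not from any study).
executions = {1997: 2, 1998: 5}
death_sentences = {1991: 10, 1992: 12, 1997: 8, 1998: 6}

def risk_mocan_gittings(year):
    # last year's executions relative to death sentences seven years ago
    return executions[year - 1] / death_sentences[year - 7]

def risk_donohue_wolfers(year):
    # last year's executions relative to last year's death sentences
    return executions[year - 1] / death_sentences[year - 1]

print(risk_mocan_gittings(1999))   # 5 / 12
print(risk_donohue_wolfers(1999))  # 5 / 6
```

The only difference is the lag in the denominator, yet with fluctuating sentencing counts the two measures can diverge substantially, which is what drives the disagreement over the estimated deterrence effect.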

[29] A further critique of the studies postulating a deterrence effect is provided by J. FAGAN, F.E. ZIMRING and A. GELLER (2006). They criticise the data used so far because they do not distinguish between those homicides that are punishable by death and the other ones. Using state-level panel data from 1978 to 2000, restricting the murder rates to those homicides that are punishable by death, and estimating linear as well as Poisson regressions, they do not find a significant effect of the existence of a death penalty statute or of the number of executions lagged one or two years on felony homicide rates. The mean of their 24 estimated t-statistics is -0.31 with a standard deviation of 1.24.

[30] Another author who does find a significant deterrent effect, however, is P.R. ZIMMERMANN (2004, 2006). In his 2004 paper he employs a panel of state-level data from 1978 to 1997. He takes into account the potential effect of murders on the probability of execution and compares the t-statistics of four OLS estimates with those of two TSLS estimates. While the mean of the OLS estimates is -1.31 with a standard deviation of 0.24, the mean of the TSLS estimates is -2.59. According to J.J. DONOHUE and J. WOLFERS (2005, p. 835), P.R. ZIMMERMANN’s (2004) most preferred specification implies that every execution saves 19 lives, with a confidence interval from 7 to 31 lives. Re-estimating this equation and clustering the standard errors by state in order to account for autocorrelation of the residuals, J.J. DONOHUE and J. WOLFERS (2005) derive, however, an interval ranging from 54 lives saved to 23 additional homicides caused per execution, i.e. the deterrence effect is no longer statistically significant. P.R. ZIMMERMANN (2009), on the other hand, suspects that clustering might not be the appropriate way to cope with the autocorrelation of the residuals and, using alternative methods, again finds significant deterrent effects, but only as long as deterrence is measured by the probability of execution given conviction. If it is measured by the probability of conviction given arrest, the estimated coefficients are still negative but no longer significantly different from zero.
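The effect of clustering in [30] can be reproduced in miniature. The simulated panel below is entirely hypothetical and stands in for a state-year panel; it only demonstrates the mechanism: when residuals are correlated within states, a cluster-robust standard error (here for the simplest possible estimator, a grand mean) exceeds the conventional one computed under independence, and confidence intervals widen accordingly:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_years = 50, 20

# A state-specific shock makes residuals correlated within each state.
state_effect = rng.normal(0.0, 1.0, n_states)
y = state_effect[:, None] + rng.normal(0.0, 1.0, (n_states, n_years))

resid = y - y.mean()  # residuals around the estimated grand mean
n_obs = y.size

# Conventional standard error, assuming independent observations
se_iid = resid.std(ddof=1) / np.sqrt(n_obs)

# Cluster-robust standard error: residuals are summed within each state
# first, so within-state correlation inflates the variance estimate.
cluster_sums = resid.sum(axis=1)
se_cluster = np.sqrt((cluster_sums ** 2).sum()) / n_obs

print(se_iid, se_cluster)  # the clustered standard error is several times larger
```

Wider standard errors of this kind are what turn the narrow interval of 7 to 31 lives saved into the much wider interval, spanning zero, that J.J. DONOHUE and J. WOLFERS (2005) report.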

[31] P.R. ZIMMERMANN (2006) investigates whether the method by which the death penalty is carried out has an effect on deterrence. Using a panel of state-level data from 1978 to 2000, he finds that only electrocution has a deterrent effect; none of the other four methods, i.e. lethal injection, gas chamber asphyxiation, hanging, and firing squad, has a statistically significant effect. His average t-statistic is -1.05 with a standard deviation of 1.19. Correspondingly, he argues against the change from electrocution to lethal injection that took place in several death penalty states, insofar as one hopes that capital punishment has a deterrent effect on potential murderers.