Christina Gerberding (Deutsche Bundesbank), Franz Seitz (University of Applied Sciences Amberg-Weiden), Discussion Paper Series 1: Economic Studies No ...
Although the hybrid New Keynesian model has been used widely to analyse the performance of monetary policy rules, there is still considerable disagreement about the appropriate choice of values for key model parameters such as the degree of forward-lookingness of the Phillips curve. Depending on the details of the specification, on the estimation method and on the sample period, existing estimates of these parameters differ widely. Hence, it is important to analyse the robustness of the results to variations in the numerical values chosen for key coefficients. In this exercise, we assume that policymakers know the underlying model coefficients and optimise the coefficients of the respective rules subject to this information (this assumption is changed in the next section). Figure 8 shows the losses associated with each of the three policy rules for different values of (a) the degree of backward-lookingness of the Phillips curve, (b) the degree of backward-lookingness of the IS curve, (c) the interest rate elasticity σ, (d) the output-gap elasticity k, (e) the standard deviation of the cost-push shock, (f) the standard deviation of the IS shock, and (g) the standard deviation of the money demand shock.
Overall, the ranking of the policy rules is quite robust to reasonable changes in the model coefficients. However, some of the results deserve a closer look. First, increasing the degree of backward-lookingness in the Phillips curve to values above 0.4/0.5 strongly increases the losses for all rules, but particularly so for the simple speed limit rule. This makes sense, as the benefits of a speed limit policy over a conventional Taylor rule rest on its ability to stabilise private sector inflation expectations. In a purely backward-looking model, this channel is absent, and hence, there is no role for inertia and history dependence. However, as described in Section 3.1, the available evidence
suggests that the degree of intrinsic inflation inertia is rather low in the euro area, so that values of γ beyond 0.5 may be considered to lie outside the range of plausible values, at least as far as the euro area is concerned.
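To convey the mechanics of such a robustness check, the following Python sketch traces the loss of two stylised rules as the degree of inflation inertia γ varies. It uses a purely backward-looking two-equation stand-in for the hybrid model, and every coefficient, including the rule parameters, is an illustrative placeholder rather than an estimate from the paper:

```python
import numpy as np

def simulate_loss(gamma, rule, T=20000, seed=0, kappa=0.1, rho=0.8,
                  sigma=0.2, lam=1.0, nu=0.1):
    """Monte Carlo loss of a policy rule in a purely backward-looking
    two-equation stand-in for the hybrid model (illustrative only)."""
    rng = np.random.default_rng(seed)
    pi = y = i = 0.0
    pis, ys, dis = [], [], []
    for _ in range(T):
        e, u = rng.normal(size=2)                 # cost-push and IS shocks
        y_new = rho * y - sigma * (i - pi) + u    # backward-looking IS curve
        pi_new = gamma * pi + kappa * y_new + e   # backward-looking Phillips curve
        i_new = rule(pi_new, y_new, y, i)         # policy rule
        pis.append(pi_new); ys.append(y_new); dis.append(i_new - i)
        pi, y, i = pi_new, y_new, i_new
    # Loss: var(inflation) + lam*var(output gap) + nu*var(interest-rate change)
    return np.var(pis) + lam * np.var(ys) + nu * np.var(dis)

# Placeholder rule coefficients, not the optimised values from the paper
taylor = lambda pi, y, y_lag, i_lag: 1.5 * pi + 0.5 * y
speed_limit = lambda pi, y, y_lag, i_lag: 0.8 * i_lag + 1.5 * pi + 0.5 * (y - y_lag)

for g in (0.2, 0.5, 0.8):
    print(f"gamma={g}: Taylor {simulate_loss(g, taylor):.2f}, "
          f"speed limit {simulate_loss(g, speed_limit):.2f}")
```

Because the stand-in model is entirely backward-looking, the sketch only illustrates how losses can be traced out over γ; it does not reproduce the expectational channel that favours the speed limit rules in the paper's hybrid model.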
Secondly, it is also interesting to consider the implications of introducing a forward-looking element into the IS curve. To do so, we follow Rudebusch (2002) and Stracca (2007) and rewrite the IS curve as:

y_t = μ_y y_{t-1} + (1 - μ_y) E_t y_{t+1} - σ(i_t - E_t π_{t+1}) + ε_t,

where μ_y is the degree of backward-lookingness in the IS curve. As shown in the second graph (first row) of Figure 8, introducing a forward-looking element into the IS curve does not change the overall ranking of the rules, but it decreases the expected losses as well as the differences in expected loss between the respective rules. As we have seen above, in the benchmark model with a purely backward-looking IS curve and a high degree of persistence in output movements, it is optimal for policymakers to respond strongly to demand shocks. However, if demand becomes more forward-looking, the current level of output will increasingly depend on expected future interest rates.
Rational agents will understand that future interest rates will increase as the present output gap contributes to future inflation, and the increase in interest rate expectations will have a contractionary effect on demand. Hence, there will be less need to react strongly to output (growth). On the other hand, it will become increasingly attractive to reinforce the interest-rate expectations channel by responding to the lagged interest rate.
Hence, the coefficient on the lagged interest rate in the Taylor rule will increase and the rules will become more similar.
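The coefficient optimisation referred to throughout this section amounts to minimising expected loss over a set of candidate rule coefficients. A minimal sketch, again using an illustrative backward-looking stand-in model with placeholder values, is a coarse grid search that includes the weight on the lagged interest rate among the free coefficients:

```python
import numpy as np

def expected_loss(coefs, gamma=0.2, T=4000, seed=0, kappa=0.1,
                  rho=0.8, sigma=0.2, lam=1.0, nu=0.1):
    """Loss of a smoothed Taylor rule i = rho_i*i(-1) + phi_pi*pi + phi_y*y
    in an illustrative backward-looking model (placeholder coefficients)."""
    rho_i, phi_pi, phi_y = coefs
    rng = np.random.default_rng(seed)
    pi = y = i = 0.0
    pis, ys, dis = [], [], []
    for _ in range(T):
        e, u = rng.normal(size=2)
        y = rho * y - sigma * (i - pi) + u     # IS curve
        pi = gamma * pi + kappa * y + e        # Phillips curve
        i_new = rho_i * i + phi_pi * pi + phi_y * y
        pis.append(pi); ys.append(y); dis.append(i_new - i)
        i = i_new
    return np.var(pis) + lam * np.var(ys) + nu * np.var(dis)

# Coarse grid over the rule coefficients; finer grids or a numerical
# optimiser would be used in practice
grid = [(r, p, q) for r in np.linspace(0.0, 0.9, 5)
                  for p in (1.0, 1.5, 2.0)
                  for q in np.linspace(0.0, 1.0, 6)]
best = min(grid, key=expected_loss)
print("optimised (rho_i, phi_pi, phi_y):", best)
```

The grid-search machinery is unchanged when the model becomes forward-looking; only the simulated loss surface, and hence the optimised weight on the lagged interest rate, shifts.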
Thirdly, and not surprisingly, the performance of the money-augmented speed limit rule relative to the other two rules depends on the prevalence of money demand shocks. As shown in the last graph of Figure 8, increasing the standard deviation of the money demand shock worsens the performance of SPLM relative to the simple speed limit rule, gradually eroding the welfare gain present at baseline parameter values.
4.3. Robustness to parameter uncertainty

In the last section, we examined the relative performance of the simple rules considered under different parameterisations of the hybrid New Keynesian model.
However, as pointed out by Rudebusch (2002), exercises of this type do not capture the model uncertainty faced by monetary policymakers. In practice, policymakers do not know the true values of the model coefficients and would like to have a strategy for monetary policy that will work well even if the coefficients deviate from the policymaker's best (baseline) guess. During the past decade, the academic literature has developed a growing number of methods to deal with this issue, ranging from the robust control approach developed by Hansen and Sargent to approaches which allow for competing reference models (for an overview, see Brock et al., 2003). While a fully-fledged application of these methods is beyond the scope of the present paper, we will try to shed some light on the issue of robustness to parameter uncertainty by looking at a few special cases.
As mentioned above, the existing literature has identified the degree of endogenous inertia in the inflation process as one of the most critical parameters affecting the evaluation of alternative policies. Hence, it is of particular interest to examine the robustness of our results to misperceptions about the degree of inflation persistence. Table 7 shows the losses which result from applying the rules optimised for three different values of γ (0.0; 0.2; 0.4) in a range of models with varying values of γ.
For example, the results in the middle column are relevant for the policymaker who perceives 0.2 to be the most likely value of γ and optimises the policy rule for that situation. However, the policymaker must consider the performance of the rule if the actual value is not equal to 0.2. As becomes apparent when comparing the losses of the three rules optimised for γ=0.2 for different true values of γ, the money-augmented speed limit rule dominates the other two rules across all possible values of γ considered.
The same holds for the rules optimised for γ=0 and γ=0.4. Therefore, we can conclude that the ranking of the rules under the baseline model parameterisation is robust to misperceptions about the degree of inflation inertia within an empirically plausible range of uncertainty about this parameter.
Taking a closer look at the losses under the money-augmented speed limit rule, we find that the rule is quite robust to overestimation of the degree of inflation inertia: if the perceived γ is greater than the true one, losses go up (compared to the case when policymakers correctly estimate γ), but the increase is fairly limited. Underestimating the degree of inflation inertia results in somewhat higher losses, especially if the true degree of inflation inertia lies at the upper end of the range. Hence, a risk-averse policymaker may prefer to adopt the rule which has been optimised for γ = 0.4. In this respect, our results are in line with those of Walsh (2004) who finds that overestimating the persistence in the inflation process results in a more robust rule than is obtained if the persistence is underestimated.
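The logic behind this misperception experiment can be mimicked in a few lines: optimise a rule for a perceived γ, then evaluate it under different true values. The sketch below substitutes an illustrative backward-looking model and a coarse grid of speed-limit-type rules for the paper's hybrid model, so the numbers it prints are placeholders, not the Table 7 entries:

```python
import numpy as np
from itertools import product

def loss(coefs, gamma, T=4000, seed=0, kappa=0.1, rho=0.8,
         sigma=0.2, lam=1.0, nu=0.1):
    """Loss of i = rho_i*i(-1) + phi_pi*pi + phi_dy*(y - y(-1))
    in an illustrative backward-looking model."""
    rho_i, phi_pi, phi_dy = coefs
    rng = np.random.default_rng(seed)
    pi = y = i = 0.0
    pis, ys, dis = [], [], []
    for _ in range(T):
        e, u = rng.normal(size=2)
        y_new = rho * y - sigma * (i - pi) + u
        pi = gamma * pi + kappa * y_new + e
        i_new = rho_i * i + phi_pi * pi + phi_dy * (y_new - y)
        pis.append(pi); ys.append(y_new); dis.append(i_new - i)
        y, i = y_new, i_new
    return np.var(pis) + lam * np.var(ys) + nu * np.var(dis)

GRID = list(product((0.0, 0.4, 0.8),      # rho_i
                    (1.0, 1.5, 2.0),      # phi_pi
                    (0.0, 0.5, 1.0)))     # phi_dy
GAMMAS = (0.0, 0.2, 0.4)

# Rows: rule optimised for a perceived gamma; columns: loss at each true gamma
for perceived in GAMMAS:
    rule = min(GRID, key=lambda c: loss(c, perceived))
    row = [loss(rule, g) for g in GAMMAS]
    print(f"optimised for gamma={perceived}:", [round(x, 2) for x in row])
```

By construction, the diagonal entries (rule evaluated at the γ it was optimised for) are the smallest losses attainable on the grid for that true γ; the off-diagonal entries show the cost of misperception.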
Another important aspect is whether the optimised rules are robust to misperceptions about the true level of output-gap uncertainty. Table 8 shows the losses under different assumptions regarding the true and perceived parameters of the measurement error process. Consider first the case where the rules have been optimised for baseline model coefficients and baseline uncertainty. Again, we find that the money-augmented speed limit rule dominates the other two rules across all possible degrees of output gap uncertainty considered here. The same is true when the policy rules are optimised for a low or a high degree of uncertainty. However, when policymakers use the optimised no-uncertainty rules, the results are somewhat different. In that case, the Taylor rule dominates the speed limit rules when the true degree of output gap uncertainty is low or baseline. On the other hand, if the true degree of output gap uncertainty is high, the naïve use of the optimised no-uncertainty Taylor rule results in a much higher loss than either variant of the speed limit rule.
The fact that strongly underestimating the true degree of output gap uncertainty leads to substantial losses, especially in the case where policymakers do not account for uncertainty, suggests that it may again be better to overestimate the level of output gap uncertainty rather than to underestimate it. In fact, a policymaker who follows a strategy of minimising the worst-case loss will always choose the money-augmented speed limit rule, with the coefficients optimised under the assumption of worst-case output gap uncertainty. Overall, these results are in line with those of Orphanides and Williams (2002) who find that the costs of underestimating the degree of uncertainty are much larger than the costs of overestimating it. Thus, a risk-avoidance strategy would call for over-emphasising the problem of data uncertainty and measurement errors.
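The worst-case criterion used here is simply a minimax selection over a rule-by-scenario loss matrix. In the sketch below the matrix entries are invented for illustration (shaped to match the qualitative pattern described above, not taken from Table 8):

```python
import numpy as np

# Hypothetical loss matrix: rows = candidate rules, columns = scenarios
# (low / baseline / high output-gap uncertainty). Numbers are illustrative.
rules = ["Taylor", "speed limit", "money-augmented speed limit"]
losses = np.array([
    [1.10, 1.45, 2.60],   # Taylor
    [1.20, 1.40, 1.90],   # speed limit
    [1.15, 1.30, 1.70],   # money-augmented speed limit
])

worst_case = losses.max(axis=1)            # worst loss of each rule
minimax = rules[int(worst_case.argmin())]  # rule minimising the worst case
print(minimax)  # -> money-augmented speed limit
```

With these illustrative numbers, the Taylor rule is best in the low-uncertainty scenario but fares worst under high uncertainty, so the minimax criterion selects the money-augmented speed limit rule.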
5. Conclusions and outlook

In the present paper, we have extended the analysis of simple monetary policy rules for the euro area conducted by Stracca (2007) to the case where policymakers face measurement problems with respect to both actual and potential output. To sum up, we have found that a speed limit rule which includes an additional response to money growth outperforms both the standard speed limit rule and more conventional Taylor rules (with and without money) once we account for a realistic degree of output gap uncertainty. The main reason for the welfare gain is that the information on current output growth contained in money growth data allows the central bank to reduce its response to current inflation, thus enabling it to avoid inefficient reactions to cost-push shocks.
One reason why we consider these results to be interesting is that they differ from those of Rudebusch (2002), who concludes that augmenting the Taylor rule with a response to output growth does little to improve its performance even with plausible data measurement errors. Moreover, they also differ from the findings of Coenen et al.
(2005) as well as Lippi and Neri (2007), who conclude that money has fairly limited information content as an indicator of contemporaneous aggregate demand in the euro area. Obviously, all of these results are conditional on the structure of the models used, and it is certainly necessary to check their robustness in richer models of the monetary transmission mechanism. One obvious limitation is that in the simple New Keynesian model underlying our analysis, money has no causal role in influencing output or inflation, but is simply one potential indicator of current economic activity (and thus of incipient inflationary pressure). In this sense, our results provide a lower bound for the usefulness of money in simple monetary policy rules. It would also be interesting to repeat the analysis in a model which captures the empirically well-established role of money as a leading indicator of changes in trend inflation. This is an important task for future research.
References

Amato, J. and N. Swanson (2001), The real-time predictive content of money for output, Journal of Monetary Economics 48 (1), 3-24.
Andrés, J., J. López-Salido and E. Nelson (2007), Money and the natural rate of interest: structural estimates for the United States and the euro area, Federal Reserve Bank of St. Louis Working Paper 2007-005A.
Aoki, K. (2006), Price-level determination under dispersed information and monetary policy, The Japanese Economic Review 57 (3), 406–425.
Beck, G. and V. Wieland (2007), Money in monetary policy design: A formal characterization of ECB-style cross-checking, Journal of the European Economic Association 5 (2-3), 524-533.
Berg, C., P. Jansson and A. Vredin (2006), How useful are simple rules for monetary policy? The Swedish experience, Riksbank Research Paper Series No. 12, August 2006.
Brock, W., S. Durlauf and K. West (2003), Policy evaluation in uncertain economic environments, Brookings Papers on Economic Activity 1, 235-301.
Calvo, G. (1983), Staggered prices in a utility maximizing framework, Journal of Monetary Economics 12, 383-398.
Clarida, R., J. Galí and M. Gertler (1998), Monetary policy rules in practice: Some international evidence, European Economic Review 42, 1033-1067.