
Article information

  • Title: The New Keynesian Phillips Curve and lagged inflation: a case of spurious correlation?
  • Authors: Hall, Stephen G.; Hondroyiannis, George; Swamy, P.A.V.B.
  • Journal: Southern Economic Journal
  • Print ISSN: 0038-4038
  • Year: 2009
  • Issue: October
  • Language: English
  • Publisher: Southern Economic Association
  • Keywords: Inflation (Economics); Inflation (Finance); Keynesian economics; Method of moments (Statistics); Monetary policy

The New Keynesian Phillips Curve and lagged inflation: a case of spurious correlation?


Hall, Stephen G.; Hondroyiannis, George; Swamy, P.A.V.B.


1. Introduction

The New Keynesian Phillips Curve (NKPC) is a key component of much recent theoretical work on inflation. Unlike traditional formulations of the Phillips curve, the NKPC is derivable explicitly from a model of optimizing behavior on the part of price setters, conditional on the assumed economic environment (for example, monopolistic competition, constant elasticity demand curves, and randomly arriving opportunities to adjust prices) (Walsh 2003). In contrast to the traditional specification, in the NKPC framework current expectations of future inflation, rather than past inflation rates, shift the curve (Woodford 2003). Also, the NKPC implies that inflation depends on real marginal cost, and not directly on either the gap between actual output and potential output or the deviation of the current unemployment rate from the natural rate of unemployment, as is typical in traditional Phillips curves (Walsh 2003). A major advantage of the NKPC compared with the traditional Phillips curve is said to be that the latter is a reduced-form relationship; whereas, the NKPC has a clear structural interpretation so that it can be useful for interpreting the impact of structural changes on inflation (Gali and Gertler 1999).

Although the NKPC is appealing from a theoretical standpoint, empirical estimates of the NKPC have, by and large, not been successful in explaining the stylized facts about the dynamic effects of monetary policy, whereby monetary policy shocks are thought to first have an effect on output, followed by a delayed and gradual effect on inflation (Mankiw 2001; Walsh 2003). To deal with what some authors (for example, McCallum 1999; Mankiw 2001; Dellas 2006a, b) believe to be inflation persistence in the data, (1) a response typically found in the literature is to augment the NKPC with lagged inflation on the supposition that lagged inflation receives weight in these equations because it contains information on the driving variables (that is, the variables driving inflation), thereby yielding a "hybrid" variant of the NKPC. A general result emerging from the empirical literature is that the coefficient on lagged inflation is positive and significant, with some authors (for example, Fuhrer 1997; Rudebusch 2002; Rudd and Whelan 2005) finding that inflation is predominantly backward looking.

The hybrid NKPC, however, is itself subject to several criticisms. First, derivations of the hybrid specifications typically rely on backward-looking rules of thumb, so that a "more coherent rationale for the role of lagged inflation" has yet to be provided (Gali, Gertler, and Lopez-Salido 2005, p. 1117). In effect we are losing all the supposed advantages of the clear microfoundations. Second, the idea that the important role assigned to lagged inflation derives from its use as a proxy for expected future inflation is contradicted by the large estimates of the effects of lagged inflation obtained even in specifications that include the discounted sums of future inflations (Rudd and Whelan 2005, p. 1179). (2)

The contention made in this article is that the standard model estimated within the NKPC paradigm is subject to a number of serious econometric problems and that these problems lead not only to ordinary least squares (OLS) being a biased estimator of the true underlying parameters, but that generalized method of moments (GMM) is also subject to these problems in this instance. We will demonstrate that, while GMM and instrumental variables can correctly deal with the standard problem of measurement error and endogeneity, if there are also missing variables and a misspecified functional form, then no valid instruments will exist and GMM becomes inconsistent. Consequently, we argue that the finding of a need for lagged inflation may be a direct result of the biases caused by estimation problems rather than a flaw with the underlying economic theory. We will make this case first at a theoretical level, showing that economic theory clearly suggests both that the standard form of the NKPC is misspecified and that it is subject to omitted variables and misspecified functional form; hence, we will show that GMM is inconsistent. Second, we will apply a time varying coefficient (TVC) estimation procedure that aims to yield consistent estimates under these circumstances, and which finds a coefficient on expected inflation that is essentially unity.

The remainder of this article is divided into three sections. Section 2 briefly summarizes the theoretical derivation of the NKPC and stresses the simplifying assumptions that imply the misspecification of the model. It then goes on to outline the estimation strategy used in this article, building on the work of Swamy et al. (2008). (3) We contrast our TVC estimation approach with that of the GMM, which has been widely applied in previous empirical studies of NKPCs (e.g., Gali and Gertler 1999; Gali, Gertler, and Lopez-Salido 2005; Linde 2005). Section 3 presents empirical results of NKPCs using U.S. quarterly data. We demonstrate that GMM produces the usual result of significant lagged inflation rates while our estimation approach provides coefficients that are much more closely in line with the microfoundations. Section 4 concludes.

2. Theoretical Considerations and Empirical Methodology

The NKPC Is a Misspecified Model

There are a number of ways of deriving the NKPC. A standard way of doing so is based on a model of price setting by monopolistically competitive firms (Gali and Gertler 1999). (4) Following Calvo (1983), firms are allowed to reset their price at each date with a given probability $(1-\theta)$, implying that firms adjust their price taking into account expectations about future demand conditions and costs, and that a fraction $\theta$ of firms keep their prices unchanged in any given period. Aggregation across all firms produces the following NKPC equation in log-linearized form

$\hat{\pi}_t = \beta E_t \hat{\pi}_{t+1} + \lambda_1 s_t + \eta_{0t}$ (1)

where $\hat{\pi}_t$ is the inflation rate, $E_t\hat{\pi}_{t+1}$ is inflation in period $t+1$ as expected in period $t$, $s_t$ is the (logarithm of) average real marginal cost in percent deviation from its steady-state level, and $\eta_{0t}$ is a random error term. The coefficient $\beta$ is a discount factor for profits lying between 0 and 1, and $\lambda_1 = (1-\theta)(1-\beta\theta)/\theta$ is a positive parameter, where $\theta$ is the probability that a firm keeps its price unchanged in any given quarter; $\hat{\pi}_t$ increases when real marginal cost, which is a measure of excess demand, increases. Since marginal cost is unobserved, in empirical applications real unit labor cost ($ulc_t$) is often used as its proxy. (5)
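The mapping from the Calvo parameters to the slope $\lambda_1$ can be sketched numerically. The parameter values below are illustrative assumptions, not estimates from the article:

```python
# Slope of the pure NKPC (Equation 1) implied by Calvo pricing.
# The theta and beta values used below are illustrative assumptions.

def nkpc_slope(theta: float, beta: float) -> float:
    """lambda_1 = (1 - theta) * (1 - beta * theta) / theta."""
    return (1.0 - theta) * (1.0 - beta * theta) / theta

# theta = 0.75 means prices stay fixed for about four quarters on average.
slope = nkpc_slope(theta=0.75, beta=0.99)
# Stickier prices (higher theta) flatten the curve:
flatter = nkpc_slope(theta=0.9, beta=0.99)
```

The comparison makes the comparative static visible: as the probability of keeping prices fixed rises, the inflation response to marginal cost shrinks.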

If we look a little deeper into the microfoundations, however, we find a number of serious simplifications that underlie this equation. Batini, Jackson, and Nickell (2005) emphasize the underpinnings of the NKPC. They begin their derivation with a Cobb-Douglas production function in which capital is replaced by a variable labor-productivity rate. They then assume a representative firm with a simple quadratic cost-minimization objective function and derive a standard NKPC, which even then includes terms in employment. Later in the same article, they generalize the NKPC to an open-economy case, at which point a number of extra variables play an important part, including foreign prices, exchange rates, and oil prices. Given this derivation, it is clear that the standard NKPC involves the following simplifications:

* The basic functional form is misspecified. In the standard derivations the NKPC is a linearization of a theoretical formulation based on quadratic costs and Cobb-Douglas technology. In fact, both of these assumptions are unrealistic. Cobb-Douglas technology is almost always rejected wherever it is tested, and so the real production function must be more complex. Similarly, quadratic objective functions are convenient, but far from realistic. Clearly, according to the theory, the NKPC is a linear version of a much more complex nonlinear model.

* The basic NKPC omits a potentially large number of relevant variables. Batini, Jackson, and Nickell (2005) emphasize the need to include exchange rates, foreign prices, oil prices, employment, and a labor-productivity variable. The representative-firm assumption could well mean that variables capturing firm heterogeneity are also important.

* The variables used in the NKPC are almost certainly measured with error. For example, unit labor costs can only be modeled as the labor share under Cobb-Douglas technology. A constant elasticity of substitution (CES) function would involve a much richer set of variables to properly capture the real wage, but even this function would be only an approximation, as empirical support for CES technology is not overwhelming. Clearly, the representative-firm assumption also suggests that average or total measures of labor share may not be the correct measure. Additionally, there are well-known problems in measuring inflation itself.

Thus, the case is very strong, from a theoretical perspective, that any of the standard NKPC models would be subject to measurement error, omitted variable bias, and a misspecified functional form.

The response of many authors to the poor estimation results often produced by the NKPC is to find largely "ad hoc" reasons for augmenting the NKPC with lags. Many authors assume that firms can save costs if prices are changed between price adjustment periods according to a rule of thumb. For example, Gali and Gertler (1999) assume that only a portion $(1-\rho)$ of firms is forward-looking and the rest are backward-looking. This implies that only a fraction $(1-\rho)$ of firms set their prices optimally, and the rest employ a rule of thumb based on past inflation. More recently, Christiano, Eichenbaum, and Evans (2005) assume that all firms adjust their price each period, but some are not able to re-optimize, so they index their price to lagged inflation. Under these assumptions, the hybrid NKPC, which includes lagged inflation, can be derived as follows:

$\hat{\pi}_t = \omega_f E_t \hat{\pi}_{t+1} + \lambda_2 s_t + \omega_b \hat{\pi}_{t-1} + \eta_{1t}$ (2)

where $\hat{\pi}_{t-1}$ is lagged inflation and $\eta_{1t}$ is a random error term. The reduced-form parameter $\lambda_2$ is defined as $\lambda_2 = (1-\rho)(1-\theta)(1-\beta\theta)\phi^{-1}$ with $\phi = \theta + \rho[1-\theta(1-\beta)]$. Finally, the two reduced-form parameters $\omega_f$ and $\omega_b$ can be interpreted as the weights on the "forward-" and "backward-looking" components of inflation and are defined as $\omega_f = \beta\theta\phi^{-1}$ and $\omega_b = \rho\phi^{-1}$, respectively. Unlike the "pure" NKPC, the hybrid NKPC is not derived from an explicit optimization problem.
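As a quick consistency check on these definitions, the reduced-form weights can be computed from illustrative structural values (the numbers are assumptions, not the article's estimates); setting $\rho = 0$, so that no firms are backward-looking, collapses the hybrid curve back to the pure NKPC of Equation 1:

```python
# Reduced-form parameters of the hybrid NKPC (Equation 2) as functions of the
# structural parameters. The theta, beta, rho values are illustrative assumptions.

def hybrid_nkpc_params(theta: float, beta: float, rho: float):
    phi = theta + rho * (1.0 - theta * (1.0 - beta))
    lam2 = (1.0 - rho) * (1.0 - theta) * (1.0 - beta * theta) / phi
    omega_f = beta * theta / phi     # weight on expected future inflation
    omega_b = rho / phi              # weight on lagged inflation
    return lam2, omega_f, omega_b

# With no backward-looking firms (rho = 0), phi = theta, so omega_b = 0,
# omega_f = beta, and lam2 equals lambda_1 of the pure NKPC.
lam2, wf, wb = hybrid_nkpc_params(theta=0.75, beta=0.99, rho=0.0)
```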

Assuming rational expectations and that the error terms $\eta_{1t}$, $t = 1, 2, \ldots$, are identically and independently distributed (i.i.d.), many researchers employ the GMM procedure to estimate the NKPC and/or its hybrid version. Under GMM estimation, $E_t\hat{\pi}_{t+1}$ is replaced by $\hat{\pi}_{t+1}$, the actual inflation rate in $t+1$, and the method of instrumental variables is used to obtain consistent estimates of the parameters of Equation 2, since $\hat{\pi}_{t+1}$ is correlated with $\eta_{1t}$. The instrumental variables are correlated with $\hat{\pi}_{t+1}$, $ulc_t$, and $\hat{\pi}_{t-1}$, but not with $\eta_{1t}$. The condition that $E(\eta_{1t} \mid z_{t-1}) = 0$, where $z_{t-1}$ is a vector of instruments dated $t-1$ and earlier that is assumed to be orthogonal to $\eta_{1t}$, implies the following orthogonality condition:

$E_t[(\hat{\pi}_t - \omega_f \hat{\pi}_{t+1} - \lambda_2\, ulc_t - \omega_b \hat{\pi}_{t-1})\, z_{t-1}] = 0.$ (3)
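A minimal sketch of how a moment condition of this form is exploited in practice: on synthetic data (not the article's), two-stage least squares uses instruments that are correlated with the regressors but orthogonal to the error, and so recovers the reduced-form parameters even though one regressor is endogenous. All variable names and parameter values below are hypothetical.

```python
# Synthetic illustration of IV/2SLS under an orthogonality condition like (3).
# true_beta stands in for (omega_f, lambda_2, omega_b); all values hypothetical.
import numpy as np

rng = np.random.default_rng(0)
T = 200_000
true_beta = np.array([0.6, 0.1, 0.35])

e = rng.normal(size=T)                       # structural error (eta_1t)
Z = rng.normal(size=(T, 3))                  # instruments dated t-1: orthogonal to e
X = Z + rng.normal(scale=0.5, size=(T, 3))   # regressors, driven by the instruments
X[:, 0] += 0.5 * e                           # first regressor is endogenous

y = X @ true_beta + e

# OLS is inconsistent for the endogenous column:
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# 2SLS: project X on Z, then regress y on the fitted values.
first_stage = np.linalg.lstsq(Z, X, rcond=None)[0]
X_hat = Z @ first_stage
b_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]
# b_2sls lands near true_beta, while b_ols[0] is biased upward
```

Note that this works only because the instruments here are, by construction, uncorrelated with the error; the article's central claim, developed below, is that this construction is impossible when the error contains omitted variables and functional-form misspecification.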

To deal with the problems associated with estimation of the standard NKPC, one way to proceed would be to argue that, if the standard NKPC is misspecified, we should be able to derive a better specification and obtain a new set of estimates from it; such an approach would be in the spirit of Batini, Jackson, and Nickell (2005). However, our view is that while we can be certain that the simple version of the NKPC in Equation 1 or 2 is misspecified, we can never know the true model. Whatever specification we choose will inevitably involve some omitted variables, mismeasured variables, and a misspecified functional form. In effect, we may be confident that Cobb-Douglas technology, for example, is wrong, but we certainly do not know the correct specification for a production function. Nor do we know all the omitted variables that may be important. The strategy adopted here is to employ an estimation technique that aims to give consistent estimates of the two key parameters ($\beta$ and $\lambda_1$, in the case of the NKPC represented by Equation 1) in the presence of unknown specification errors. (6)

In the next section, we will demonstrate that, given the multiple forms of misspecification to which the NKPC is subject, the standard GMM estimator cannot be consistent. We will then outline an alternative estimation strategy that can estimate some of the structural parameters of a relationship without specifying either the true or complete model. (7)

A New Estimation Strategy

When studying the relation of a dependent variable, denoted by $y^*_t$, to a hypothesized set of $K-1$ of its determinants, denoted by $x^*_{1t}, \ldots, x^*_{K-1,t}$, where these $K-1$ variables may be only a subset of the complete set of determinants of $y^*_t$, a number of problems may arise. Any specific functional form may be incorrect and may therefore lead to specification errors resulting from functional-form biases. Another problem is that $x^*_{1t}, \ldots, x^*_{K-1,t}$ may not exhaust the complete list of the determinants of $y^*_t$, in which case the relation of $y^*_t$ to $x^*_{1t}, \ldots, x^*_{K-1,t}$ may be subject to omitted-variables biases. In addition to these problems, the available data on $y^*_t, x^*_{1t}, \ldots, x^*_{K-1,t}$ may not be perfect measures of the underlying true variables, causing errors-in-variables problems. In what follows, we propose the correct interpretations and an appropriate method of estimation of the coefficients of the relationship between $y^*_t$ and $x^*_{1t}, \ldots, x^*_{K-1,t}$ in the presence of the foregoing problems.

Suppose that $T$ measurements on $y^*_t, x^*_{1t}, \ldots, x^*_{K-1,t}$ are made and these measurements are in fact the sums of "true" values and measurement errors: $y_t = y^*_t + v_{0t}$ and $x_{jt} = x^*_{jt} + v_{jt}$, $j = 1, \ldots, K-1$, $t = 1, \ldots, T$, where the variables $y_t, x_{1t}, \ldots, x_{K-1,t}$ without an asterisk are the observable variables, the variables with an asterisk are the unobservable "true" values, and the $v$s are measurement errors. Also, given the possibilities that the functional form we are estimating may be misspecified and that some important variables may be missing from $x_{1t}, \ldots, x_{K-1,t}$, we need a model that will capture all these potential problems.
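The errors-in-variables problem in this setup can be illustrated with a small simulation, assuming classical (additive, independent) measurement error: OLS on the mismeasured regressor is attenuated toward zero relative to the true slope. The data-generating values are hypothetical.

```python
# Attenuation bias from classical measurement error (x_t = x*_t + v_t).
# All parameter values are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(1)
T = 100_000
x_true = rng.normal(size=T)                         # unobserved true regressor x*
y = 2.0 * x_true + rng.normal(scale=0.5, size=T)    # true slope is 2
x_obs = x_true + rng.normal(size=T)                 # observed with error, var(v) = 1

slope_true = (x_true @ y) / (x_true @ x_true)       # close to 2.0
slope_obs = (x_obs @ y) / (x_obs @ x_obs)           # plim = 2 * var(x*)/(var(x*)+var(v)) = 1.0
```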

It is useful at this point to clarify what we believe is the main objective of econometric estimation. In our view, the objective is to obtain unbiased estimates of the effect on a dependent variable of changing one independent variable while holding constant all other relevant independent variables. That is to say, we aim to find an unbiased estimate of the partial derivative of $y^*_t$ with respect to any one of the $x^*_{jt}$, $j = 1, \ldots, K-1$. This is of course the standard interpretation usually placed on the coefficients of a typical econometric model, but its validity depends crucially on the assumption that the conventional model gives unbiased coefficients, which, of course, is not the case in the presence of model misspecification.

One way to proceed is to specify a set of time-varying coefficients that provide a complete explanation of the dependent variable y. Consider the relationship

$y_t = \gamma_{0t} + \gamma_{1t} x_{1t} + \cdots + \gamma_{K-1,t} x_{K-1,t}$ (4)

which we call "the time-varying coefficient (TVC) model." (Note that this equation is formulated in terms of the observed variables.) As this model provides a complete explanation of y, all the misspecification in the model, as well as the true coefficients, must be captured by the TVCs. Note that, if the true functional form is nonlinear, one of the components of each of the TVCs in Equation 4 may be thought of as a partial derivative of the true nonlinear structure and so the TVCs are able to capture any possible function. These TVCs will also capture the effects of measurement errors and omitted variables. The trick is to find a way of decomposing these coefficients into the biased and the bias-free components.

It is important to stress that while we start from a TVC model, and this technique is sometimes referred to as TVC estimation, the objective here is not simply to estimate a model with changing coefficients. We start from Equation 4 because this is a representation of the underlying data generation process, which is correct. This is the case simply because, if the coefficients can vary at each point in time, they are able to explain 100% of the variation in the dependent variable. In the case of the TVC procedure followed in this article, however, we then decompose these varying coefficients into two parts: a consistent estimate of the true structural partial derivative and the remaining part that is due to biases from the various misspecifications in the model. If the true model is linear, we would get back to constant partial derivatives. If the true model is nonlinear, the partial derivatives will vary with the model's variables and parameters, and the coefficients will then vary over time to reflect this circumstance. The key point is that the TVC technique used here produces consistent estimates of structural relationships in the presence of model misspecification.
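The claim that a linear-in-variables equation with TVCs can represent a nonlinear model exactly can be seen in a toy example (the exponential data-generating process below is an arbitrary assumption): choosing $\gamma_{1t}$ as the local derivative of the true function and letting the time-varying intercept absorb the remainder reproduces $y_t$ at every observation.

```python
# A TVC representation y_t = gamma_0t + gamma_1t * x_t of a nonlinear DGP.
# The DGP y = exp(0.5 x) is a hypothetical stand-in for an unknown true model.
import numpy as np

x = np.linspace(0.5, 3.0, 200)      # observed regressor
y = np.exp(0.5 * x)                 # "unknown" nonlinear data-generating process

gamma_1 = 0.5 * np.exp(0.5 * x)     # TVC = true partial derivative dy/dx, varies with x
gamma_0 = y - gamma_1 * x           # time-varying intercept absorbs the remainder

# The linear-in-variables TVC equation fits the nonlinear DGP exactly:
exact = np.allclose(gamma_0 + gamma_1 * x, y)
```

Here $\gamma_{1t}$ coincides with the structural partial derivative only because the decomposition was done by construction; the estimation problem discussed in the text is precisely how to separate that component from bias components when the true function is unknown.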

For empirical implementation, Equation 4 has to be embedded in a stochastic framework. To do so, we need to answer the question: What are the correct stochastic assumptions about the TVCs of Equation 4? We believe that the correct answer is as follows: The correct interpretation of the TVCs and the assumptions about them must be based on an understanding of the model misspecification which comes from any (i) omitted variables, (ii) measurement errors, and (iii) misspecification of the functional form. We expand on this argument in what follows.

Notation and Assumptions

Let $m_t$ denote the total number of the determinants of $y^*_t$. The exact value of $m_t$ cannot be known at any time. We assume that $m_t$ is larger than $K-1$ (that is, the total number of determinants is greater than the number of determinants for which we have observations) and possibly varies over time. (8) This assumption means that there are determinants of $y^*_t$ that are excluded from Equation 4, since Equation 4 includes only $K-1$ determinants. Let $x^*_{gt}$, $g = K, \ldots, m_t$, denote these excluded determinants. Let $\alpha^*_{0t}$ denote the intercept, and let $\alpha^*_{jt}$, $j = 1, \ldots, K-1$, and $\alpha^*_{gt}$, $g = K, \ldots, m_t$, denote the other coefficients of the regression of $y^*_t$ on all of its determinants. The true functional form of this regression determines the time profiles of the $\alpha^*$s. These time profiles are unknown, since the true functional form is unknown. Note that an equation that is linear in variables accurately represents a nonlinear equation, provided the coefficients of the former equation are time-varying with time profiles determined by the true functional form of the latter equation. This type of representation of a nonlinear equation is convenient, particularly when the true functional form of the nonlinear equation is unknown. Such a representation is not subject to the criticism of misspecified functional form. For $g = K, \ldots, m_t$, let $\lambda^*_{0gt}$ denote the intercept, and let $\lambda^*_{jgt}$, $j = 1, \ldots, K-1$, denote the other coefficients of the regression of $x^*_{gt}$ on $x^*_{1t}, \ldots, x^*_{K-1,t}$. The true functional forms of these regressions determine the time profiles of the $\lambda^*$s.

The following theorem gives the correct interpretations of the coefficients of Equation 4.

THEOREM 1. The intercept of Equation 4 satisfies the equation

$\gamma_{0t} = \alpha^*_{0t} + \sum_{g=K}^{m_t} \alpha^*_{gt} \lambda^*_{0gt} + v_{0t}$ (5)

and the coefficients of Equation 4 other than the intercept satisfy the equations

$\gamma_{jt} = \left(\alpha^*_{jt} + \sum_{g=K}^{m_t} \alpha^*_{gt} \lambda^*_{jgt}\right)\left(1 - \dfrac{v_{jt}}{x_{jt}}\right), \quad j = 1, \ldots, K-1.$ (6)

PROOF. See Swamy and Tavlas (2001, 2007).

Thus, we may interpret the TVCs in terms of the underlying correct coefficients, the observed explanatory variables, and their measurement errors. It should be noted that, by assuming that the $\lambda^*$s in Equations 5 and 6 are possibly nonzero, we do not require that the determinants of $y^*_t$ included in Equation 4 be independent of the determinants of $y^*_t$ excluded from Equation 4. Pratt and Schlaifer (1988, p. 34) show that this condition is "meaningless."

By the same logic, the usual exogeneity assumption, viz., the independence between a regressor and the disturbances of an econometric model is "meaningless" if the disturbances are assumed to represent the net effect on the dependent variable of the determinants of the dependent variable excluded from the model. The real culprit appears to be the interpretation that the disturbances of an econometric model represent the net effect on the dependent variable of the unidentified determinants of the dependent variable excluded from the model. In other words, if we make the classical econometric assumption that the error term is an i.i.d. process, then standard techniques go through in the usual way. If, however, we interpret the error term as a function of the misspecification of the model, then it becomes impossible to assert that it is conditionally independent of the included regressors and standard techniques such as instrumental variables are no longer consistent.

By assuming that the $\alpha^*$s and $\lambda^*$s are possibly time varying, we do not a priori rule out the possibility that the relationship of $y^*_t$ with all of its determinants, and the regressions of the determinants of $y^*_t$ excluded from Equation 4 on the determinants of $y^*_t$ included in Equation 4, are nonlinear. Note that the last term on the right-hand side of the equations in Equation 6 implies that the regressors of Equation 4 are correlated with their own coefficients. (9)

THEOREM 2. For $j = 1, \ldots, K-1$, the component $\alpha^*_{jt}$ of $\gamma_{jt}$ in Equation 6 is the direct or bias-free effect of $x^*_{jt}$ on $y^*_t$ with all the other determinants of $y^*_t$ held constant, and it is unique.

PROOF. It can be seen from Equation 6 that the component $\alpha^*_{jt}$ of $\gamma_{jt}$ is free of omitted-variables bias ($\sum_{g=K}^{m_t} \alpha^*_{gt}\lambda^*_{jgt}$), of measurement-error bias (the term involving $v_{jt}/x_{jt}$), and of functional-form bias, since we allow the $\alpha^*$s and $\lambda^*$s to have the correct time profiles. These biases are not unique, as they depend on which determinants of $y^*_t$ are excluded from Equation 4 and on the $v_{jt}$. Note that $\alpha^*_{jt}$ is the coefficient of $x^*_{jt}$ in the correctly specified relation of $y^*_t$ to all of its determinants. Hence $\alpha^*_{jt}$ represents the direct, or bias-free, effect of $x^*_{jt}$ on $y^*_t$ with all the other determinants of $y^*_t$ held constant, that is, the partial derivative of $y^*_t$ with respect to $x^*_{jt}$. The direct effect is unique because it represents a property of the real world that remains invariant against mere changes in the language we use to describe it (see Zellner 1979, 1988; Pratt and Schlaifer 1984, p. 13; Basmann 1988, p. 73). In effect, the direct effect is essentially simply a number and is therefore unique.

The direct effect $\alpha^*_{jt}$ is constant if the relationship between $y^*_t$ and the set of all of its determinants is linear; alternatively, it is variable if the relationship is nonlinear. We often have information from theory as to the right sign of $\alpha^*_{jt}$. Any observed correlation between $y_t$ and $x_{jt}$ is spurious if $\alpha^*_{jt} = 0$ (see Swamy, Tavlas, and Mehta 2007). (10)

A key implication of Equations 5 and 6 is that, in the presence of a misspecified functional form and omitted variables, the errors in a standard regression will contain the difference between the right-hand side of Equation 4 and the right-hand side of the standard regression with the errors suppressed; in particular, the errors will contain the explanatory variables, denoted by $x$, of the standard regression. This means that the orthogonality condition (of the form of Equation 3) of GMM and the conditions for the existence of instrumental variables cannot be met, as the errors contain exactly the same variables with which we require the instruments to be strongly correlated. In effect, if the instruments are highly correlated with the $x$ variables, they cannot be uncorrelated with the errors, as these errors contain exactly the same $x$ variables.
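This argument can be made concrete with a small simulation (all numbers hypothetical): when the disturbance itself contains the regressor, an instrument that is strong, in the sense of being highly correlated with $x$, is necessarily correlated with the error, and the IV estimator converges to a biased limit.

```python
# When misspecification puts the regressor into the error term, no valid
# instrument exists: strength and exogeneity become incompatible.
# All parameter values are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(2)
T = 200_000
z = rng.normal(size=T)                   # candidate instrument
x = z + 0.3 * rng.normal(size=T)         # "strong" instrument: corr(z, x) is high
u = 0.4 * x + rng.normal(size=T)         # but the error contains x itself
y = 1.0 * x + u                          # true structural coefficient is 1.0

b_iv = (z @ y) / (z @ x)                 # simple IV estimator
# plim of b_iv is 1.0 + 0.4 = 1.4, not 1.0: IV is inconsistent here,
# and a weaker instrument would reduce strength without restoring exogeneity.
```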

Swamy et al. (2008) provide a theoretical proof showing that this TVC methodology gives consistent estimates of the true parameters of interest underlying Equation 4 under a reasonable set of assumptions. This theoretical proof negates the need for a Monte Carlo experiment. To explain, suppose we generated artificial data from a model based around a CES production function and with an extra variable in the NKPC that is not normally present in the model. The standard GMM results would then be biased because of the misspecified functional form and omitted variables. Given the theoretical proof of consistency in the presence of these forms of misspecification for the TVC model, we know that the results from the TVC model would be consistent estimates of our two parameters of interest. (11) The Appendix shows how TVC estimation provides consistent information about the coefficients of Equation 4.

The NKPC and TVC Estimation

In "The NKPC Is a Misspecified Model," we argued that the NKPC is subject to a misspecified functional form, omitted variables, and measurement error. "A New Estimation Strategy" demonstrated that in the simultaneous presence of all three sources of misspecification, no valid instruments could exist for instrumental variables estimation. Therefore, it follows that in the case of the NKPC, GMM is not a consistent estimator. Thus, it is hardly surprising that some of the reported results are so poor. For example, in Gali and Gertler (1999) the Hansen J-statistic suggests that the instruments used are extremely poor, as we would expect from the aforementioned arguments. TVC estimation, however, shows that we may remove the bias component from the time-varying coefficients and get back to the unbiased underlying true effects. We can do this without fully specifying the set of the determinants of inflation and without knowing the correct functional form. We are, therefore, able to derive consistent estimators of the two parameters of interest: the coefficients on the expected inflation term and the marginal cost term.

Apart from the general theoretical problems with the NKPC outlined above, there are some specific reasons why, in the case of U.S. data, standard estimation would be problematic. During the past two decades, several interrelated factors appear to have contributed to a nonlinear structure (or, equivalently, a linear structure with changing coefficients) of the U.S. economy, including the following. First, there was a substantial fall in inflation in the 1990s and the first half of the 2000s, compared with the 1970s and early 1980s, reflecting the focus of monetary policy on achieving price stability; (12) increased globalization, which led to competitive pressures on prices; and an acceleration of productivity, beginning in the mid-1990s, that helped contain cost pressures. Second, the increased role of the services sector and an improved trend in productivity growth beginning in 1995 appear to have led to a changing non-accelerating inflation rate of unemployment (NAIRU), so that a given inflation rate has been associated with a lower unemployment rate in the late 1990s and early 2000s, compared with the 1970s (Sichel 2005, pp. 131-132). Third, a structural decline in business-cycle volatility appears to have occurred beginning in the mid-1980s (Gordon 2005). This decline has been attributed to such factors as the improved conduct of monetary policy and innovations in financial markets that allow for greater flexibility and dampen the real effects of shocks (Jermann and Quadrini 2006). The implication of these changes for estimation of econometric models was noted by Greenspan (2004, p. 38), who argued: "The economic world in which we function is best described by a structure whose parameters are continuously changing... An ongoing challenge to the Federal Reserve ... is to operate in a way that does not depend on a fixed economic structure based on historically ... [fixed] coefficients."

3. Data and Empirical Results

In this section, we contrast the results for some standard NKPC estimates with those obtained from the TVC approach. In the case of standard GMM results, we try to replicate (not to improve or correct) the findings often reported in the literature in order to demonstrate that the data we are using yield the usual results. We will then demonstrate that the TVC approach actually gives much stronger support to the standard NKPC models, although, of course, without assuming they are the entire story.

All the estimates reported below are based on quarterly U.S. data over the period 1970:1-2002:4, to compare with most of the literature. (13) We use two measures of expected inflation; the first is the projected change in the implicit gross domestic product (GDP) deflator, contained in the Fed's Federal Open Market Committee (FOMC) Greenbook. The Greenbook contains projections of inflation produced by the staff at the Federal Reserve Board. The projections measure the annualized quarter-to-quarter changes of the implicit price deflator up to 1996 and of the chain-weighted indices after that date. These projections are made available to the public after a lag of five years. The Greenbook forecasts appear to incorporate efficiently a large amount of information from all sectors of the economy as well as Fed officials' judgmental adjustments. The second measure of expected inflation used is the consensus group median forecasts of inflation from the Survey of Professional Forecasters (consensus forecasts). The Survey of Professional Forecasters, constructed by the Federal Reserve Bank of Philadelphia, has data on the expected annualized change in the implicit price deflator since 1970:1. The number of respondents changes somewhat with the quarter and the year in which the survey is run, and respondents are primarily members of the business community.

The other data are as follows. Inflation ([[??].sub.t]) is the quarterly percent change in the implicit GDP deflator. Real unit labor cost (ulc) is estimated using the deviation of the (log) of the labor income share from its average value; the labor income share is the ratio of total compensation of employees in the economy to nominal GDP. The consumer price index (CPI) inflation rate (used as an instrument) is the quarterly percent change in the CPI. (14) Wage inflation is the quarterly percent change in hourly earnings in manufacturing. The interest rate is the three-month t-bill rate. (15) Four coefficient drivers are chosen for use in TVC estimation. These are (i) a constant term; (ii) the change in the t-bill rate in period t-1; (iii) the change in the CPI inflation rate in period t-1; and (iv) the change in wage inflation in period t-1. The bias-free effects are estimated using the constant term and the change in the t-bill rate in period t-1. (16)

Our estimation procedure was the following. In line with much of the literature, we estimated a hybrid model using GMM, the results of which are used as a benchmark with which to compare the results based on TVC estimation. Our aim is to assess whether the results reported in the literature--namely, that the inclusion of lagged inflation is needed in the Phillips curve specification and that the coefficient on expected inflation, while significant, is well below unity (results typically based on GMM)--reflect specification biases. Given the possibility of measurement error in both of our measures of expected inflation, we use GMM estimation in the standard estimates. In an attempt to keep our GMM estimates as close to the standard literature as possible, we use a standard set of instruments in Equation 3: four lags of inflation, two lags of the real unit labor cost variable, four lags of CPI inflation, four lags of wage inflation, and the t-bill rate. The standard errors of the estimated parameters were modified using a Bartlett or quadratic kernel with variable Newey-West bandwidth. In addition, prewhitening was used. In all cases the J-statistic was used to test the overidentifying restrictions of the model (Greene 2003, p. 155).
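To make the GMM mechanics above concrete, the following is a minimal sketch of linear two-step GMM on simulated data. It is not the exact estimator used in the paper (it omits the Newey-West kernel and the prewhitening); the data-generating process, variable names, and parameter values are all hypothetical, chosen only so that expected inflation is measured with error and exogenous instruments are available.

```python
import numpy as np

rng = np.random.default_rng(0)

def gmm_iv(y, X, Z):
    """Two-step GMM for the linear moment conditions E[z_t(y_t - x_t'b)] = 0.

    Step 1 uses the 2SLS weighting matrix; step 2 re-weights with the
    inverse of the estimated covariance of the moment contributions.
    """
    n = len(y)
    XZ, Zy = X.T @ Z / n, Z.T @ y / n
    W = np.linalg.inv(Z.T @ Z / n)                   # step-1 weights
    b1 = np.linalg.solve(XZ @ W @ XZ.T, XZ @ W @ Zy)
    g = Z * (y - X @ b1)[:, None]                    # moment contributions
    W2 = np.linalg.inv(g.T @ g / n)                  # efficient step-2 weights
    return np.linalg.solve(XZ @ W2 @ XZ.T, XZ @ W2 @ Zy)

# Hypothetical data: inflation depends on true expected inflation and
# marginal cost, but expected inflation is observed with an error that
# is correlated with the structural shock, so it must be instrumented.
n = 5000
z1, z2 = rng.normal(size=n), rng.normal(size=n)      # exogenous instruments
u = rng.normal(size=n)                               # structural shock
e_pi_true = 0.8 * z1 + 0.5 * z2 + rng.normal(size=n)
e_pi_obs = e_pi_true + 0.7 * u                       # measured with error
ulc = rng.normal(size=n)                             # marginal cost proxy
pi = 0.97 * e_pi_true + 0.09 * ulc + u

X = np.column_stack([e_pi_obs, ulc])
Z = np.column_stack([z1, z2, ulc])                   # instrument set
b = gmm_iv(pi, X, Z)                                 # b[0] near 0.97, b[1] near 0.09
```

Because the measurement error in e_pi_obs is correlated with the structural shock, ordinary least squares would be biased here; the instruments restore consistency, which is precisely the rationale for GMM in this literature.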

Table 1 presents the empirical results. For both measures of expected inflation, the GMM results include highly significant lagged inflation effects. If these are not included, the marginal cost term ceases to be significant. The TVC results present a strong contrast to this finding. (17) In both cases the lagged inflation effect is insignificant (and in one case it is actually negative, which strongly confirms our view that the lagged effect does not belong in the equation). When this effect is removed from the equation, the coefficient on expected inflation becomes almost exactly 1 (1.005 and 0.978). In both cases the marginal cost terms are highly significant and, at 0.092 and 0.081, well within the range of standard findings. (18)

As mentioned in "The NKPC Is a Misspecified Model," [theta] is derived from the expression [[lambda].sub.1] = [(1 - [theta])(1 - [beta][theta])]/[theta], where [[lambda].sub.1] is the coefficient of unit labor costs and [beta] is the coefficient on expected inflation. In Table 1, the values of 1 - [theta] reported in column 3 (0.26 and 0.25 for the Greenbook-based and Consensus-based specifications, respectively) imply that about one-quarter of firms adjust their prices each quarter. Alternatively, the results indicate that it takes an average of four quarters for all firms in the economy to change their prices. These estimates are similar to the findings of Nakamura and Steinsson (2006), who quantify the stickiness of prices in the U.S. economy over the period 1998-2005 using data on prices on a large sample of individual items collected by the Bureau of Labor Statistics. Those authors find that the median duration of all prices (with sales excluded) was 10 months. (19)
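As a check on this arithmetic, the quadratic in [theta] implied by [[lambda].sub.1] = [(1 - [theta])(1 - [beta][theta])]/[theta] can be solved numerically. The sketch below (the function name is ours, not the paper's) plugs in the column-3 estimates from Table 1; the small discrepancy for the Consensus case presumably reflects rounding of the published coefficients.

```python
import numpy as np

def calvo_theta(lam1, beta):
    """Solve lam1 = (1 - theta) * (1 - beta * theta) / theta
    for the root theta in (0, 1).

    Rearranging gives: beta*theta**2 - (1 + beta + lam1)*theta + 1 = 0.
    """
    roots = np.roots([beta, -(1.0 + beta + lam1), 1.0])
    return [r.real for r in roots if abs(r.imag) < 1e-12 and 0 < r.real < 1][0]

# Column-3 (bias-free) estimates from Table 1
theta_gb = calvo_theta(0.092, 1.005)   # Greenbook specification
theta_cf = calvo_theta(0.081, 0.978)   # Consensus specification

print(round(1 - theta_gb, 3))          # ~0.262: about a quarter of firms reprice each quarter
print(round(1 - theta_cf, 3))          # ~0.240 (0.25 in the paper, likely from unrounded inputs)
print(round(1 / (1 - theta_gb), 1))    # ~3.8 quarters between price changes, on average
```

The reciprocal of 1 - [theta] gives the implied average duration between price adjustments, about four quarters, matching the statement in the text.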

These results are almost exactly as we would have expected. Given the theoretical approximations made in the formal derivation of the NKPC, our theory suggests that GMM is not a consistent estimation technique. We have applied the TVC estimation strategy and found parameter estimates for the effect of expected inflation that are much closer to our theoretical expectations, along with significant estimates of the effect of marginal costs, provided correct coefficient drivers are used to compute the bias-free effects. We would emphasize that we are not stating that this is the complete formulation of the Phillips curve. There may be other effects that are important. The TVC approach does not require a complete specification of the equation to derive consistent estimators of the structural effects considered.

4. Conclusions

This article has provided a clear-cut empirical experiment. Using GMM, we were able to replicate results typically found in the literature in which lagged inflation has a positive and significant coefficient in the NKPC framework, producing a hybrid NKPC. Under GMM, incorporating lagged inflation and, alternatively, one of two measures of expected inflation in the Phillips relation, the coefficients on the lagged inflation variable and expected inflation sum to near unity, yielding a long-run vertical Phillips relation. Are these results spurious? TVC estimation provides a method of addressing this question. The TVC procedure is more general than other approaches; it produces consistency under a variety of sources of misspecification. The TVC results strongly suggest that the role found by previous researchers for lagged inflation in the NKPC is the spurious outcome of specification biases. Moreover, the results are not dependent on a particular measure of inflation expectations or sample period. Each of the measures used provided a similar set of results.

This finding can have significant policy implications; the correct setting of monetary policy requires a clear understanding of the dynamics of inflation. The results provided here imply that inflation is much less sluggish and persistent than the standard finding might suggest. This would mean that the path of interest rates needed to optimally combat shocks to inflation would be substantially different from that implied by the conventional results. In conclusion, this article offers strong support to the standard microfounded theory that lies behind the NKPC, and this has important implications for monetary policy.

Appendix

The direct or bias-free effects [[alpha].sup.*.sub.jt] are the true parameters underlying Equation 4. These effects appear as components of the coefficients of the non-constant explanatory variables of Equation 4. Therefore, the only method we can use to estimate these effects is to decompose the coefficients of Equation 4 into their components in Equation 6. One of two complications that arise in this decomposition is that the explanatory variables of Equation 4 are not unconditionally independent of their coefficients. The other complication is that the time profiles of these coefficients are unknown. To resolve these complications, we make the following assumptions:

Assumption 1:

[[gamma].sub.jt] = [[summation].sup.p.sub.h=0] [[pi].sub.jh][z.sub.ht] + [[zeta].sub.jt], j = 0, 1, ..., K - 1,

where [z.sub.0t] = 1 for all t; the [z.sub.ht], h = 0, 1, ..., p, are called the "coefficient drivers" that explain the variations in the coefficients of Equation 4; the [[pi].sub.jh]s are fixed coefficients; and ([[zeta].sub.0t], [[zeta].sub.1t], ..., [[zeta].sub.K-1,t])' follows a first-order autoregressive process.

We make the plausible assumption that changes in the policy variables made in the periods prior to period t may help explain the variations in the coefficients of Equation 4. Therefore, these changes are used as the coefficient drivers. The number p is determined to reduce the variance of [[zeta].sub.jt] to a small number so that the coefficient drivers included in Assumption 1 explain most of the variation in [[gamma].sub.jt]. Since Assumption 1 and Equation 6 are the equations for the same [[gamma].sub.jt], each term on the right-hand side of Assumption 1 can go into one of the three terms on the right-hand side of Equation 6. Therefore, the p + 1 coefficient drivers in Assumption 1 can divide into two disjoint sets [S.sub.1] and [S.sub.2] such that

Assumption 2:

[[alpha].sup.*.sub.jt] = [summation over (h[member of] [S.sub.1])] [[pi].sub.jh][z.sub.ht]

Assumption 3:

The sum of the omitted-variable bias and measurement-error bias components of [[gamma].sub.jt] in Equation 6 equals [summation over (h[member of] [S.sub.2])] [[pi].sub.jh][z.sub.ht] + [[zeta].sub.jt].

Assumption 4:

The explanatory variables of Equation 4 are conditionally independent of their coefficients, given the coefficient drivers.

Substituting the right-hand side of Assumption 1 for [[gamma].sub.jt] in Equation 4 gives

[y.sub.t] = [[summation].sup.p.sub.h=0] [[pi].sub.0h][z.sub.ht] + [[summation].sup.K-1.sub.j=1] ([[summation].sup.p.sub.h=0] [[pi].sub.jh][z.sub.ht])[x.sub.jt] + [[zeta].sub.0t] + [[summation].sup.K-1.sub.j=1] [[zeta].sub.jt][x.sub.jt] (A1)

This equation is like any other regression model, but the interpretations of its coefficients are different from those of the coefficients of regression models. The first two terms and the last two terms on the right-hand side of Equation A1 are called "the regression part" and "the error part," respectively.
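A stylized simulation may help fix ideas. Below, a single coefficient driver generates the time variation in the coefficient on [x.sub.t]; regressing [y.sub.t] on [x.sub.t] and the interaction of [x.sub.t] with the driver (the form of Equation A1) recovers the constant component of the coefficient, which plays the role of the bias-free effect here. Ordinary least squares stands in for IRSGLS, which we do not implement, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3000

# One coefficient driver z_t (think of a lagged policy-variable change)
# and one explanatory variable x_t that may co-move with the driver.
z = rng.normal(size=n)
x = rng.normal(size=n) + 0.5 * z

# Time-varying coefficient: gamma_t = pi0 + pi1 * z_t + zeta_t.
# With S1 = {constant}, pi0 plays the role of the bias-free effect,
# while pi1 * z_t + zeta_t absorbs the bias components (S2 = {z_t}).
pi0, pi1 = 0.9, 0.4
zeta = 0.1 * rng.normal(size=n)
y = (pi0 + pi1 * z + zeta) * x + 0.1 * rng.normal(size=n)

# Substituting the driver equation into y_t = gamma_t * x_t gives the
# analogue of Equation A1: a regression on x_t and the interaction
# x_t * z_t, whose error part is zeta_t * x_t plus noise.
D = np.column_stack([np.ones(n), x, x * z])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
bias_free_effect = coef[1]   # estimate of pi0 (close to 0.9)
driver_component = coef[2]   # estimate of pi1 (close to 0.4)
```

The interaction term is what separates the constant (bias-free) component of the coefficient from its driver-related variation; dropping it would fold the two together.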

Swamy et al. (2008) prove the following:

(i) The coefficients and the combination of the errors [[zeta].sub.0t] + [[summation].sup.K-1.sub.j=1] [[zeta].sub.jt][x.sub.jt] of Equation A1 are identifiable, and the coefficients are consistently estimable. An iteratively rescaled generalized least squares (IRSGLS) method can be used to estimate Equation A1. Cavanagh and Rothenberg (1995) provide sufficient conditions for the consistency and asymptotic normality of the IRSGLS estimators of the coefficients of Equation A1. Under these conditions and Assumptions 1-4, the IRSGLS estimator of [summation over (h[member of] [S.sub.1])] [[pi].sub.jh][z.sub.ht] is the consistent and asymptotically normal estimator of the bias-free direct effect [[alpha].sup.*.sub.jt], or the partial derivative of the correctly measured dependent variable [y.sup.*.sub.t] with respect to the correctly measured explanatory variable [x.sup.*.sub.jt]. Under Assumptions 1-4, the IRSGLS estimate of [[alpha].sup.*.sub.jt] is not distorted by omitted-variable and measurement-error biases. The correlation between [y.sub.t] and [x.sub.jt] is spurious if [[alpha].sup.*.sub.jt] = 0. The IRSGLS estimate of [[alpha].sup.*.sub.jt] can be used to test whether this correlation is spurious.

(ii) It can be seen that the econometrician's instrumental variables that are highly correlated with the regression part and uncorrelated with the error part of Equation A1 cannot exist, because the [x.sub.jt]s are common to both of these parts. Consequently, the GMM estimates of the coefficients of Equation A1 are inconsistent.

(iii) If we believe that Assumptions 1-4 are not exactly true but probably true, then the Bayesian model averaging methods given in Swamy et al. (2008) can be used to draw inferences about the [[alpha].sup.*.sub.jt].

Thus, unlike the GMM estimators, the IRSGLS estimators of the coefficients of Equation A1 can give very important consistent information about the true coefficients underlying Equation 4.

By convention when we conduct a Monte Carlo experiment we analyze the procedure under investigation assuming that its underlying assumptions are true. In the case of the TVC procedure, this would imply that our experimental design would ensure that Assumptions 1-4 hold. In this case we know, given the theoretical results noted above, that the TVC procedure would produce consistent estimates of the true parameters while the assumptions of GMM would be violated, and hence the GMM estimates of the parameters would be inconsistent.

Received July 2008; accepted January 2009.

References

Basmann, Robert L. 1988. Causality tests and observationally equivalent representations of econometric models. Journal of Econometrics 39:69-104.

Batini, Nicoletta, Brian Jackson, and Stephen Nickell. 2005. An open economy new Keynesian Phillips curve for the U.K. Journal of Monetary Economics 52:1061-71.

Calvo, Guillermo. 1983. Staggered prices in a utility-maximizing framework. Journal of Monetary Economics 12:383-98.

Cavanagh, Christopher L., and Thomas J. Rothenberg. 1995. Generalized least squares with non-normal errors. In Advances in Econometrics and Quantitative Economies, edited by G. S. Maddala, Peter C. B. Phillips, and T. N. Srinivasan. Oxford, UK: Blackwell, pp. 276-90.

Chang, I-Lok, Charles Hallahan, and P. A. V. B. Swamy. 1992. Efficient computation of stochastic coefficient models. In Computational Economics and Econometrics, edited by Hans M. Amman, David A. Belsley, and Louis F. Pau. London: Kluwer Academic Publishers, pp. 43-53.

Chang, I-Lok, P. A. V. B. Swamy, Charles Hallahan, and George S. Tavlas. 2000. A computational approach to finding causal economic laws. Computational Economics 16:105-36.

Christiano, Laurence, Martin Eichenbaum, and Charles Evans. 2005. Nominal rigidities and the dynamic effects of a shock to monetary policy. Journal of Political Economy 113:1-45.

Dellas, Harris. 2006a. Monetary shocks and inflation dynamics in the New Keynesian model. Journal of Money, Credit, and Banking 38:543-51.

Dellas, Harris. 2006b. Inflation inertia in the New Keynesian model. Mimeo, University of Bern.

Del Negro, Marco, and Frank Schorfheide. 2004. Priors from general equilibrium models for VARs. International Economic Review 45:643-73.

Fuhrer, Jeff C. 1997. The (un)importance of forward-looking behavior in price setting. Journal of Money, Credit, and Banking 29:338-50.

Gali, Jordi, and Mark Gertler. 1999. Inflation dynamics: A structural econometric approach. Journal of Monetary Economics 44:195-222.

Gali, Jordi, Mark Gertler, and J. David Lopez-Salido. 2005. Robustness of the estimates of the hybrid New Keynesian Phillips curve. Journal of Monetary Economics 52:1107-18.

Greene, William H. 2003. Econometric analysis. 5th edition. Upper Saddle River, NJ: Prentice Hall.

Gordon, Roger. 2005. What caused the decline in U.S. business cycle volatility? NBER Working Paper No. 11777.

Granger, Clive W. J., and Paul Newbold. 1974. Spurious regressions in econometrics. Journal of Econometrics 2:111-20.

Greenspan, Alan. 2004. Risk and uncertainty in monetary policy. American Economic Review, Papers and Proceedings 94:33-40.

Hondroyiannis, George, P. A. V. B. Swamy, and George S. Tavlas. 2009. A note on the new Keynesian Phillips curve in a time-varying coefficient environment: Some European evidence. Macroeconomic Dynamics 13:149-66.

Jermann, Urban, and Vincenzo Quadrini. 2006. Financial innovations and macroeconomic volatility. NBER Working Paper No. 12308.

Linde, Jesper. 2005. Estimating New-Keynesian Phillips curves: A full information maximum likelihood approach. Journal of Monetary Economics 52:1135-52.

Mankiw, N. Gregory. 2001. The inexorable and mysterious trade-off between inflation and unemployment. The Economic Journal 111:C45-C61.

McCallum, Bennett T. 1999. Recent developments in monetary policy analysis: The roles of theory and evidence. Journal of Economic Methodology 6:171-98.

Nakamura, Emi, and Jon Steinsson. 2006. Five facts about prices: A reevaluation of menu cost models. Unpublished, Harvard University.

Pratt, John W., and Robert Schlaifer. 1984. On the nature and discovery of structure. Journal of the American Statistical Association 79:9-22.

Pratt, John W., and Robert Schlaifer. 1988. On the interpretation and observation of laws. Journal of Econometrics 39:23-52.

Roberts, John M. 1997. Is inflation sticky? Journal of Monetary Economics 39:173-96.

Rudebusch, Glenn D. 2002. Assessing nominal income rules for monetary policy with model and data uncertainty. Economic Journal 112:402-32.

Rudd, Jeremy, and Karl Whelan. 2005. New tests of the New Keynesian Phillips curve. Journal of Monetary Economics 52:1167-81.

Sichel, Daniel E. 2005. Where did the productivity growth go? Inflation dynamics and the distribution of income: Comments. Brookings Papers on Economic Activity 2:128-35.

Sims, Christopher A. 2008. Improving monetary policy models. Journal of Economic Dynamics and Control 32:2460-75.

Swamy, P. A. V. B., and George S. Tavlas. 1995. Random coefficient models: Theory and applications. Journal of Economic Surveys 9:165-82.

Swamy, P. A. V. B., and George S. Tavlas. 2001. Random coefficient models. In A companion to theoretical econometrics, edited by Badi H. Baltagi. Malden: Blackwell, pp. 410-28.

Swamy, P. A. V. B., and George S. Tavlas. 2007. The new Keynesian Phillips curve and inflation expectations: Re-specification and interpretation. Economic Theory 31:293-306.

Swamy, P. A. V. B., George S. Tavlas, and Jatinder S. Mehta. 2007. Methods of distinguishing between spurious regressions and causality. Journal of Statistical Theory and Applications 1:83-96.

Swamy, P. A. V. B., George S. Tavlas, Stephen G. Hall, and George Hondroyiannis. 2008. Estimation of parameters in the presence of model misspecification and measurement error. Mimeo.

Taylor, John B. 1999. Staggered price and wage setting in macroeconomics. In Handbook of macroeconomics, edited by John Taylor and Michael Woodford. Amsterdam: North Holland, pp. 1009-50.

Walsh, Carl E. 2003. Monetary theory and policy. 2nd ed. Cambridge, MA: MIT Press.

Woodford, Michael. 2003. Interest and prices. Princeton: Princeton University Press.

Zellner, Arnold. 1979. Causality and econometrics. In Three Aspects of Policy and Policymaking, edited by Karl Brunner and Alan H. Meltzer. Amsterdam: North-Holland, pp. 9-54.

Zellner, Arnold. 1988. Causality and causal laws in economics. Journal of Econometrics 39:7-21.

Stephen G. Hall,* George Hondroyiannis, [dagger] P. A. V. B. Swamy, [double dagger] and G. S. Tavlas[section]

* Leicester University and Bank of Greece, Room Astley Clarke 116, University Road, Leicester, LE1 7RH, UK; E-mail [email protected].

[dagger] Bank of Greece and Harokopio University, 21 E. Venizelos Ave. 102 50 Athens, Greece; E-mail ghondroyiannis@ bankofgreece.gr.

[double dagger] Retired from Federal Reserve Board, Washington, DC, 6333 Brocketts Crossing, Kingstowne, VA 22315; E-mail [email protected].

[section] Economic Research Department, Bank of Greece, 21 El. Venizelos Ave. 102 50. Athens, Greece; Tel. ++30210 320 2370; Fax: ++30210 320 2432; E-mail [email protected]; corresponding author.

We thank Peter von zur Muehlen and Arnold Zellner for helpful comments. The comments of two anonymous referees were extremely constructive. The views expressed are those of the authors and should not be interpreted as those of their respective institutions.

(1) Roberts (1997), however, provides evidence suggesting that inflation is not sticky.

(2) Not all researchers have obtained large estimates of lagged inflation. Gali, Gertler, and Lopez-Salido (2005) find that the coefficient of lagged inflation, while significant, was quantitatively modest (that is, generally on the order of 0.35 to 0.37).

(3) Swamy et al. (2008) in turn draw on papers by Chang, Hallahan, and Swamy (1992), Swamy and Tavlas (1995, 2007), and Chang et al. (2000).

(4) An alternative derivation that does not rely on ad hoc assumptions for inclusion of lagged inflation effects is due to Christiano, Eichenbaum, and Evans (2005). Their derivation essentially rests on assuming that a fraction of firms do not re-optimize their prices each period for one reason or another. At one level the results in this paper could be seen as a test of the relative merits of the two theoretical approaches.

(5) The coefficients and the error term of Equation 1 are not unique because [beta], [[lambda].sub.1], and [[eta].sub.0t] can be changed without changing Equation 1 (Pratt and Schlaifer 1984, p. 13).

(6) Sims (2008) makes a powerful criticism of current Dynamic Stochastic General Equilibrium (DSGE) modeling practices and single equation estimation of relationships such as the Phillips Curve. He argues for a Bayesian probabilistic approach to modeling that involves system estimation and allows for both model uncertainty and measurement error. In this connection, Del Negro and Schorfheide (2004) use a simple New Keynesian monetary DSGE model as a prior for vector autoregression and show that the resulting model was competitive with standard benchmarks in terms of forecasting and could be used for policy analysis. While interesting, this approach is well beyond the literature that we are addressing here. Our view is that the TVC method used here addresses many of the issues raised by Sims, although in a different way.

(7) The discussion in the following subsection draws on Swamy et al. (2008).

(8) That is, the number of determinants is itself time-variant.

(9) These correlations are typically ignored in the analyses of state-space models. Thus, inexpressive conditions and restrictive functional forms are avoided in arriving at Equations 5 and 6 so that Theorem 1 can easily hold; for further discussion and interpretation of the terms in Equations 5 and 6, see Swamy and Tavlas (2001, 2007) and Hondroyiannis, Swamy, and Tavlas (2009).

(10) We use the term "spurious" in a more general sense than Granger and Newbold (1974), where it strictly applies to linear models with non-stationary error terms. Here we mean any correlation that is observed between two variables when the true direct effect of one variable on the other is actually zero.

(11) We should point out that the small sample properties of the TVC method for the NKPC are not yet explored, and it remains outside the scope of this article to undertake such an investigation.

(12) Greenspan (2004) argues that this focus reflects increased political support for stable prices, which was a consequence of, and reaction to, the unprecedented peacetime inflation of the 1970s.

(13) Estimation was also carried out using data up to 2007; the results were very similar, so they are not reported here. In the longer sample we also used actual future inflation as a measure of expected inflation and again the results did not change significantly.

(14) Apart from the Greenbook forecasts, the source of the foregoing data is the Datastream OECD Economic Outlook.

(15) The data on wages and the t-bill rate are from the International Financial Statistics (IFS).

(16) For further details on the use of coefficient drivers see the Appendix.

(17) The TVC results report the average coefficient over the sample once the bias has been removed.

(18) For example, in the nine regressions reported by Gali and Gertler (1999, p. 216), the coefficients on marginal costs ranged from 0.020 to 0.913, with a median estimate of 0.054. See also Rudd and Whelan (2005).

(19) Survey evidence reported by Taylor (1999) indicates that price changes occur, on average, every four quarters in the U.S. economy. Using their estimates of the coefficients in their NKPC specification, Gali and Gertler (1999) estimate that prices are fixed on average for five to six quarters, an estimate that the authors note is "perhaps on the high side" (1999, p. 209).
Table 1. Estimation of NKPC for USA 1970:1-2002:4

                                            GMM (1)             TVC Bias-Free       TVC Bias-Free
                                                                Effect (2)          Effect (3)

Panel A: Greenbook forecast-based specification

  Greenbook forecast of [[??].sub.t+1]      0.820 *** [10.69]   0.933 *** [9.60]    1.005 *** [9.94]
  [ulc.sub.t] (marginal costs)              0.061 *** [3.45]    0.056 *** [2.84]    0.092 *** [7.22]
  [[??].sub.t-1]                            0.378 *** [8.07]    0.068 [0.74]        -
  1 - [theta]                               0.16                0.19                0.26
  [[??].sup.2]                              0.83                0.99                0.99
  J-test                                    0.93                -                   -

Panel B: Consensus forecasts-based specification

  Consensus forecast of [[??].sub.t+1]      0.653 *** [9.49]    1.003 *** [8.19]    0.978 *** [31.96]
  [ulc.sub.t] (marginal costs)              0.088 *** [5.37]    0.074 ** [5.05]     0.081 ** [6.53]
  [[??].sub.t-1]                            0.319 *** [6.87]    -0.004 [-0.03]      -
  1 - [theta]                               0.16                0.24                0.25
  [[??].sup.2]                              0.83                0.99                0.99
  J-test                                    0.93                -                   -

Figures in brackets are t-statistics. The estimates in columns 2 and 3 are obtained using four coefficient drivers: a constant term, the change in the t-bill rate in period t - 1, the change in the CPI inflation rate in period t - 1, and the change in wage inflation in period t - 1. The bias-free effects are estimated using the constant term and the change in the t-bill rate in the previous period. For further details on the use of coefficient drivers see the Appendix. *** and ** indicate significance at the 1% and 5% levels, respectively.