Solving for Non-linear Transitions: Forward Guidance under ZLB (McKay, Nakamura, Steinsson, 2016)

In this example, we solve the one-asset HANK model of McKay, Nakamura, and Steinsson (2016), who study forward guidance about monetary policy in a liquidity trap (when the zero lower bound on the nominal interest rate binds). The liquidity trap is driven by a positive shock to the common discount factor of households. Forward guidance is modeled as the central bank’s commitment to keep the nominal interest rate at zero for additional periods after the economy would otherwise have exited the zero lower bound.

The example demonstrates the toolbox’s capability to solve highly non-linear models. In particular, the equilibrium system in the liquidity trap differs sharply from the one outside it, which poses challenges for traditional solution methods.

We follow the parameterization and notation of their original paper.

The Model

Households’ decision problems

Households are heterogeneous in their labor efficiency \(e\), which follows a stationary Markov chain, and in their real bond holdings \(b\). They solve the Bellman equation:

\[\begin{split} V_t(e,b) = \max_{c,n,b'} \frac{c^{1-\gamma}}{1-\gamma} - \chi \frac{n^{1+1/\nu}}{1+1/\nu} + \beta_t \mathbb{E}[V_{t+1}(e',b')|e] \\ \text{s.t.} \quad c+\frac{b'}{1+r_t}=b + w_t e n - \tau_t \bar{\tau}(e) + D_t \\ b' \geq 0, \quad n\geq 0, \end{split}\]

where \(r_t\) is the real interest rate, \(w_t\) the wage rate, \(\tau_t\) the tax level, and \(D_t\) the profits of firms. \(\bar{\tau}(e)\) takes the value 0 or 1 and indicates whether a household with labor efficiency type \(e\) is taxed. Households choose consumption \(c\), labor supply \(n\), and future bond holdings \(b'\), subject to the no-borrowing constraint \(b' \geq 0\).
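For reference, the interior first-order conditions of this problem take the standard form: the intratemporal condition equates the marginal disutility of labor to the marginal utility value of the effective wage, and the Euler equation holds with complementary slackness on the borrowing constraint,

\[\begin{split} \chi n^{1/\nu} = c^{-\gamma} w_t e \\ c^{-\gamma} \geq \beta_t (1+r_t) \mathbb{E}[c'^{-\gamma}|e], \quad \text{with equality if } b' > 0. \end{split}\]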

Denote by \(\Gamma_t\) the measure representing the distribution over household states, and by \(g_{c,t},g_{n,t},g_{b',t}\) the policy functions.

The New-Keynesian block

Intermediate goods firms use labor to produce differentiated goods, which are aggregated into a final good using a CES aggregator. Intermediate goods firms engage in monopolistic competition and face price adjustment frictions à la Calvo. The New Keynesian Phillips curve block is characterized by the following system (see their paper or any standard monetary textbook for the derivations):

\[\begin{split} Y_tS_t = z_t N_t \\ S_{t} =\theta \left(\frac{p_{t}^*}{P_{t}}\right)^{\mu / (1-\mu)}+\left(\frac{1}{1+\pi_{t}}\right)^{\mu / (1-\mu)} (1-\theta)S_{t-1} \\ 1+\pi_{t} =\left( \frac{1-\theta}{1- \theta\left( \frac{p_{t}^*}{P_{t}}\right)^{\frac{1}{1-\mu}}}\right)^{1-\mu} \\ \frac{p_t^*}{P_t} = \frac{P_t^A}{P_t^B} \\ P_t^A =\mu\frac{w_{t}}{z_{t}}Y_{t} + (1-\theta)\beta (1+\pi_{t+1})^{-\mu/(1-\mu)}P_{t+1}^A \\ P_t^B =Y_{t} + (1-\theta)\beta (1+\pi_{t+1})^{-1/(1-\mu)}P_{t+1}^B, \end{split}\]

where \(\mu\) is the gross markup and \(\theta\) is the probability that a firm gets to adjust its price in a given period. \(Y_t\) is aggregate output, \(S_t\) is the dispersion of prices, \(N_t\) is aggregate labor demand, \(z_t\) is exogenous labor productivity, \(\pi_t\) is the inflation rate, \(p_t^*\) is the optimal reset price of adjusting firms, and \(P_t^A,P_t^B\) are auxiliary variables that facilitate the recursive representation of the system.
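To connect with textbook notation, note that the exponents are expressed in terms of the markup: writing \(\mu = \varepsilon/(\varepsilon-1)\), where \(\varepsilon\) is the elasticity of substitution across varieties,

\[ \frac{\mu}{1-\mu} = -\varepsilon, \qquad \frac{1}{1-\mu} = 1-\varepsilon, \]

so, for example, the price dispersion recursion is the familiar \(S_t = \theta (p_t^*/P_t)^{-\varepsilon} + (1-\theta)(1+\pi_t)^{\varepsilon} S_{t-1}\).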

The monetary authority implements a Taylor rule subject to a zero lower bound on the nominal interest rate:

\[ i_t = \max\{i_{0} + \phi \pi_t,0\} \]

Since there is no aggregate uncertainty, the Fisher equation holds:

\[ 1+r_t=\frac{1+i_t}{1+\pi_{t+1}}. \]
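In particular, when the zero lower bound binds (\(i_t = 0\)), the Fisher equation pins down the real rate as

\[ 1+r_t=\frac{1}{1+\pi_{t+1}}, \]

so expected deflation raises the real interest rate. This is the mechanism through which the liquidity trap becomes contractionary.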

The government sets taxes \(\tau_t\) to finance exogenous spending and interest payments on bonds:

\[ \tau_t =\frac{B_t + G_t - \frac{B_{t+1}}{1+r_{t}}}{\int \bar{\tau}(e) \ \mathrm{d}\Gamma_t}, \]

where \(B_t\) is the stock of government bonds outstanding at the beginning of period \(t\) and \(G_t\) is government spending.

In their benchmark specification, the fiscal rule keeps the supply of government bonds fixed, \(B_{t+1}=\overline{B}\).
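With a fixed bond supply \(B_{t+1}=B_t=\overline{B}\), the tax formula above reduces to

\[ \tau_t =\frac{G_t + \frac{r_t}{1+r_t}\overline{B}}{\int \bar{\tau}(e) \ \mathrm{d}\Gamma_t}, \]

that is, taxes cover government spending plus the interest burden on the outstanding debt.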

Market clearing conditions

Bonds market clearing:

\[ B_{t+1}=\int g_{b',t}(e,b) \ \mathrm{d}\Gamma_t \]

Labor market clearing:

\[ N_t = \int eg_{n,t}(e,b) \ \mathrm{d}\Gamma_t \]

and goods market clearing:

\[ \int g_{c,t}(e,b) \ \mathrm{d} \Gamma_t + G_t = Y_t. \]

Definition of equilibrium

Given an initial distribution over household states, \(\Gamma_0\), a sequential competitive equilibrium is a sequence of (1) distributions over household states \(\{\Gamma_t\}_{t=0}^{\infty}\), (2) households’ value and policy functions \(\{V_t,g_{c,t},g_{n,t},g_{b',t}\}_{t=0}^{\infty}\), and (3) aggregate quantities and prices \(\{w_t,Y_t,S_t,N_t,D_t,\pi_t,\frac{p_t^*}{P_t},P_t^A,P_t^B,r_t,i_t,\tau_t\}_{t=0}^{\infty}\) such that the value and policy functions solve the households’ Bellman equation given prices, each \(\Gamma_{t+1}\) is induced by \(\Gamma_t\), the policy function \(g_{b',t}\), and the Markov chain for \(e\), and the following aggregate conditions hold:

\[\begin{split} Y_t S_t= z_t N_t \\ D_t=Y_t-w_tN_t \\ S_{t} =\theta \left(\frac{p_{t}^*}{P_{t}}\right)^{\mu / (1-\mu)}+\left(\frac{1}{1+\pi_{t}}\right)^{\mu / (1-\mu)} (1-\theta)S_{t-1} \\ 1+\pi_{t} =\left( \frac{1-\theta}{1- \theta\left( \frac{p_{t}^*}{P_{t}}\right)^{\frac{1}{1-\mu}}}\right)^{1-\mu} \\ \frac{p_t^*}{P_t} = \frac{P_t^A}{P_t^B} \\ P_t^A =\mu\frac{w_{t}}{z_{t}}Y_{t} + (1-\theta)\beta (1+\pi_{t+1})^{-\mu/(1-\mu)}P_{t+1}^A \\ P_t^B =Y_{t} + (1-\theta)\beta (1+\pi_{t+1})^{-1/(1-\mu)}P_{t+1}^B \\ 1+r_t=\frac{1+i_t}{1+\pi_{t+1}} \\ i_t = \max\{i_{0} + \phi \pi_t,0\} \\ \tau_t =\frac{B_t + G_t - \frac{B_{t+1}}{1+r_{t}}}{\int \bar{\tau}(e) \ \mathrm{d}\Gamma_t} \\ N_t = \int e g_{n,t}(e,b) \ \mathrm{d}\Gamma_t \\ \overline{B}=\int g_{b',t}(e,b) \ \mathrm{d}\Gamma_t \end{split}\]

with the goods market clearing condition implied by Walras’s law. The system can be further simplified; see the hmod file below, which defines the simplified system of equations.

The hmod File

The model can be represented by an hmod file, hank1.hmod, listed below:

parameters beta gamma nu chi w r D tau;
beta = 0.986;
gamma = 2;
nu = 0.5;
chi = 1;

var_shock e taxed;
shock_trans = [
    0.9663    0.0334    0.0003
    0.0167    0.9666    0.0167
    0.0003    0.0334    0.9663
    ];
e = [0.49008,1,2.0405];
taxed = [0, 0, 1];

var_state b;
b_min = 0.0;
b_max = 50.0;
b_pts = 201;
b_shift = 0.1;
b = exp(linspace(log(b_min+b_shift),log(b_max+b_shift),b_pts)) - b_shift;

var_pre_vfi budget_n1;
budget_n1 = b + w*e - tau*taxed + D;

var_policy c bp n;
initial c budget_n1;
initial bp 0;
initial n 1;

var_aux ne;

vfi;
    u = c^(1-gamma)/(1-gamma) - chi*n^(1+1/nu)/(1+1/nu);
    Tv = u + beta*EXPECT(v(bp));
    c + bp/(1+r) == b + w*e*n - tau*taxed + D;
    ne = n*e;
    c >= 1e-8;
    bp >= b_min;
    n >= 0;
end;

var_agg Y pii w S PA PB;
mu = 1.2;           % markup
theta = 0.15;       % prob staggered price
pop_taxed = 0.25;
ii0 = 0.005;
G = 0;
phi = 1.5;
z = 1.0;            % labor productivity

% bond level affects steady state interest rate
% see how it is calibrated in model_cali
B = 5.6;

% initial
N = 1;

var_agg_shock beta m_shock;
m_shock = 0.0;

model;
    N = Y*S / z;
    D = Y-w*N;
    ii = max(ii0 + phi*pii + m_shock,0);
    r = (1+ii) / (1+pii(+1)) - 1;
    tau = (B+G-B/(1+r)) / pop_taxed;
    pstar = PA/PB;

    S == theta*pstar^(mu/(1-mu)) + (1/(1+pii))^(mu/(1-mu))*(1-theta)*S(-1);
    1+pii == ( (1-theta)/(1-theta*(pstar)^(1/(1-mu))) )^(1-mu);
    PA == mu*w/z*Y + (1-theta)*beta*(1+pii(+1))^(-mu/(1-mu))*PA(+1);
    PB == Y + (1-theta)*beta*(1+pii(+1))^(-1/(1-mu))*PB(+1);
    N == ne;
    B == bp;
end;

% a different block for calibration; arguments to overwrite aggregate unknowns (var_agg)
model_cali(N, B);
    Y = z*N;
    w = z/mu;
    D = Y - w*N;
    r = ii0;
    tau = (B+G-B/(1+r)) / pop_taxed;

    N == ne;
    B == bp;

    PA = mu*w/z*Y / (1-(1-theta)*beta);
    PB = Y / (1-(1-theta)*beta);
    S = 1;
    pii = 0;
end;

The equilibrium system now involves non-trivial market clearing conditions, but it can nevertheless be represented as a system of equations. Declare the sequence of aggregate unknowns in var_agg and define the same number of equilibrium equations, marked with “==”, in the model block (the relevant part of the hmod file is repeated below):

var_agg Y pii w S PA PB;
mu = 1.2;           % markup
theta = 0.15;       % prob staggered price
pop_taxed = 0.25;
ii0 = 0.005;
G = 0;
phi = 1.5;
z = 1.0;            % labor productivity

% bond level affects steady state interest rate
% see how it is calibrated in model_cali
B = 5.6;

% initial
N = 1;

var_agg_shock beta m_shock;
m_shock = 0.0;

model;
    N = Y*S / z;
    D = Y-w*N;
    ii = max(ii0 + phi*pii + m_shock,0);
    r = (1+ii) / (1+pii(+1)) - 1;
    tau = (B+G-B/(1+r)) / pop_taxed;
    pstar = PA/PB;

    S == theta*pstar^(mu/(1-mu)) + (1/(1+pii))^(mu/(1-mu))*(1-theta)*S(-1);
    1+pii == ( (1-theta)/(1-theta*(pstar)^(1/(1-mu))) )^(1-mu);
    PA == mu*w/z*Y + (1-theta)*beta*(1+pii(+1))^(-mu/(1-mu))*PA(+1);
    PB == Y + (1-theta)*beta*(1+pii(+1))^(-1/(1-mu))*PB(+1);
    N == ne;
    B == bp;
end;

One can define an alternative model block that differs from the main model block. For example, the following block in the script file defines the calibration, which takes different unknowns and a different system of equations. Here, the calibration is done at the steady state, so the calibration system can be reduced to two equations in aggregate labor and bond holdings, with all other variables being simple functions of these two unknowns or pinned down by steady-state conditions.

model_cali(N, B);
    Y = z*N;
    w = z/mu;
    D = Y - w*N;
    r = ii0;
    tau = (B+G-B/(1+r)) / pop_taxed;

    N == ne;
    B == bp;

    PA = mu*w/z*Y / (1-(1-theta)*beta);
    PB = Y / (1-(1-theta)*beta);
    S = 1;
    pii = 0;
end;

All var_agg unknowns need to be initialized right after declaration, unless their values are returned by an alternative model block (here, the model_cali block). See below for how the calibration solution is passed on when solving the main model.

Use the Toolbox

After parsing the script file, the toolbox generates MATLAB files: solve_vfi.m, solve_ss.m, solve_trans_linear.m, solve_trans_nonlinear.m, solve_cali.m, and other functions that can be called to solve the steady state, the stationary distribution, transition paths, and alternative systems (here, the calibration) of the model. The usage of the generated files is illustrated below.
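For orientation, here is a minimal sketch of the overall call sequence; every call is taken from the steps illustrated in the remainder of this section.
clear_hans                                                           % clear previously generated/cached files
hans hank1                                                           % parse hank1.hmod and generate the solver files
cali_rslt = solve_cali;                                              % calibrate the stationary equilibrium
options = struct;
trans_linear_rslt = solve_trans_linear(cali_rslt, options);          % linearized transition (Jacobian)
options.trans_linear_rslt = trans_linear_rslt;                       % reuse the Jacobian later
% ... set shock paths in options (e.g., options.beta_t), see below ...
trans_rslt = solve_trans_nonlinear(cali_rslt, cali_rslt, options);   % non-linear transition path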

Parse .hmod Files


Suppose you have hank1.hmod in the working directory. Call in MATLAB
clear_hans
hans hank1
Parsing...ok!
Compiling dpopt_vfi
Building with 'MinGW64 Compiler (C++)'.
MEX completed successfully.
ans = struct with fields:
             parameters: {'beta'  'gamma'  'nu'  'chi'  'w'  'r'  'D'  'tau'}
              var_shock: {'e'  'taxed'}
              var_state: 'b'
            var_pre_vfi: 'budget_n1'
             var_policy: {'c'  'bp'  'n'}
                var_aux: 'ne'
                var_agg: {'Y'  'pii'  'w'  'S'  'PA'  'PB'}
          var_agg_shock: {'beta'  'm_shock'}
         var_agg_params: {'G'  'theta'  'pop_taxed'  'phi'  'z'  'ii0'  'mu'  'B'}
         var_agg_assign: {'N'  'D'  'ii'  'r'  'tau'  'pstar'}
  var_agg_in_ind_params: {'w'  'D'  'r'  'tau'  'beta'}
          var_ind_in_eq: {'ne'  'bp'}
As shown, HANS parses the hmod file, compiles the MEX files, and returns a structure that describes the model.
clear_hans in the first line clears any previously generated cached files.
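If you want to keep the returned model description in a variable, MATLAB's standard command/function duality applies; the following is a sketch that relies only on that equivalence.
model_info = hans('hank1');   % equivalent to the command form `hans hank1`, but stores the result
model_info.var_agg            % e.g., inspect the declared aggregate unknowns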

Solve the Stationary Equilibrium

Call the generated function solve_cali to calibrate the model:
tic;
cali_rslt = solve_cali;
Evaluating at var_agg: 1, 5.6,
VFI converged (metric_v) 0.00168503, (metric_pol) 0 in 476 iterations
Range of b: 0, 32.3227
Equilbirium residual: -0.0405098, -0.323861,
Iteration  Line Search  Func-count    ||f(x)||^2      fnorm(x)     Norm of Step    Step Size   Algorithm
    0          0            1       1.065268e-01   3.263845e-01
Evaluating at var_agg: 1.0001, 5.6,
VFI converged (metric_v) 8.03659e-05, (metric_pol) 0 in 216 iterations
Range of b: 0, 32.3227
Equilbirium residual: -0.0404013, -0.323684,
Evaluating at var_agg: 1, 5.60056,
VFI converged (metric_v) 4.51514e-06, (metric_pol) 0 in 222 iterations
Range of b: 0, 32.3227
Equilbirium residual: -0.0405116, -0.323222,
Evaluating at var_agg: 1.03796, 5.8252,
VFI converged (metric_v) 3.06396e-05, (metric_pol) 0 in 386 iterations
Range of b: 0, 31.3304
Equilbirium residual: -4.0175e-06, 0.000127707,
    1          1            4       1.632513e-08   1.277698e-04   2.283784e-01   1.000000e+00   Newton
Evaluating at var_agg: 1.03796, 5.82509,
VFI converged (metric_v) 4.45515e-06, (metric_pol) 0 in 140 iterations
Range of b: 0, 31.3304
Equilbirium residual: -3.37884e-09, 1.08386e-08,
    2          1            5       1.288908e-16   1.135301e-08   1.171250e-04   1.000000e+00   Broyden
Evaluating at var_agg: 1.03796, 5.82509,
VFI result reused
Range of b: 0, 31.3304
Equilbirium residual: -3.37884e-09, 1.08386e-08,
fprintf('Time used for calibration at the steady state:\n');toc;
Time used for calibration at the steady state:
Elapsed time is 1.122064 seconds.
cali_rslt.var_agg
ans = struct with fields:
          N: 1.0380
          B: 5.8251
       beta: 0.9860
    m_shock: 0
          Y: 1.0380
          w: 0.8333
          D: 0.1730
          r: 0.0050
        tau: 0.1159
         PA: 6.4111
         PB: 6.4111
          S: 1
        pii: 0
As shown, the function returns a structure that describes the calibrated stationary equilibrium.
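As a quick sanity check, a couple of the steady-state relations can be verified directly from the returned structure; this is a sketch that uses the field names printed above and the parameter values G = 0 and pop_taxed = 0.25 set in the hmod file.
ss = cali_rslt.var_agg;
disp(ss.D - (ss.Y - ss.w*ss.N))              % dividends: D = Y - w*N, should be ~0
disp(ss.tau - (ss.B - ss.B/(1+ss.r))/0.25)   % tax rule with G = 0 and pop_taxed = 0.25, should be ~0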

Solve the Linearized Transition Path as a Warmup

We solve a trivial linearized transition path to obtain the Jacobian matrix of individual aggregates with respect to parameters.
The Jacobian matrix will be reused when solving the non-linear transition paths later.
options = struct;
trans_linear_rslt = solve_trans_linear(cali_rslt, options);

Solve the Transition Path with the Liquidity Shock

The liquidity shock raises the discount factor for all households. In a New-Keynesian framework, this is a negative demand shock that could drive the economy to the zero lower bound.
Let's first generate the liquidity shock used in the original paper: β is raised by a small amount (a factor of 1.001638) for 33 periods.
T = 200;
beta_factor = 1.001638;
beta_t = cali_rslt.var_agg.beta*ones(1,T);
beta_t(1:33) = cali_rslt.var_agg.beta * beta_factor;
figure; plot(beta_t); title('beta')
We then pass the shock path in and solve for the transition path after the liquidity shock hits.
options = struct;
options.beta_t = beta_t;
options.trans_linear_rslt = trans_linear_rslt;
options.trans_solver = 'Newton';
tic;
trans_nonlinear_rslt_zlb = solve_trans_nonlinear(cali_rslt, cali_rslt, options);
Iteration  Line Search  Func-count    ||f(x)||^2      fnorm(x)     Norm of Step    Step Size   Algorithm
    0          0            1       4.486206e+00   2.118066e+00
    1          1            2       6.245582e-01   7.902899e-01   4.268511e+00   1.000000e+00   Newton
    2          1            4       8.501378e-02   2.915712e-01   3.329662e+00   1.000000e+00   Newton
    3          1            6       1.200015e-03   3.464123e-02   5.306648e-01   1.000000e+00   Newton
    4          1            8       8.453417e-06   2.907476e-03   5.548852e-02   1.000000e+00   Newton
    5          1           10       1.127025e-07   3.357119e-04   9.155388e-03   1.000000e+00   Newton
    6          1           12       1.286371e-10   1.134183e-05   5.020893e-04   1.000000e+00   Newton
fprintf('Time used for solving the transition path after the liquidity shock:\n');toc;
Time used for solving the transition path after the liquidity shock:
Elapsed time is 2.615578 seconds.
Note that we have reused the Jacobian of individual aggregates with respect to parameters in trans_linear_rslt by passing it to options. We also use a 'Newton' solver.
We now inspect the transition path. As shown below, the liquidity shock indeed drives the nominal interest rate to the zero lower bound, leading to substantial falls in output and inflation.
varnames = {'Y','pii','ii'};
for i=1:length(varnames)
    figure; hold on;
    var = varnames{i};
    plot(trans_nonlinear_rslt_zlb.var_agg_t.(var)(1:40),'-','LineWidth',2.0);
    legend({'No Forward Guidance'},'FontSize',14,'Location','Best');
    title(var);
end
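To verify how long the zero lower bound binds along this path (the 20 periods referenced in the next section), one can count the periods in which the nominal rate sits at zero; the sketch below uses the same ii path plotted above, with 1e-8 as an arbitrary numerical threshold.
zlb_periods = sum(trans_nonlinear_rslt_zlb.var_agg_t.ii(:) < 1e-8);   % periods with ii pinned at zero
fprintf('The ZLB binds for %d periods\n', zlb_periods);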

Solve the Transition Path with the Liquidity Shock and Forward Guidance

As shown above, the liquidity shock causes the zero lower bound to bind for 20 periods.
Forward guidance commits to setting the nominal interest rate at zero for three extra periods. This can be achieved by assigning a very negative Taylor rule shock for the first 23 periods, since the nominal interest rate is bounded below by zero in the equilibrium conditions.
m_shock_t = zeros(1,T);
m_shock_t(1:23) = -100;      % a large negative number ensures the rate is pegged at zero
m_shock_t(24) = -0.00092;    % this adjustment follows MNS's original paper
 
options = struct;
options.beta_t = beta_t;
options.m_shock_t = m_shock_t;
options.trans_linear_rslt = trans_linear_rslt;
options.HANS_x = trans_nonlinear_rslt_zlb.HANS_x;
options.trans_solver = 'Newton';
trans_nonlinear_rslt_zlb_and_fg = solve_trans_nonlinear(cali_rslt, cali_rslt, options);
Iteration  Line Search  Func-count    ||f(x)||^2      fnorm(x)     Norm of Step    Step Size   Algorithm
    0          0            1       1.690161e-02   1.300062e-01
    1          1            2       1.548295e-02   1.244305e-01   1.815699e+00   1.000000e+00   Newton
    2          1            4       9.094189e-06   3.015657e-03   2.100678e-01   1.000000e+00   Newton
    3          1            6       9.868972e-06   3.141492e-03   3.518881e-02   1.000000e+00   Newton
    3          2            7       7.076167e-07   8.411996e-04   1.759440e-02   5.000000e-01   Newton
    4          1            9       1.264677e-06   1.124578e-03   2.507298e-03   1.000000e+00   Newton
    4          2           10       6.810113e-07   8.252341e-04   1.253649e-03   5.000000e-01   Newton
    5          1           12       1.165918e-07   3.414554e-04   7.582783e-03   1.000000e+00   Newton
    6          1           14       6.522481e-08   2.553915e-04   7.383555e-03   1.000000e+00   Newton
    7          1           16       1.251368e-10   1.118645e-05   9.005057e-04   1.000000e+00   Newton
Note that we pass the previously solved transition path, HANS_x, into options as a warm start for solving the model with forward guidance. This is not necessary but speeds up the solution.
We next compare the transition paths with and without forward guidance. This replicates Figures 7-9 in their paper.
varnames = {'Y','pii','ii'};
for i=1:length(varnames)
    figure; hold on;
    var = varnames{i};
    plot(trans_nonlinear_rslt_zlb.var_agg_t.(var)(1:40),'-','LineWidth',2.0);
    plot(trans_nonlinear_rslt_zlb_and_fg.var_agg_t.(var)(1:40),'--','LineWidth',2.0);
    legend({'No Forward Guidance','With Forward Guidance'},'FontSize',14,'Location','Best');
    title(var);
end
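For a quick numerical read of the figures, the two paths can also be compared directly, for example the output difference on impact; this is a sketch that uses the same result structures as above.
dY = trans_nonlinear_rslt_zlb_and_fg.var_agg_t.Y(1:40) - trans_nonlinear_rslt_zlb.var_agg_t.Y(1:40);
fprintf('Output difference (with FG minus without FG) on impact: %g\n', dY(1));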