The consensus on modelling Brexit

In recent weeks there have been a number of high-profile reports on the economic consequences of a vote to leave the European Union. Among others, the OECD, HM Treasury and we at the National Institute have all now published estimates of what the economic landscape might look like in the immediate aftermath of a leave vote on June 23rd. NIESR's analysis of the short- and long-run impact is set out in Baker et al (2016).¹

At the heart of each of these pieces of analysis lies the same economic framework, the National Institute's Global Econometric Model (NiGEM). What is more, all three reach the same qualitative projections: there will be a sharp fall in sterling and in economic activity. This means a temporary rise in the rate of inflation and a drop in GDP growth relative to a world in which the UK votes to remain in the EU. Many of the transmission mechanisms that drive these qualitative results are common across the studies, in spirit if not in exact magnitude: risk premia and borrowing costs rise, and uncertainty weighs on consumption and investment intentions. This raises the probability of recession in the UK. Under the NIESR analysis, for example, the likelihood of a year-long contraction in output rises from around 1 in 20 to closer to 1 in 5.

And yet each institution ultimately comes to a quantitatively different conclusion. The question, then, is: how? And, secondly, does this matter?

Crucially, an economic model is a tool and gives no meaningful insight by itself. How that tool is applied is of first-order importance to the result it produces. Even using the same model, independent economists may reasonably differ in their views of exactly which inputs are appropriate to feed into it, and so they will arrive at different outputs. The validity of the output depends on the defensibility of those inputs, so it is vital that all judgements and assumptions introduced to the model are transparent. This allows for a frank and honest discussion about the merits of each.

In the case of the EU referendum, each institution (the OECD, HM Treasury and NIESR) has used its own set of techniques to form a unique constellation of judgements about how key variables may move following a vote to leave the EU. These are summarised in Table 1.

Table 1: Summary of assumptions and outputs from the OECD, HM Treasury and NIESR work on the EU referendum

Calibrations have been set in a number of different ways, varying by institution and by the variable in question. For instance, HMT calibrate the impact of increased uncertainty by estimating a Vector Autoregressive (VAR) model and then setting endogenous shocks in NiGEM to match the impulse responses given by the VAR. NIESR, on the other hand, chose to introduce uncertainty directly as a variable in the model, using information from betting markets to calibrate the size of the shock to this new variable.
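To make the first of these approaches concrete, the sketch below estimates a small VAR and extracts the impulse response of output growth to an uncertainty shock, the kind of target path that endogenous shocks in a structural model would then be tuned to reproduce. It is purely illustrative: the two-variable specification, the synthetic data and the lag choice are assumptions made for the example, not HMT's actual setup.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic quarterly series standing in for an uncertainty index and GDP
# growth; in practice these would be measured data. Purely illustrative.
rng = np.random.default_rng(42)
n = 80
uncertainty = rng.normal(size=n)
gdp_growth = np.empty(n)
gdp_growth[0] = rng.normal()
for t in range(1, n):
    # Illustrative data-generating process: uncertainty drags on growth.
    gdp_growth[t] = -0.3 * uncertainty[t - 1] + rng.normal(scale=0.5)

data = pd.DataFrame({"uncertainty": uncertainty, "gdp_growth": gdp_growth})

# Step 1 (HMT-style): estimate the VAR and trace out impulse responses.
results = VAR(data).fit(maxlags=4)
irf = results.irf(periods=12)  # responses over 12 quarters

# Orthogonalised response of GDP growth to a one-standard-deviation
# uncertainty shock: the target path that endogenous shocks in a model
# such as NiGEM would then be scaled to match.
i = data.columns.get_loc("gdp_growth")
j = data.columns.get_loc("uncertainty")
print(np.round(irf.orth_irfs[:, i, j], 3))
```

The NIESR alternative dispenses with the intermediate VAR step: with uncertainty included directly as a model variable, the betting-market-implied shock is applied straight to that variable.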

All three pieces of analysis apply shocks to corporate and household borrowing costs, government term premia and the exchange rate. For corporate and household borrowing costs, the shocks applied by HMT and the OECD are larger than those in the NIESR exercise. One reason is that the NIESR analysis shocks uncertainty directly, which the OECD does not, and by a much larger magnitude than the HMT analysis, so less of the effect of uncertainty acts via risk premia on lending. For government borrowing costs, by contrast, the NIESR work judges a larger increase in the term premia appropriate. Although larger on impact, the NIESR shock decays at a much more rapid rate, so by the end of 2017 it is actually smaller than the more persistent shocks in the OECD and HMT analyses.
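The interaction between impact size and persistence is easy to see with stylised numbers. In the sketch below the impact sizes (1.5 and 0.8 percentage points) and quarterly decay factors (0.6 and 0.95) are assumptions chosen purely to illustrate the crossover; they are not the published calibrations.

```python
import numpy as np

# Stylised term-premium shock paths in percentage points by quarter; the
# impact sizes and quarterly decay factors are illustrative assumptions,
# not the published calibrations.
quarters = np.arange(7)              # e.g. 2016Q3 through 2018Q1
fast_decay = 1.5 * 0.6 ** quarters   # larger on impact, dies away quickly
persistent = 0.8 * 0.95 ** quarters  # smaller on impact, long-lasting

for q in quarters:
    print(f"quarter {q}: fast-decay {fast_decay[q]:.2f}pp, "
          f"persistent {persistent[q]:.2f}pp")

# By quarter 2 the fast-decaying shock (0.54pp) is already below the
# persistent one (0.72pp), mirroring the crossover described above.
```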

Regarding the exchange rate, all three institutions arrive at very similar calibrations: a depreciation of between 10 and 12 per cent on impact.² However, the three seem to reach this value by different methodologies, with NIESR using a particular historic episode together with current market data, while HMT average across the existing estimates in the literature.
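As a purely hypothetical illustration of these two routes, the snippet below averages a set of made-up literature estimates (the HMT-style route) and anchors a second calibration on an observed move in a past episode scaled by current market information (the NIESR-style route). None of the figures are the actual estimates used by either institution.

```python
import statistics

# Hypothetical literature estimates of the post-vote sterling depreciation,
# in per cent; made-up numbers standing in for the actual studies.
literature_estimates = [8.0, 10.5, 12.0, 13.5, 11.0]

# HMT-style route: average across existing estimates in the literature.
averaged = statistics.mean(literature_estimates)
print(f"averaged calibration: {averaged:.1f} per cent")

# NIESR-style route: anchor on the observed move in a past depreciation
# episode, scaled using current market information (values hypothetical).
historic_move, market_scaling = 14.0, 0.8
print(f"episode-based calibration: {historic_move * market_scaling:.1f} per cent")
```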

So, does this reliance on modeller judgement render the analytical exercises useless? Far from it! As long as the underlying judgements are clearly laid out and defensible, such exercises likely represent the most effective and honest way of framing our thinking about the aftermath of a large socio-economic event such as a referendum. Estimates by a range of institutions, each with its own views, act as a form of sensitivity analysis around any one analyst's specific set of assumptions. In the case of the EU referendum, the central result of lower output, a falling pound and higher inflation appears robust to the range of assumptions made by a large number of economists with a variety of methodologies and ideologies, even if the precise quantitative estimates vary at times.


¹ Much attention was paid to the 20 per cent depreciation of the sterling effective exchange rate in Baker et al (2016). This was the immediate 'jump' down and was driven not only by the 12 per cent shock but also by the endogenous responses of the rest of the NiGEM model. The actual depreciation in 2017 averages 16 per cent.

² Other studies of note include those by PwC, CEP (LSE), Oxford Economics and Economists for Brexit.
