
Automatic Forecasting of the M3 Data


In a recent post on his excellent Hyndsight blog, Rob Hyndman compared the results of the R forecast package with those of some commercial automatic forecasting software packages using data from the M3 forecasting competition.

We don’t consider EViews to be an automatic forecasting package, but EViews does include two of the most widely used forecasting techniques: Box-Jenkins ARIMA models and Error, Trend, Season (ETS) exponential smoothing models. It also includes “automatic selection” versions of both techniques, letting EViews decide the best specification for each series.

You can view a demonstration of automatic ARIMA forecasting in EViews here:





And ETS smoothing here:


Although the underlying code that calculates the automatic ARIMA and ETS models in EViews is not open source, we are open about the algorithms used, and, indeed, the ETS calculations are very similar to those in Hyndman’s ETS module in the R forecast package.

We thought it would be interesting to compare the results of EViews’ automatic routines with those provided by Dr Hyndman. 

The first step we took was to import the M3 data into EViews. The data are arranged in a slightly complicated fashion in the original Excel file, so we had to write an EViews script to import it all properly. You can download the program and resulting EViews workfile¹.

Our second step was to verify the SMAPE calculations performed by Dr Hyndman on the automatic forecasting software packages’ results from the M3 competition. We used another EViews program to do this.² Satisfyingly, we obtained the same results as Dr Hyndman (which means that we, too, were unable to reproduce the originally cited SMAPE values), indicating that our data import was successful.
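
For reference, the symmetric MAPE used throughout this comparison is the M3-competition version: for an $h$-step-ahead forecast,

$$\text{SMAPE} = \frac{1}{h}\sum_{t=1}^{h}\frac{200\,\lvert Y_t - F_t\rvert}{Y_t + F_t},$$

where $Y_t$ is the actual value and $F_t$ the forecast.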

Our final step was to use EViews’ ETS smoothing and automatic ARIMA routines to produce our own forecasts (following Hyndman, and modern forecasting best practices, we also calculated the SMAPE of the average of the ETS and ARIMA forecasts). We wrote a straightforward EViews program³ to do this (a minimal sketch of the per-series scoring step follows the table), which produced the following results:

Routine      EViews Mean SMAPE      R Mean SMAPE
ETS          13.096                 13.13
ARIMA        13.719                 13.85
Average      12.775                 12.88
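
For a single series, the scoring step amounts to something like the following (a minimal sketch only; the series names are hypothetical, and we assume the @smape function takes the actual and forecast series in that order):

' average the two automatic forecasts and score all three candidates
series f_avg = (f_ets + f_arima)/2
scalar smape_ets = @smape(actual, f_ets)
scalar smape_arima = @smape(actual, f_arima)
scalar smape_avg = @smape(actual, f_avg)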

These are very close to the results obtained by Dr Hyndman using the R forecast package, which is to be expected since the same econometric techniques are used. We attribute the slightly better performance of the EViews algorithms to differences between the internal optimization engine in EViews and that in R, and to the fact that the EViews autoarma procedure detects whether data should be estimated in logs.

As well as calculating SMAPE, our program went one step further and also calculated the number of times the ARIMA forecast beat ETS smoothing (in terms of SMAPE), and the number of times the average beat each individual method:

ARIMA better than ETS       44.29%
Average better than ETS     52.55%
Average better than ARIMA   61.74%

These numbers back up the mean SMAPE numbers above – ETS smoothing produces better forecasts than ARIMA a slight majority of the time, and averaging produces better forecasts than either individual method most of the time.

ETS smoothing is often cited as producing better forecasts than ARIMA models, and our results agree with this opinion, at least on the M3 data. They also support the theory that averaging models can produce more accurate forecasts than simply using the best individual forecast.



¹ Note that EViews 9 is required to run the program.
² Note that you will require EViews 9 with a build date after October 2015 to run this program (since the @smape function to calculate symmetric MAPE was added in October 2015).
³ The default settings on the autoarma routine were used, except in cases where a limited number of observations meant we had to reduce the search space to a maximum of 1 AR and 1 MA term. The default ETS settings were used, with the exception that, following recommendations from the original authors, multiplicative trend and seasonal terms were disallowed.


Welcome to “It’s About Time”

Meet EViews’ dynamic team of economists, scientists, and developers as we blog about diverse topics that will give you more insight into EViews’ capabilities and applications. From discussing current time series and econometric principles to comparing EViews’ performance against the competition, we will have an interesting story to tell you. Whether you are a professional user or new to EViews, you will discover new ways to use EViews.

Leave us a comment … we want to hear from you. Quite possibly, you can influence the future of EViews with your opinion. We have listed "Our Favorite Blogs" we believe you will find useful. Also remember to follow us by submitting your email so you will be notified of any new blog posts.










Add-in Round Up

In this section of the blog we provide a summary of the Add-ins that have been released or updated in the previous few months, and we announce the winner of our “Add-in of the Quarter” prize!

As a reminder, EViews Add-ins are additions to the EViews interface or command language written by our users or the EViews development team and released to the public. You can install Add-ins to your EViews by using the Add-ins menu from within EViews, or by visiting the EViews website.

The past few months have seen the release of three new Add-ins: HEGY, Backtest and FAVAR.


HEGY

The HEGY add-in, written by Nicolas Ronderos, performs the HEGY seasonal unit root tests on biannual, quarterly or monthly data, based on papers by Hylleberg, Engle, Granger and Yoo¹, Franses², and Franses and Hobijn³.

Seasonal unit root tests have become popular in recent years as more emphasis has been placed on studying seasonal patterns in economic data, and in particular the stationarity of such data at seasonal frequencies.

The HEGY unit root test is probably the most popular seasonal unit root test, and Nicolas’ Add-in does a great job of implementing it in EViews.

Backtest

The Backtest Add-in was written as a side project by Rebecca, a member of the EViews development team. The Add-in calculates a number of different metrics to evaluate portfolio properties and performance, and allows the user to enter a range of different parameters for the evaluation. These metrics include value added, information ratio, notional exposure, and turnover.
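
As one example of these metrics (the standard textbook definition, not necessarily the Add-in’s exact formula), the information ratio measures active return per unit of active risk:

$$IR = \frac{\bar{R}_p - \bar{R}_b}{\sigma(R_p - R_b)},$$

where $R_p$ is the portfolio return and $R_b$ the benchmark return.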

This Add-in builds upon earlier Add-ins aimed at financial applications, such as the Fama-MacBeth, GetStocks, irrval, Mishkin, PairsTrade and TechAsis Add-ins.


FAVAR

FAVAR estimates a Factor Augmented Vector Auto-Regression using the two-step principal components approach of Bernanke, Boivin, and Eliasz⁴. It was written by EViews user Davaajargal Luvsannyam.

FAVAR models have become popular in macroeconomic analysis of monetary policy as they help to alleviate some of the issues associated with estimating low-dimension standard VAR and Structural VAR models. By using factor analysis to reduce the dimensionality of large data sets, FAVARs allow VAR modelling without loss of information.

The FAVAR Add-in is a fantastic addition to the macro-economic tools available in EViews.


Quarterly Prize

While both HEGY and FAVAR are impressive Add-ins that harness EViews’ power to offer robust tools for analysis, the winner of the “Add-in of the Quarter” title and the $500 prize is Davaajargal Luvsannyam’s FAVAR Add-in … Congratulations!

For more information on writing Add-ins, you can read the Add-in chapter of the online help, visit the Add-in writer’s forum, or send an email to support@eviews.com.  If you would like to submit an Add-in, or would like more information on the Quarterly Prize, or student Add-in writing sponsorship opportunities, please also email support@eviews.com.





¹ Seasonal integration and cointegration, Journal of Econometrics, 44(1): 215-238, 1990.
² Seasonality, non-stationarity and the forecasting of monthly time series, International Journal of Forecasting, 7(2): 199-208, 1991.
³ Critical values for unit root tests in seasonal time series, Journal of Applied Statistics, 24(1): 25-48, 1997.
⁴ Measuring the effects of monetary policy - a factor-augmented vector autoregressive (FAVAR) approach, Quarterly Journal of Economics, 120(1): 387-422, 2005.


HAPPY HOLIDAYS

Created with EViews program xmas.prg.

Best Wishes And Warmest Thoughts 
For A Wonderful Holiday 
And A Happy New Year.

Join us at ASSA 2016

Object Linking and Embedding (OLE)

EViews has many users whose workflow includes repeatedly copying and pasting graphs and tables from EViews into the same Microsoft Office documents every time data or results are updated. We also have users who would like to give their colleagues the ability to adjust the appearance of their graphs and/or tables but without having the source workfile. One possible solution for both sets of users is to use Object Linking and Embedding (OLE).

Object Linking 

Object linking allows you to place your EViews graph or table into a Microsoft document but maintain a link/connection to the series or object in your EViews workfile. When changes are made to the source EViews workfile, those changes can be reflected immediately in the objects within the Microsoft document.



For example, we will use a monthly workfile dated from January 2014 to July 2016, but with data only up to December 2015. In the workfile, we have a group containing two series that will be used to create a graph. We select all the cells in the spreadsheet view of that group, copy the cell range to the clipboard, and then paste the range as a link into a Word document.


To paste the table as a link into the Word document, select Paste Special from the Paste button drop-down on the menu bar. From the Paste Special dialog, select Paste link and choose EViews Object from the As: box. Press OK.



Repeating the same process with the stacked bar view of our graph yields the Word document below, with two linked objects:


Now that we have the Word document completed, let’s say a month has passed and we now have data for January 2016. First, we open our EViews workfile and our Word document. Next, we open the group in the spreadsheet view and add our new observations for January 2016. Once the observations have been added, the linked objects in Word will automatically be updated.


In a different example, we have pasted an EViews equation object in a Microsoft Excel spreadsheet as a link.


After examining the equation results, we noticed that the Tbill series was missing. We then added Tbill and re-estimated our equation. Because it was linked, the equation output in the Excel spreadsheet was automatically updated.


Object Embedding

Object embedding gives you the ability to add an EViews object into a Microsoft document that is independent of the EViews workfile. Using OLE technology, the data is stored in the pasted object and therefore the workfile is no longer needed. This is similar to pasting a static image into Word, but it has the additional benefit that you can still modify the appearance of the image/object.
For example, we have selected an EViews graph and we copied and pasted it as an EViews object into a Microsoft Word document. (Note: Unlike linking, Paste must be selected instead of Paste link in the Paste Special dialog).

After saving your Word document, suppose you gave a copy of your document to a colleague, and they decide to make some changes to the graph (such as changing the graph type, line color, or the legend). Since you embedded an EViews object, your colleague can double-click the object in the Word document; an EViews session will start (if one isn’t already available) and they can immediately edit the graph within EViews, which will then update the object in Word.


If, for example, they want to add a title and change the default legend text from “Female” and “Male” to “Total Female” and “Total Male”:


We have demonstrated three simple examples, but more detailed information about OLE can be found in the Object Linking and Embedding (OLE) section of Extending EViews in the EViews User’s Guide.

Rolling Regression

Rolling approaches (also known as rolling regression, recursive regression or reverse recursive regression) are often used in time series analysis to assess the stability of the model parameters with respect to time.

A common assumption of time series analysis is that the model parameters are time-invariant. However, as the economic environment often changes, it may be reasonable to examine whether the model parameters are also constant over time. One technique to assess the constancy of the model parameters is to compute the parameter estimates over a rolling window with a fixed sample size through the entire sample. If the parameters are truly constant over the entire sample, then the rolling estimates over the rolling windows will not change much. If the parameters change at some point in the sample, then the rolling estimates will show how the estimates have changed over time.


EViews does not have extensive rolling regression functionality built in, but it does offer several ways to perform rolling regressions:

  1. Write an EViews program: we can estimate an equation for each sample in the roll, and then save the results. Posts on the EViews forum provide detailed examples, and a minimal sketch appears after this list.

You can also find more detailed examples of rolling regression under your Help menu in EViews. Go to: Help/Quick Reference/Sample Programs & Data/ then click the roll link for detailed examples.
  2. The "Roll" Add-In is a simple EViews program that is integrated into EViews, allowing you to execute the rolling regression program from a single equation object.
  3. Use the EViews rolling regression User Object: EViews allows us to create a new roll object and store various coefficients or statistics from each iteration of the roll. 
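
The core of the program in option 1 looks something like the following (a minimal sketch only; the series names, window size and saved statistics are all hypothetical):

' rolling least squares of y on a constant and x, 40-observation window
!window = 40
!n = @obsrange                      ' observations in the workfile range
equation eq_roll
matrix(!n-!window+1, 2) rollcoefs   ' one row of coefficient estimates per window
for !i = 1 to !n-!window+1
  smpl @first+!i-1 @first+!i+!window-2   ' shift the fixed-size window forward
  eq_roll.ls y c x
  rollcoefs(!i, 1) = eq_roll.@coefs(1)
  rollcoefs(!i, 2) = eq_roll.@coefs(2)
next
smpl @all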

Download and Install Roll User Object

To download and install the object class definitions from the EViews website, go to the EViews File menu, select Add-ins/Download User Objects, and select the Roll user object from the list box. Click on the Install button to download the Roll user object.


After completing the automatic installation procedure, EViews will report the status of the installation:

Once we have registered the roll user object, we can think of it as a built-in EViews object. More information about registering user objects can be found at our online help site.

Rolling Regression User Object Applied Example

In the following example, we will use the EViews workfile “Demo.wf1” that was saved to your computer when EViews was installed. The file can be found on your hard drive at: \EViews 9\Example Files\EV9 Manual Data\Chapter 02 - A Demonstration.



Consider the least squares analysis of a univariate time series LOG(M1) using a sample from 1952Q2 to 1992Q4 in EQ01. To assess the constancy of the coefficients on C, LOG(GDP), RS and DLOG(PR), we will create a new Roll object by clicking on Object/New Object and then selecting roll in the list of object types:



Click on OK, and EViews will ask whether to create the new Roll object from an existing equation or manually:



Since we will use the existing equation EQ01 in our workfile, click OK to accept the default. Then, we will see the next dialog which asks us to select the type of rolling regression:


Rolling Type

Fixed window: the window size specifies the fixed number of observations in each window, and the step size specifies how far ahead the window is moved each time:




Anchored at start (recursive rolling analysis): the starting date is fixed, and the window size grows as the ending date is advanced:



Anchored at end (reverse recursive rolling analysis): the ending date is fixed, and the window size shrinks as the starting date is advanced.
Select Fixed window and click OK to continue:



Click OK, and EViews will display basic estimation information for the rolling regression:


In addition to the summary view, EViews will display information about the coefficient statistics, residual statistics, likelihood statistics, members of the object, and the standard label information view for the Roll object. Click on View/View rolling coefficient statistics to display a dialog prompting users to select the coefficients and statistics to display:


Since we would like to see the fluctuations in the coefficients over the sample period, select Point estimates and Standard errors:



There are considerable variations in the rolling estimates.

Click on Proc/Extract rolling coefficient statistics; this allows users to save various coefficients or statistics from each iteration of the roll.


How We Decide Which Features To Add


As developers of econometric software, one of the most common questions we are asked is how we decide which features to add to the next release of EViews.

There isn’t an easy way to answer this question – the process is often fluid and is different for every feature. Feature ideas generally come to us from one of the following sources:
  • Directly from our user base, either on the EViews forum, through our technical support channels or through face to face meetings. 
  • From reading journal articles and textbooks to discover the latest trends in the field.
  • From visiting academic and professional conferences, such as ASSA, NABE or ISF.
  • From meetings held at our user conferences.
  • Research from our development team.

The recent release of EViews 9.5 gives us a chance to explore the process with some examples.

MIDAS

Perhaps the most anticipated feature in EViews 9.5 is MIDAS estimation, which allows estimation of regression models using data of different frequencies. MIDAS first came to our attention a few years ago during a casual conversation between our developers and one of our academic users at the Joint Statistical Meetings.

Our user suggested that EViews’ natural handling of data of different frequencies and our emphasis on time series analysis, coupled with MIDAS’ growing popularity, made it a great candidate for a new feature.

Following up on that discussion, our development team began researching what would be involved in adding MIDAS to EViews. MIDAS would be the first estimation technique in EViews that inherently uses data based on different workfile pages.

While later attending the EViews co-sponsored ISF conference, we also noticed that a large part of the conference was devoted to MIDAS estimation and forecasting as well as nowcasting. By this time we were convinced that MIDAS was an obvious choice to add to EViews.


Model Interface

The model interface improvements added in EViews 9.5 came about as a result of direct feature requests made during a user conference we held in Washington DC for government sector EViews users. The government sector tends to include heavy users of the EViews model object, and many agencies, such as the Federal Reserve, World Bank, IMF, EIA and United Nations, attended and offered suggestions and requests for ways we could improve an already extensive model interface.

For example, one of the agencies wanted a way to make their model available to the outside world, but also to protect it so that the official model could not be modified. This request brought about the model protection feature.

FIML

The introduction of variance restrictions to EViews’ FIML estimator came about following a discussion between one of our senior econometricians and the co-authors of a new textbook featuring VAR/VEC estimation in EViews. Our econometrician happens to be working on revamping the VAR/VEC estimation engine in EViews, with a view to adding a number of new features over the next few releases. One estimation technique the authors were interested in writing about was not easy to carry out without variance restrictions on FIML estimation, and since our econometrician was already working in that area, the feature was quickly added.

Group Preview

The Object Preview window was added to EViews 9 as a method for quickly viewing each object in your workfile. On release, EViews 9 displayed fairly sparse information for group objects, merely listing the names of the series contained in the group.

Extending the preview window for groups was always something we planned to implement after our EViews 9 release, but it was following a discussion between one of our developers and a large client in the financial sector that we decided to implement it as early as EViews 9.5. This particular client uses EViews to quickly view graphs of multiple series. Although EViews’ multi-graph tools, in particular the multi-graph slide show, make this task easy, we thought that adding a graphical view of the underlying series to the group preview window would improve their workflow even more.

pyeviews: Python + EViews


Since we love Python (who doesn’t?), we’ve had it in the back of our minds for a while now that we should find a way to make it easier for EViews and Python to talk to each other, so Python programmers can use the econometric engine of EViews directly from Python. So we did! We’ve written a Python package called pyeviews that uses COM to transfer data between Python and EViews (For more information on COM and EViews, take a look at our whitepaper on the subject).

Here’s a simple example going from Python to EViews. We’re going to use the popular Chow-Lin interpolation routine in EViews using data created in Python. Chow-Lin interpolation is a regression-based technique to transform low-frequency data (in our example, annual) into higher-frequency data (in our example, quarterly). It has the ability to use a higher-frequency series as a pattern for the interpolated series to follow. The quarterly interpolated series is chosen to match the annual benchmark series in one of four ways: first (the first quarter value of the interpolated series matches the annual series), last (same, but for the fourth quarter value), sum (the sum of the first through fourth quarters matches the annual series), and average (the average of the first through fourth quarters matches the annual series).
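
In outline (a sketch of the standard Chow-Lin formulation, not necessarily EViews’ exact implementation), the method posits a high-frequency regression

$$y_t = x_t'\beta + u_t, \qquad u_t = \rho\, u_{t-1} + \varepsilon_t,$$

where $y_t$ is the unobserved quarterly series and $x_t$ holds the quarterly indicator(s). Aggregating both sides to the annual frequency gives a feasible GLS regression that can be estimated from the observed annual data; the interpolated quarterly series is then the quarterly fitted value plus a distribution of each annual residual across its quarters, so that the chosen benchmark constraint (first, last, sum or average) holds exactly.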


We’re going to create two series in Python using the time series functionality of the pandas package, transfer them to EViews, perform Chow-Lin interpolation on our series, and bring the result back into Python. The data are taken from Bloem et al. (2001) in an example originally meant for Denton interpolation.

1.    Install the pyeviews package using your method of choice. We like the Anaconda distribution, which includes most of the packages we’ll need. Then, from a Windows command prompt:

conda install -c bexer pyeviews

Alternatively, if you’re not using Anaconda, head over to the pyeviews package at the Python Package Index and, at a Windows command prompt:

pip install pyeviews

Or, download the package, navigate to your installation directory, and use:

python setup.py install 
 
    For more details on installation, see our whitepaper.

2.   Start Python and create two time series using pandas. We’ll call the annual series "benchmark" and the quarterly series "indicator":

>>> import numpy as np
>>> import pandas as pa
>>> dtsa = pa.date_range('1998', periods = 3, freq = 'A')
>>> benchmark = pa.Series([4000.,4161.4,np.nan], index=dtsa, name = 'benchmark')
>>> dtsq = pa.date_range('1998q1', periods = 12, freq = 'Q')
>>> indicator = pa.Series([98.2, 100.8, 102.2, 100.8, 99., 101.6, 102.7, 101.5, 100.5, 103., 103.5, 101.5], index = dtsq, name = 'indicator')

3.   Load the pyeviews package and create a custom COM application object so we can customize our settings. Set showwindow (which displays the EViews window) to True. Then call the PutPythonAsWF function to create pages for the benchmark and indicator series:

>>> import pyeviews as evp
>>> eviewsapp = evp.GetEViewsApp(instance='new', showwindow=True)
>>> evp.PutPythonAsWF(benchmark, app=eviewsapp)
>>> evp.PutPythonAsWF(indicator, app=eviewsapp, newwf=False)

Behind the scenes, pyeviews will detect if the DatetimeIndex of your pandas object (if you have one) needs to be adjusted to match EViews' dating customs. Since EViews assigns dates to be the beginning of a given period depending on the frequency, this can lead to misalignment issues and unexpected results when calculations are performed. For example, a DatetimeIndex with an annual 'A' frequency and a date of 2000-12-31 will be assigned an internal EViews date of 2000-12-01. In this case, pyeviews will adjust the date to 2000-01-01 before pushing the data to EViews.

4.   Name the pages of the workfile:

>>> evp.Run('pageselect Untitled', app=eviewsapp)
>>> evp.Run('pagerename Untitled annual', app=eviewsapp)
>>> evp.Run('pageselect Untitled1', app=eviewsapp)
>>> evp.Run('pagerename Untitled1 quarterly', app=eviewsapp)

5.   Use the EViews “copy” command to copy the benchmark series in the annual page to the quarterly page, using the indicator series in the quarterly page as the high-frequency indicator and matching the sum of the benchmarked series for each year (four quarters) with the matching annual value of the benchmark series:

>>> evp.Run('copy(rho=.7, c=chowlins, overwrite) annual\\benchmark quarterly\\benchmarked @indicator indicator', app=eviewsapp)

6.    Bring the new series back into Python:

>>> benchmarked = evp.GetWFAsPython(app=eviewsapp, pagename='quarterly', namefilter='benchmarked')
>>> print(benchmarked)

                BENCHMARKED
    1998-01-01   867.421429
    1998-04-01  1017.292857
    1998-07-01  1097.992857
    1998-10-01  1017.292857
    1999-01-01   913.535714
    1999-04-01  1063.407143
    1999-07-01  1126.814286
    1999-10-01  1057.642857
    2000-01-01  1000.000000
    2000-04-01  1144.107143
    2000-07-01  1172.928571
    2000-10-01  1057.642857

7.   Release the memory allocated to the COM process (this does not happen automatically in interactive mode). This will close down EViews:

>>> eviewsapp.Hide()
>>> eviewsapp = None
>>> evp.Cleanup()

Note that if you choose not to create a custom COM application object (the GetEViewsApp function), you won’t need to use the first two lines in the last step. You only need to call Cleanup(). If you create a custom object but choose not to show it, you won’t need to use the first line (the Hide() function).


8.   If you want, plot everything to see how the interpolated series follows the indicator series:

>>> # load the matplotlib package to plot
>>> import matplotlib.pyplot as plt
>>> # reindex the benchmarked series to the end of the quarter so the dates match those of the indicator series
>>> benchmarked_reindexed = pa.Series(benchmarked.values.flatten(), index = benchmarked.index + pa.DateOffset(months = 3, days = -1))
>>> # plot
>>> fig, ax1 = plt.subplots()
>>> plt.xticks(rotation=70)
>>> ax1.plot(benchmarked_reindexed, 'b-', label='benchmarked')
>>> # multiply the indicator series by 10 to put it on the same scale as the benchmarked series
>>> ax1.plot(indicator*10, 'b--', label='indicator*10')
>>> ax1.set_xlabel('dates')
>>> ax1.set_ylabel('indicator & interpolated values', color='b')
>>> ax1.xaxis.grid(True)
>>> for tl in ax1.get_yticklabels():
...     tl.set_color('b')
>>> plt.legend(loc='lower right')
>>> ax2 = ax1.twinx()
>>> ax2.set_ylim([3975, 4180])
>>> ax2.plot(benchmark, 'ro', label='benchmark')
>>> ax2.set_ylabel('benchmark', color='r')
>>> for tl in ax2.get_yticklabels():
...     tl.set_color('r')
>>> plt.legend(loc='upper left')
>>> plt.title("Chow-Lin interpolation: \nannual sum of benchmarked = benchmark", fontsize=14)
>>> plt.show()



For more information on the pyeviews package, including a list of functions, please take a look at our pyeviews whitepaper on the subject.

References:

Bloem, A. M., Dippelsman, R. J., and Maehle, N. O. (2001). Quarterly National Accounts Manual: Concepts, Data Sources, and Compilation. International Monetary Fund.

Add-in Round Up for 2016 Q1

In this section of the blog, we provide a summary of the Add-ins that have been released or updated within the previous few months, and we announce the winner of our “Add-in of the Quarter” prize!

As a reminder, EViews Add-ins are additions to the EViews interface or command language written by our users or the EViews Development Team and released to the public. You can install Add-ins to your EViews by using the Add-ins menu from within EViews, or by visiting our Add-ins webpage.

Five new Add-ins have been released within the last few months:
  1. BFAVAR
  2. SRVAR
  3. TVSVAR
  4. FORCOMB
  5. TSCVAL

BFAVAR

The BFAVAR Add-in, written by Davaajargal Luvsannyam, estimates Factor Augmented Vector Auto Regression (FAVAR) models using the one-step Bayesian likelihood approach.

Unlike the FAVAR Add-in, which takes the two-step principal components approach to FAVAR model estimation, the BFAVAR Add-in takes a Bayesian perspective, treating the model parameters as random variables. The Add-in implements the multi-move Gibbs sampling explained in Bernanke, Boivin, and Eliasz (2005)¹. Bayesian likelihood-based estimation built on MCMC methods does, however, come at the cost of a considerable computational burden.

SRVAR

The SRVAR Add-in, also written by Davaajargal Luvsannyam, performs analysis of Bayesian Sign Restricted Vector Auto Regression (SRVAR) models using the flat Normal-inverse Wishart prior.

There is a fast-growing literature that identifies structural shocks by imposing sign restrictions on the responses of (a subset of) the endogenous variables to a particular structural shock. The SRVAR Add-in employs the Uhlig (2005) rejection method² to identify structural shocks, pinning down the impact of the model’s structural shocks recursively using the Cholesky decomposition within the Bayesian MCMC calculations.

TVSVAR

The TVSVAR Add-in, again written by Davaajargal Luvsannyam, performs Bayesian analysis of the Time Varying Structural Vector Auto Regression (TVSVAR) models introduced in Primiceri (2005)³.

A common assumption in VAR model analysis is that the VAR coefficients are constant over time. However, in many applications, it may be more appropriate to allow for time variation in the VAR coefficients. Following Primiceri, this Add-in implements a structural VAR model that allows for both stochastic volatility and time-varying regression parameters.

FORCOMB

The FORCOMB Add-in, written by Yongchen Zhao, provides a way to combine multiple candidate forecasts into a robust real-time forecast.

Time series forecasting is a continuously growing research area in business, finance, engineering, demography and many other domains, and improving forecast accuracy has received extensive attention from researchers. Many publications have observed that combining multiple forecasts improves overall forecast accuracy. This Add-in provides several types of robust (weighted) forecast combination techniques, including the S-After, L-After, h-After and L210-After methods, Sancetta (2010) MLS⁴, the simple average, the trimmed mean, the winsorized mean, and the Bates-Granger (1969)⁵ method.

TSCVAL

The TSCVAL Add-in, written by James Lamb and Rita Linets, performs rolling estimation and out-of-sample forecast evaluation for EViews’ equation and VAR objects. If the Add-in is called from an equation object, it returns tables and vectors containing cross-validation results for forecasts of the base (i.e., untransformed) form of the dependent variable. If the Add-in is called from a VAR object, it returns cross-validation results for forecasts of the base forms of all the endogenous variables.

Cross-validation (sometimes called forecast evaluation with a rolling technique) is a way of assessing predictive performance by measuring forecast errors. This Add-in provides 13 types of forecast error metrics, including mean squared error (MSE) and mean absolute error (MAE).

The TSCVAL Add-in is a fantastic addition to the forecast evaluation tools available in EViews.

Quarterly Prize

The EViews Development Team has decided that the TSCVAL Add-in contributed most significantly to the usage of EViews this quarter. This quarter’s $500 prize goes to James Lamb and Rita Linets, congratulations!
For more information on writing Add-ins, you can read the Add-in chapter of the online help or visit the Add-in writer’s forum.

If you would like to submit an Add-in, need more information on the Quarterly Prize, or have questions about writing Add-ins for EViews, please email support@eviews.com.

¹ Bernanke, B. S., Boivin, J., and Eliasz, P. (2005). “Measuring the Effects of Monetary Policy: A Factor-Augmented Vector Autoregressive (FAVAR) Approach,” Quarterly Journal of Economics, 120(1), pp. 387-422.
² Uhlig, H. (2005). “What are the effects of monetary policy on output? Results from an agnostic identification procedure,” Journal of Monetary Economics, 52(2), pp. 381-419.
³ Primiceri, G. E. (2005). “Time Varying Structural Vector Autoregressions and Monetary Policy,” Review of Economic Studies, 72, pp. 821-852.
⁴ Sancetta, A. (2010). “Recursive forecast combination for dependent heterogeneous data,” Econometric Theory, 26, pp. 598-631.
⁵ Bates, J. M., and Granger, C. W. J. (1969). “The Combination of Forecasts,” Operational Research Quarterly, 20, pp. 451-468.

Fan Chart

Fan charts are a method of visualizing a distribution of economic forecasts, pioneered by the Bank of England for their quarterly inflation forecasts.

We are often asked if EViews can produce fan charts. At its heart, a fan chart is simply a type of area band chart. EViews has been able to produce area band charts for a number of previous versions. So whenever we have been asked if EViews can produce fan charts, we have said “yes”.

Recently, we decided to go one step further and replicate an official Bank of England fan chart in EViews, and this blog post will document the steps required to perform the replication.

We have decided to replicate a recent inflation report fan chart, specifically the November 2015 inflation fan chart available from the Bank of England.




At first glance, this fan chart seems simple to reproduce in EViews: simply obtain the historical inflation series, along with the median and quantile forecast series used to create the fans, then put those series in a group and create an area band chart from them. However, it isn’t quite so easy, since the Bank of England does not provide the forecast quantile values; rather, it only provides distribution parameters for the assumed distribution of the forecasts.

Nevertheless, the first step in the replication is to obtain the historical inflation numbers. We’ll create a new EViews workfile to hold the inflation data by clicking on File/New/Workfile, and select quarterly as the frequency with a start date of 2009 and end date of 2018, giving us enough room for a few years of forecasts.

To retrieve the historical data, use the handy GetQuandl add-in to fetch the data directly from the Quandl website. If you don’t have the GetQuandl add-in installed, you can install it directly from within EViews by using the Add-ins/Download Add-ins menu item, then selecting the GetQuandl add-in and following the instructions to install.

Once installed, use the menu item Add-ins/Download Quandl Data to bring up the Quandl Add-in dialog. Next, enter the Quandl code for UK inflation, RATEINF/CPI_GBR, (which we found by performing a search on the Quandl.com website) in the top box, and then change the Import Choice option to be “Import into Page”.

Clicking on OK results in the data being brought into our workfile page as the series “value”.  Since the Bank of England fan charts deal with the annual growth rate of inflation, we will create a growth rate series by typing the command:

series cpig = @pcy(value)

We now have the required historical values.

To obtain the forecasts, first fetch the forecast distribution data from the Bank of England. The bank provides the data in an Excel file.

We can import this data directly into EViews by clicking on File/Import/Import from file and then enter the following URL into the File name box:

http://www.bankofengland.co.uk/publications/Documents/inflationreport/ir15novprob.xlsx

Once EViews has connected to the Bank of England's website, the Excel Read dialog box will guide you through the import process. All we need to do is check the box that instructs EViews to “Read series by row”, since the original data is in transposed format in Excel.

Since the data from either Bank of England or Quandl may change in the future, we have created a snapshot of the data used in this blog.

Now that we have the forecast mode, skewness and uncertainty, we can create the forecast quantiles for the fan chart. The calculations used to create the quantiles are a little complicated, so we wrote a simple EViews program to compute them for us.
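
For background (a sketch of the two-piece normal distribution commonly used for Bank of England fan charts; this is the textbook form, not necessarily the exact parameterization in our program): the forecast density with mode $\mu$ and half-specific standard deviations $\sigma_1 \ne \sigma_2$ is

$$f(x) = \begin{cases} A\,\exp\!\left(-\dfrac{(x-\mu)^2}{2\sigma_1^2}\right), & x \le \mu,\\[4pt] A\,\exp\!\left(-\dfrac{(x-\mu)^2}{2\sigma_2^2}\right), & x > \mu, \end{cases} \qquad A = \frac{2}{\sqrt{2\pi}\,(\sigma_1+\sigma_2)},$$

so the published mode, uncertainty and skew parameters determine $(\mu, \sigma_1, \sigma_2)$, and each quantile can be recovered from the appropriately rescaled normal CDF on either side of the mode.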

The program generates the forecast quantiles as the series f01-f07. We will open the actual data along with the forecasts as a group by selecting the CPIG series and the forecast series, right-clicking and selecting Open/As Group. Name this group by clicking on the Name button and entering “fgroup” as the name.

We are now ready to make our fan chart. With fgroup open, click View/Graph to bring up the Graph dialog, and then change the Graph type to Mixed. This allows us to then select the Mixed settings node of the pages tree to set the Mixed types properly. We assign the first series and last series to be Lines, and the rest to be “Area Band”. Click OK to produce the fan chart:



All that remains is to change some of the graph settings to have it match the Bank of England’s style. First, change the sample of the graph using the slider-bar to set the start date as 2011Q3 and the end as 2017Q3.

Next, the following customization options are made under the Graph Options dialog (available by double-clicking on the graph) and expanding (+) the different options.


  • Graph Elements/Fill Areas, change the first color to RGB(248,191,172), the second color to RGB(240,134,112), and the third color to RGB(234,76,73).
  • Graph Elements/Lines and Symbols, change the first and second lines’ color to RGB(228,5,31) and the width to 2pt.
  • Axes & Scaling/Data scaling, change all 8 series assignments to Right.
  • Axes & Scaling/Data scaling, change Right axis scale endpoints to User specified with values of (-3, 6)
  • Frame & Size/Color & Border, change Frame border Axes to the second option (i.e. a border on each side of the graph).


To finish the customization, we need to freeze the graph into its own graph object. We do this by clicking on the Freeze button, then the Name button and naming our graph "CPIGRAPH".

Next, we add some shading over the forecast period by right-clicking on our graph and clicking on Add lines & shading. We add a shaded area starting at 2015Q3 and ending in 2017Q3. Change the color of the shade to RGB(239,239,239).

Additionally, we add a horizontal line at the Bank’s 2% target by again right-clicking and selecting Add lines & shading, and adding a Horizontal – Right axis line (Orientation drop-down list) at a data value of 2. We set the line’s width to 2pt and check the “Draw the line on top” box.

Finally, we remove the legend from the graph by selecting the legend and pressing the Delete key.



Note: We have released a FanChart Add-in for creating Bank of England style fan charts. You can download it here.

Impulse Responses by Local Projections


Author and guest post by Eren Ocakverdi.


Vector Autoregression (VAR) is a standard tool for analyzing interactions among variables and making inferences about the historical evolution of a system (e.g., an economy). When doing so, however, interpreting the estimated coefficients of the model is generally neither an easy nor a useful task due to the complicated dynamics of VARs. As Stock and Watson (2001) aptly put it, impulse responses are reported as a more informative statistic instead.

The Impulse Response Function (IRF) measures the reaction of the system to a shock of interest. Unfortunately, when the underlying data generating process (DGP) cannot be well approximated by a VAR(p) process, IRFs derived from the model will be biased and misleading. Jordà (2005) introduced an alternative method for computing IRFs based on local projections that does not require specification and estimation of the unknown true multivariate dynamic system itself¹.
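
In outline (a sketch of Jordà’s estimator, using notation close to the 2005 paper): for each horizon $s = 0, 1, \ldots, h$, a separate least-squares projection of the future value on today’s information is estimated,

$$y_{t+s} = \alpha^{s} + B_1^{s+1} y_{t-1} + B_2^{s+1} y_{t-2} + \cdots + B_p^{s+1} y_{t-p} + u_{t+s}^{s},$$

and the impulse response at horizon $s$ to an experimental shock $d_i$ is simply $\widehat{IR}(s, d_i) = \hat{B}_1^{s+1} d_i$, with no VAR iteration involved.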

The usual presentation of IRFs is through visualizing the dynamic propagation mechanism accompanied by error bands. In addition to marginal error bands, Jordà (2009) introduced two new sets of bands to represent uncertainty about the shape of the impulse response and to examine the individual significance of coefficients in a given trajectory. In this framework, it becomes straightforward to impose restrictions on impulse response trajectories and formally test their significance.


The EViews add-in “localirfs” implements the methodology outlined in Jordà (2009). The add-in is designed as a complementary tool for the existing VAR object and can also be run from the command line. In that respect, it is similar to the “hdecomp” or “svarpatterns” add-ins. For details, please see the documentation that comes with the add-in². Note that although the local projection methodology does not depend on the previous estimation of a VAR, the localirfs add-in must be used on an existing EViews VAR object.

Jordà (2009) uses the three-variable monetary VAR model of Stock and Watson (2001), where the inflation, unemployment, and federal funds rate variables used in that study are extended to cover the 1960q1-2007q1 period. These particular data are used for demonstrating the use of the add-in as well.

First, we import the data into EViews:

import .\swqdata.csv names=(P,UN,FF) @freq q 1960Q1



Then build a VAR model with the longest possible lag:

var swmodel.ls 1 14 p un ff

Now let the add-in find the optimal lag length based on a corrected version of AIC using the covariance matrix “unadjusted” for degrees of freedom.

swmodel.localirfs(imp=4,aicc)



You can safely skip previous steps if you have already built a VAR model of your own. In that case, it is straightforward to obtain the local projection IRFs and compare the results with regular IRFs generated by the VAR itself:

swmodel.localirfs(horizon=24,imp=4,compare,charts)

If you like, you can also use the GUI:



The resulting output will be a graph object that contains 3x3 charts similar to those produced by EViews’ VAR object. Solid lines with circles are regular IRFs, while remaining solid lines are local projection IRFs with associated marginal error bands:


Impulse response coefficient estimates can suffer from serial correlation, which may lead to wider marginal error bands. Conditional error bands help remove the variability caused by the serial correlation. Conditional error bands are consistent with the joint null of significance and give a better sense about the significance of individual responses. In the absence of correlation among impulse response coefficients, marginal and conditional bands would be similar (Jordà, 2009).
In order to obtain conditional error bands and save the correlation matrix of estimated impulse coefficients to the workfile, simply run the following:

swmodel.localirfs(horizon=24,imp=4,cond=2,cormat,charts)

Note that the chart titles include the resulting p-values of two null hypotheses: i) “Joint” refers to the null that all the response coefficients are jointly zero; ii) “Cumulative” refers to the null that the accumulated impulse response after 24 periods is zero.




Another way to construct error bands is by applying Scheffé’s method to approximate simultaneous confidence coverage³. For instance, you can obtain the percentile bound for the 50th percentile by⁴:

swmodel.localirfs(horizon=24,alpha=0.5,imp=4,cond=3,charts)



You can also test the equality of two responses. In order to compare the response of unemployment (2nd variable) to a shock in inflation (1st variable) with the response of the interest rate (3rd variable) to a shock in the interest rate (3rd variable):

swmodel.localirfs(horizon=24,imp=4,equality="2 1 3 3")


The result will be a 1x2 matrix, where the columns hold the p-values associated with the hypothesis testing of joint and accumulated equality, respectively.


Finally, you can create a conditioning response path in order to examine the change in the system’s behavior. Jordà (2009) imposes a restriction on the response of inflation (1st variable) to a shock in interest rate (3rd variable) by subtracting 0.25 points from every coefficient.

First, we need to save the impulse response matrix:

swmodel.localirfs(respmat="irf",horizon=24,imp=4)



Next, we extract the IRF of interest and construct the conditioning path:

vector condition = @columnextract(irf01,3) - 0.25*@ones(@rows(irf01))

Now we are ready to see how the response of the interest rate to a shock changes due to the conditioning path:

swmodel.localirfs(horizon=24,imp=4,cfact="3 3 1 3",cpath="condition",charts)

Solid (blue) lines with squares and associated dashed (blue) lines are the original impulse responses with conditional error bands. The solid (red) line with circles is the counterfactual response in the bottom graph, while it denotes the conditional response given this counterfactual in the top panel.

The p-value at the bottom of the graph is a test result measuring the distance between the conditioning event and the sample estimates⁵.



As for the change in the response of unemployment (2nd variable) to a shock in inflation (1st variable):

swmodel.localirfs(horizon=24,imp=4,cfact="2 3 1 3",cpath="condition",pvalue=2,charts)



Unlike the previous chart, the p-value at the bottom of the graph is obtained from an F-test.



References:

Jordà, Ò. (2005). “Estimation and Inference of Impulse Responses by Local Projections,” American Economic Review, v. 95 (1), pp. 161–182.

Jordà, Ò. (2009). “Simultaneous Confidence Regions for Impulse Responses,” Review of Economics and Statistics, v. 91 (3), pp. 629–647.

Kilian, L., and Kim, Y. J (2009). “Do Local Projections Solve the Bias Problem in Impulse Response Inference?”, CEPR Discussion Paper Series 7266.

Lütkepohl, H., Staszewska-Bystrova, A. and Winker, P. (2013) “Comparison of Methods for Constructing Bands for Impulse Response Functions,” SFB649 Discussion Paper 2013-31, Humboldt University of Berlin.

Stock, J. H., and Watson, M. W. (2001). “Vector Autoregressions,” Journal of Economic Perspectives, v. 15 (4), pp. 101-115.



¹ Please refer to Kilian and Kim (2009) for a criticism of this approach.
² Special thanks to Rebecca of EViews for testing the code and comparing the output to that of the original GAUSS source code. The usual disclaimer applies.
³ Please refer to Lütkepohl et al. (2013) for a criticism of this method.
⁴ In his final version of the source code, Professor Jordà revised the computation of Scheffé error bands and adopted a step-down procedure, which creates wider intervals for the early responses. Please visit his website for details of his research: http://faculty.econ.ucdavis.edu/faculty/jorda/
⁵ In the original study the p-value is reported as 0.217, which can only be obtained if you construct a conditioning path by subtracting 0.30 instead of 0.25.



All About Excel

Microsoft Excel is still used by many of our users, and this post will quickly go over the different ways you can share and move data between EViews and Excel.

Native Excel File Support

EViews offers direct Excel file read and write capability. If you have data in an existing Excel spreadsheet and you wish to use it in an EViews workfile, simply drag and drop the Excel file onto an EViews workfile to start the import (see IMPORT command and Importing Data in our User's Guide) or drop it onto an empty area in the EViews frame window to create a new workfile (see WFOPEN command).


At the end of the import, you also have the option to link the data back to the source spreadsheet. This will allow you to easily refresh the data in the workfile, whenever the source Excel data has changed (see WFREFRESH).

By default, EViews will try to read in objects by column and will look for a single header row for the object names. In addition, EViews can transpose the data before import if your objects are defined in rows instead.

In the other direction, you can save EViews workfiles directly to an Excel file by going to File –> Save As, then selecting the proper Excel type in the Save as type dropdown (see WFSAVE command and Exporting Data in our User's Guide).

Note: Reading the newer Excel .XLSX file format was added in EViews 7. Saving in .XLSX format was added in EViews 8.
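
Equivalently, both operations can be driven from the EViews command line (a minimal sketch; the file paths are hypothetical, and we assume the default import settings):

' open an Excel file directly as a new workfile
wfopen "c:\data\mydata.xlsx"
' ... work with the data ...
' save the workfile back out in .xlsx format
wfsave(type=excelxml) "c:\data\mydata_out.xlsx"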



Using the EViews Excel Add-In

EViews also offers an Excel Add-In that can be used within Excel to read and link to EViews data residing in EViews file formats. The Excel add-in is installed by default with each EViews installation and can be seen in Excel's ADD-INS ribbon tab.


In the EViews group box, clicking on Get Data will allow you to select an EViews source file. Once selected, you'll be presented with a list of objects from that file.


Once you've selected the objects you want, you can click Import & Link to not only read in the data to your spreadsheet but also have it linked back to the source. This will allow you to refresh the data in your Excel file whenever the EViews data has been updated (see The Excel Add-In in our User's Guide for more details).

As a side note, our Excel add-in actually performs its work by making use of Excel's built-in support for OLEDB data sources. We created a read-only EViews OLEDB provider for this purpose. If you're familiar with using OLEDB providers with Excel, you can bypass our add-in altogether and just use our OLEDB provider directly (see Microsoft's documentation on Connect OLE DB data to your workbook). For more technical details on using our OLEDB provider, please see our whitepaper (PDF).

Object Linking and Embedding (OLE) Support

EViews also supports Microsoft's Object Linking and Embedding (OLE) technology for various EViews elements such as data tables and graphs.

In the past, you could always copy an EViews graph to the clipboard and then paste it into Excel as a static picture. But if you wanted to update that image (maybe to change line colors or even the graph type), you would have to redo the entire copy and paste operation from the beginning.

Now with OLE support, you can paste the graph as an EViews object instead of a static image, which saves the actual EViews graph object (along with all relevant data) directly into your spreadsheet. This embedded object now exists separately from the workfile and represents a snapshot of the graph at that point in time, similar to a static image. But unlike a static image, you can double-click the image to open it in EViews and change its attributes, which will be reflected automatically in your spreadsheet.

You can also go one step further and paste the EViews graph as a linked OLE object instead. As a linked object, Excel will keep track of where the object originally came from and will allow you to quickly update the object from the source workfile upon request. This means any changes made to the object in the source workfile (perhaps by another EViews user) can be reflected in your spreadsheet easily and quickly.

Note: EViews OLE objects can be used in other Office applications that support it, including Microsoft Word and PowerPoint.

For more details on using EViews OLE, please see our recent OLE blog post.

For details on using OLE with Microsoft Office, see their documentation on Create, edit, and control OLE objects.

Using Excel Macros (VBA)

For the ultimate in control and flexibility, EViews also offers a COM automation interface that can be used by the Excel macro language (VBA) to talk directly to EViews. With it, you can write an Excel macro to send and retrieve data and perform almost any EViews task via EViews commands.

For instance, you can write an Excel macro that:
  1. launches a new instance of EViews
  2. sends Excel data from a cell range to series objects in a new EViews workfile
  3. runs various EViews commands on the data
  4. retrieves EViews results back into an Excel cell range
all while EViews remains hidden from view.

For Excel VBA examples and more details on using our COM Automation interface, please see our whitepaper (PDF).

For a quick introduction to writing Excel VBA macros, see Microsoft's documentation on Getting Started with VBA.

An Application of Data Filtering: Extracting Super Cycles in Commodity Prices

Guest post by Daniel L. Jerrett, Ph.D., and Abdel M. Zellou, Ph.D.

EViews offers numerous techniques to filter time series, including the Hodrick-Prescott filter as well as various band-pass filters.

This article describes an application of one of these filtering techniques, namely the asymmetric Christiano-Fitzgerald band-pass filter, to real oil prices in order to extract the cycle and trend components.

Super Cycles and Christiano Fitzgerald Band Pass Filter

There is a long-standing interest in commodity price dynamics, i.e. their trend, cycle and volatility (Cuddington et al. 2007, Cashin and McDermott 2002). Recently, a number of papers have focused on the super cycle hypothesis. A super cycle (SC) is a prolonged (decades-long) trend rise in real commodity prices. Heap (2005) and Cuddington and Jerrett (2008) define a super cycle as a cycle lasting 20 to 70 years (trough to trough) as an economy goes through structural transformation caused by industrialization and urbanization. This structural transformation is accompanied by increased demand for energy and metals commodities as the manufacturing sector expands. Historically, these periods of urbanization and industrialization have occurred in Europe during the Industrial Revolution in the 19th century, in the U.S. at the beginning of the 20th century, in Western Europe again during the reconstruction that followed the Second World War, in South-East Asia in the 1960s and finally in the BRIC¹ countries in the 1990s². The increase in demand for energy and metals commodities during these periods, combined with the delay for supply to catch up with the demand surge, created sustained periods of high commodity prices according to the super cycle hypothesis.

Trends and cycles have been widely studied in various subfields in economics. Seasonal fluctuations, business cycles (6 to 32 quarters), Kitchin inventory cycles (3-5 years), Juglar fixed investment cycles (7-11 years), Kuznets cycles applied to real estate and infrastructural investment (15 to 25 years), Bronson asset allocation cycles (around 30 years) and Kondratiev waves or “grand super cycles” (45 to 60 years) are among those that have received attention.

The “ideal” band-pass filter, which isolates only specified frequencies, uses an infinite number of leads and lags when calculating the filter weights from the underlying spectral theory. Of course, a finite number of leads and lags must be used in practice, so a truncation decision must be made. Using a larger number of leads and lags allows for more precise results, but renders unusable more observations at the beginning and the end of the sample. Baxter and King (1995) stress that a filter must be symmetric in terms of the number of leads and lags to avoid causing phase shift in the cycles of the filtered series. Baxter and King and Christiano and Fitzgerald (2003) develop alternative finite-sample approximations to the ideal symmetric filter. Christiano and Fitzgerald also derive asymmetric filters, which have the advantage that they allow us to compute cyclical components for all observations at the beginning and end of the data span. The cost, as Christiano and Fitzgerald show, is very minor phase shifting, at least in their applications.
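
For reference (the standard result from spectral theory), the ideal filter that passes only cycles with periods between $p_l$ and $p_u$ applies the two-sided moving average $c_t = \sum_{j=-\infty}^{\infty} b_{|j|}\, y_{t-j}$ with weights

$$b_0 = \frac{\omega_2 - \omega_1}{\pi}, \qquad b_j = \frac{\sin(j\omega_2) - \sin(j\omega_1)}{\pi j} \;\; (j \ge 1), \qquad \omega_1 = \frac{2\pi}{p_u},\; \omega_2 = \frac{2\pi}{p_l},$$

which is why truncation to a finite number of leads and lags is unavoidable in practice.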

Although Christiano and Fitzgerald are interested in business-cycle analysis, they also provide a couple of interesting macroeconomic applications of their symmetric and asymmetric filters for extracting lower frequency components of economic time series. The first involves an analysis of the Phillips curve relationship between unemployment and inflation in the short run versus the long run (that is, the high versus low frequency components). The second application examines the correlations between the low-frequency components of monetary growth and inflation. 

Cuddington and Jerrett (2008) are the first to apply band-pass filters to natural resource issues, including metal markets. Band-pass filters are well suited for our objective of attempting to measure super cycles in metals prices. One can define the range of cyclical periodicities that constitute super cycles, and then use the band-pass filter to extract those cyclical components. Given current interest in whether a new super cycle is emerging in the final years of the data sample, the asymmetric Christiano and Fitzgerald band-pass filter is especially useful because it allows one to calculate super-cycle components at the end of our data sample.

An Application to Crude Oil

Crude oil is the most traded commodity in the world. The asymmetric Christiano Fitzgerald band-pass filter has been applied to the real price of crude oil using EViews (see graph below).


The band-pass filter is available as a series Proc in EViews. To display the band-pass filter dialog, select Proc/Frequency Filter from the main series menu.

The first step is to select a filter type. There are three types: Fixed-length symmetric Baxter King, Fixed-length symmetric Christiano Fitzgerald, and Full-length asymmetric Christiano Fitzgerald.
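For reference, the same filter can be run from the command line via the bpf series proc. The sketch below assumes an annual real oil price series named oil_real and a 20-to-70-year super-cycle band; the option keywords are our best guess and should be checked against the Series::bpf entry in the Command Reference for your EViews version.

' full-length asymmetric Christiano Fitzgerald filter over a 20-70 year band
' (type= keyword assumed; series and output names are illustrative)
oil_real.bpf(type=full, low=20, high=70) oil_supercycle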


The asymmetric Christiano Fitzgerald filter is applied to the real price of crude oil. Two components of the crude oil price are extracted: the trend component (in green in the figure below) and the super-cycle component (in red in the figure below).


  • The trend component remains on its post-World War II course, with a positive slope averaging roughly 2% per year in real terms.
  • The crude oil super-cycle component peaked in 2010. This peak was barely detectable in the 2011 study, but appears to be confirmed in this 2015 update.
  • The current super cycle (SC3) is showing a similar duration to the previous one (SC2), which lasted 29 years from trough to trough. If SC3 has the same duration and amplitude, one could expect the trough to be reached around 2025, at a value of around $50/bbl in real terms (2015 dollars).
  • The super-cycle component should therefore continue to move downward over the next decade (through roughly 2025).
  • Super cycle EViews workfile
  • For more information and updates, visit Clear Future Consulting.

1The BRIC countries are Brazil, Russia, India and China.  The acronym was coined in 2001 by Jim O’Neill, an economist at Goldman Sachs; the later variant, BRICS, adds South Africa.

2These groups of countries are similar because of their economic transformation rather than geographical proximity.

Add-in Round Up for 2016 Q2/Q3


In this section of the blog, we provide a summary of the Add-ins that have been released or updated within the previous few months, and we announce the winner of our “Add-in of the Quarter” prize!

As a reminder, EViews Add-ins are additions to the EViews interface or command language written by our users or the EViews Development Team and released to the public. You can install Add-ins to your EViews by using the Add-ins menu from within EViews, or by visiting our Add-ins webpage.

We have nine new Add-ins from the last few months, including a number related to VAR analysis:
  1. ThSVAR
  2. FanChart
  3. Croston
  4. LocalIRFs
  5. Speccaus
  6. SIRF
  7. ConfCast
  8. URAll
  9. DMA

ThSVAR

The ThSVAR Add-in continues Davaajargal Luvsannyam's line of VAR-based Add-ins, and estimates Threshold Structural Vector Auto Regression (ThSVAR) models, such as those described by Balke (2000).

Unlike traditional structural VAR approaches, the ThSVAR allows a threshold variable that determines which of two regimes the structural contemporaneous relationship is in.

FanChart

The FanChart add-in creates Bank of England style fan charts from forecast distribution data.  More details can be found on our Fan Chart blog post.

Croston

The Croston method uses exponential smoothing techniques to forecast intermittent series (series with long runs of zeros, intermingled with sparse positive integers).  This add-in performs the Croston method in a simple fashion.
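In outline (our paraphrase of the method, not the add-in's documentation): Croston's method smooths the nonzero demand sizes $z$ and the inter-demand intervals $p$ separately, updating only in periods with nonzero demand, and forecasts the per-period demand rate as their ratio: $$\hat{z}_t = \alpha z_t + (1-\alpha)\hat{z}_{t-1}, \qquad \hat{p}_t = \alpha p_t + (1-\alpha)\hat{p}_{t-1}, \qquad \hat{y}_{t+1} = \hat{z}_t/\hat{p}_t$$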

LocalIRFs

The LocalIRFs Add-in, written by Eren Ocakverdi (Trubador on the EViews forums), performs impulse response analysis by the local projection method of Jordà (2005, 2009) on a previously estimated VAR model.

As well as providing the impulse response graphs and tables, the Add-in allows equality hypothesis tests on the responses.

Speccaus

Nicolas Ronderos' speccaus Add-in computes a frequency domain Granger causality test in the context of VAR models, as given in Breitung and Candelon (2006). 

SIRF

Another Davaajargal Luvsannyam Add-in related to VARs, SIRF computes scaled impulse responses of Structural Vector Auto Regressions. 

Although a rather simple Add-in, it provides powerful functionality to users who wish to create their own impulses for structural VARs.

ConfCast

One more from Davaajargal Luvsannyam (who has been busy!) to add to the extensive list of VAR based add-ins.  ConfCast performs conditional forecasting from a VAR model, allowing you to constrain the future values of the VAR's underlying series.

URAll

URAll, by Imadeddin Almosabbeh, solves the age-old problem of wanting to perform individual unit root tests on a large number of series at once.  The add-in allows you to specify the type of unit root test to run, then collates the output from each test into an easy-to-read table. Nifty!

DMA

The final Davaajargal Luvsannyam add-in (and one unrelated to VARs!) performs dynamic model averaging.

Model averaging is a rapidly expanding field in econometrics, with a broad consensus that averaging over different models is a better approach than simply choosing the single best model.

Although EViews (9 and above) has various model averaging techniques built in, dynamic model averaging is not yet among them.  This add-in addresses that shortcoming.



Quarterly Prize

The EViews Development Team has decided that the DMA Add-in contributed most significantly to the usage of EViews this quarter.

For more information on writing Add-ins, you can read the Add-in chapter of the online help or visit the Add-in writer’s forum.

If you would like to submit an Add-in, need more information on the Quarterly Prize, or have questions about writing Add-ins for EViews, please email support@eviews.com.

L1 Trend Filtering

Author and guest post by Eren Ocakverdi.


Extracting the trend of a time series is an important analytical task, as the trend depicts the underlying movement of the variable of interest. Had this so-called long-term component been known in advance, we would be able to foresee the variable's future course. In practice, however, several other factors (e.g. cycle, noise) influence the dynamics of a time-dependent variable.

The time path of a variable can be either deterministic (assuming the change in trend is constant) or stochastic (assuming the change in trend varies randomly around a constant). Estimation of a deterministic trend is straightforward, yet it often oversimplifies the data generating process. The assumption of a stochastic trend is usually a better fit to the observed behavior of many time series, as they tend to evolve with abrupt changes. Nevertheless, its estimation is difficult and can have serious implications due to the accumulation of past errors.


Kim et al. (2009) proposed the l1 trend filtering method, which produces trend estimates that are piecewise linear. In this method, changes in the slope of the estimated trend can be interpreted as abrupt changes or events in the underlying dynamics of the time series in question.
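Concretely (our summary of Kim et al. 2009), for data $y_t,\ t=1,\ldots,n$, the l1 trend estimate $x_t$ solves: $$\min_{x}\ \frac{1}{2}\sum_{t=1}^{n}{\left(y_t - x_t\right)^2} + \lambda\sum_{t=2}^{n-1}{\left|x_{t-1} - 2x_t + x_{t+1}\right|}$$ The $\ell_1$ penalty on the second differences forces the estimated trend to be piecewise linear, and larger values of the penalty parameter $\lambda$ yield fewer kinks.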

EViews add-in "l1filter" implements the primal-dual interior point method1 outlined in Kim et.al. (2009). The add-in is designed to work in series object and can also be run from command line. In that respect, it is similar to “hpfilter1s” add-in. For details, please see the information in the documentation that comes with the add-in.

The l1 trend filtering method can also be thought of as a segmented linear regression, where we fit a straight line on each segment. Note that this is different from breakpoint regression, as the latter does not force consecutive segments to take consistent values at the join points.

The use of the add-in is demonstrated below on a different data set from that of Kim et al. (2009), in order to avoid slow computing times2. A low-frequency macroeconomic time series is chosen for the purpose: the Monthly Industrial Production Index (2010=100) of the Turkish economy between January 1986 and August 20163.

Import the data into EViews4:
import .\tripi.xls range=mevsim_1986!$B$5:$N$37 byrow colhead=1 namepos=all na="#N/A" names=(, , , , , , , , , , , , , , , , , , , , , , , , , , , , , 2013, 2014, 2015, 2016) @freq U 1 @smpl @all

Restructure the workfile:
pagestack num? @ *?  *
pagestruct(freq=m, start=1986, end=2016)

Get rid of unnecessary series:
delete id01 series01 var01 year

Change the name of dependent variable:

rename num ipitr_adj



Plot the series:
ipitr_adj.line

Estimate a deterministic trend:
equation eqdet.ls ipitr_adj c @trend


Visualize the fit:
eqdet.resids

Estimate a stochastic trend (i.e. Hodrick-Prescott filter5):
freeze(graph_hp) ipitr_adj.hpf(lambda=14400) trend_hp


Estimate l1trend:

ipitr_adj.l1filter(trend=trend_l1,lambda=100,draw)
Note that, in addition to the penalty parameter, the chart title also includes the maximum lambda value that would produce a straight line (i.e. an affine fit), as well as the resulting status of the iterations.
We can compare two estimated trends:
graph graph_comp.line trend_hp trend_l1
graph_comp.draw(shade, bottom) 2008m01 2010m12 'shade the period of interest

Note the relatively smooth transition of the HP filter around the global financial crisis of 2008, and the V-shaped recovery pattern captured (more accurately) by the l1 filter.
The l1 trend is also more robust to end-point sensitivity:
smpl @first 2016m07 'drop the last observation
ipitr_adj.hpf(lambda=14400) trend_hpb
ipitr_adj.l1filter(trend=trend_l1b,lambda=100)
line trend_l1-trend_l1b trend_hp-trend_hpb
smpl @all


Another useful practical application of the l1 filter is the identification of breakpoints (i.e. kink points) in a given time series, since the filter estimates a piecewise linear trend whose consecutive segments agree at their join points. EViews already offers breakpoint estimation, which does not impose such consistency when used to identify breaks in the series of interest. Although Bai and Perron (2003) make some very useful and practical recommendations, deciding the number and type of breaks is still an art as much as a science. In this case, global maximization approaches seem to work better, as they identify important turning points more successfully (with a margin of error, of course) in the history of the Turkish economy6:
equation eqbreak.breakls(method=globplus1,trim=5,size=1,heterr) ipitr_adj c @trend
string breaks = eqbreak.@breaks 'save the dates of breakpoints


Visualize the fit:

eqbreak.resids(g)

Save the fitted values:
eqbreak.fit trend_break
We can compare the results from both analyses:
line trend_break trend_l1

l1 filtering is a useful tool for both trend estimation and breakpoint detection in a time series. Although the add-in is limited to these purposes, the interior-point method developed by the authors can be extended to handle problems in areas such as univariate time series decomposition, outlier detection, regularized least squares, and multivariate time series analysis.


References
Bai, J. and Perron, P. (2003). "Computation and Analysis of Multiple Structural Change Models", Journal of Applied Econometrics, v. 18, pp. 1–22.
Hodrick, R. J. and Prescott, E. C. (1997). "Postwar U.S. Business Cycles: An Empirical Investigation", Journal of Money, Credit and Banking, Vol. 29, pp. 1-16.

Kim, S-J., Koh, K., Boyd, S. and Gorinevsky, D.  (2009). "L1 Trend Filtering", SIAM Review, Vol. 51(2), pp. 339-360.




1 Please visit authors’ web page for details: http://web.stanford.edu/~boyd/papers/l1_trend_filter.html
2 The original algorithm exploits a sparse matrix structure, a functionality that EViews does not have (yet).
3 Officially adjusted for calendar and seasonal effects: http://www.turkstat.gov.tr/PreTablo.do?alt_id=1024
4 You can download the data from: http://www.turkstat.gov.tr/PreIstatistikTablo.do?istab_id=2418
5 In order to reflect the usual practice in macroeconomic time series modelling, the default lambda value of the HP filter is used here (Hodrick and Prescott, 1997). In the original study, however, the authors used the same lambda for both the l1 and HP filters; the difference in the results would become much more pronounced if we did so.
6 Turkey experienced major economic crises of domestic origin in 1994 and 2001, and the Asian crisis hit the Turkish economy during 1998, not to mention the impact of the global financial crisis of 2008. At the end of 2010, the Central Bank of Turkey announced a new (unconventional) monetary policy for the sake of financial stability, introducing the interest rate corridor framework from 2011 onwards.

EViews Add-In: Importing Ken French’s Data Library

Background
The frenchdata add-in is designed to make it easier and faster to download data from Ken French's data library. The data in the library are in zipped *.txt or *.csv files, many with multiple data sets and mixed date formats that can be tedious to import. This add-in, in contrast, is straightforward and requires minimal input. After downloading and processing, each file is put in a separate workfile, multiple datasets in a single file are separated with each one put in a separate page, data columns are put into series, and date formats are read from the files and applied to the page(s) of the workfile.

How to Use: GUI
After installation, go to the Add-ins menu and choose “Ken French's data library.” A small window will appear listing the data files that can be downloaded. Choose the desired files (multiple files may be chosen), then press the OK button (see figure below).



EViews will automatically process the data in each file, assigning dates and splitting up multiple sets of data that may be in the same file. The four “International Research Returns Data” contain multiple files, each of which will be placed in a separate workfile.

How to Use: Command line

frenchdata """name1""""name2"""

Fetch and process data from Ken French's data library.

Parameters
" ""name1"" ""name2"" ""..."" ": names of files, each enclosed in (escaped) double quotes
Returns
series: Series corresponding to data columns in file(s). Multiple datasets in a file go in separate pages in the workfile. Multiple files go in separate workfiles.

Examples
frenchdata """5 Industry Portfolios"""

frenchdata """Fama/French 3 Factors""""Fama/French 3 Factors [Weekly]"""

AutoRegressive Distributed Lag (ARDL) Estimation. Part 1 - Theory

One of our favorite bloggers, Dave Giles, often writes about current trends in econometric theory and practice. One of his most popular topics is ARDL modeling, and he has a number of fantastic posts about it.

Since we have recently updated ARDL estimation in EViews 9.5, and are in the midst of adding some enhanced features to ARDL for the next version of EViews, EViews 10, we thought we would jot down our own thoughts on the theory and practice of ARDL models, particularly in regard to their use as a cointegration test.

This blog post will be in three parts. The first will discuss the theory behind ARDL models, the second will present the theory behind correct inference in the Bounds test, and the third will bring everything together with an example in EViews.



Overview

ARDL models are linear time series models in which both the dependent and independent variables are related not only contemporaneously, but across historical (lagged) values as well. In particular, if $y_t$ is the dependent variable and $x_1, \ldots, x_k$ are $k$ explanatory variables, a general ARDL$(p,q_1,\ldots,q_k)$ model is given by: \begin{align} y_t = a_0 + a_1t + \sum_{i=1}^p{\psi_i y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=0}^{q_j}{\beta_{j,l_j}x_{j,t-l_j}} + \epsilon_t \label{eq.ardl.1} \end{align} where $\epsilon_t$ are the usual innovations, $a_0$ is a constant term, and $a_1, \psi_i,$ and $\beta_{j,l_j}$ are respectively the coefficients associated with a linear trend, lags of $y_t$, and lags of the $k$ regressors $x_{j,t}$ for $j=1,\ldots k$. Alternatively, let $L$ denote the usual lag operator and define $\psi(L)$ and $\beta_j(L)$ as the lag polynomials: $$\psi(L) = 1 - \sum_{i=1}^{p}{\psi_iL^i} \quad \text{and} \quad \beta_j(L) = \sum_{l_j=0}^{q_j}{\beta_{j,l_j}L^{l_j}}$$ Then, equation (\ref{eq.ardl.1}) above can also be written as: \begin{align} \psi(L)y_t = a_0 + a_1t + \sum_{j=1}^{k}{\beta_{j}(L)x_{j,t}} + \epsilon_t \label{eq.ardl.1.lag} \end{align} Although ARDL models have been used in econometrics for decades, they have gained popularity in recent years as a method of examining cointegrating relationships. Two seminal contributions in this regard are Pesaran and Shin (1998, PS(1998)) and Pesaran, Shin and Smith (2001, PSS(2001)). In particular, they argue that ARDL models are especially advantageous in their ability to handle cointegration with inherent robustness to misspecification of integration orders of relevant variables. In this regard, we have three cases of interest:


  • All variables are I$(d)$ for some $0\leq d$ and are not cointegrated -- fractional orders of integration are in principle also possible. Here one can use familiar least squares techniques to estimate and interpret equation (\ref{eq.ardl.1}) in levels when $d=0$ and in appropriate differences when $d>0$.
  • All variables are I$(1)$ and are cointegrated. Here one can:
    • use least squares to estimate the cointegrating (long-run) relationship by regressing $y_t$ on $x_{j,t}$ for $j=1,\ldots k$ in levels; and/or,
    • use least squares to estimate the speed of adjustment of short-run dynamics to the cointegrating relationship by regressing the appropriate error-correction model (ECM).
  • Some variables are I$(0)$, others are I$(1)$, and amongst the latter, some are cointegrated.

It is precisely in this last case where traditional cointegration methodologies of Engle-Granger (1987), Phillips and Ouliaris (1990) or Johansen (1995), typically fail since all variables need to have identical orders of integration, usually I$(1)$. This requires pre-testing for the presence of a unit root in each of the variables under consideration, which is clearly subject to misclassification, particularly since unit root tests are known to suffer size and power problems in many cases of interest; see Perron and Ng (1996).

Alternatively, the PSS(2001) bounds test for cointegration is not subject to such limitations and readily accommodates the nuances of the third case. The test is in fact a parameter significance test on the long-run variables in the ECM of the underlying vector autoregression (VAR) model, and works when all or some variables are I$(0)$, I$(1)$, or even mutually cointegrated. Since there exists a one-to-one correspondence between an ECM of a VAR model and an ARDL model (see Banerjee et al., 1993), and since ARDL models are estimated and interpreted using familiar least squares techniques, ARDL models are the de facto standard of estimation when one chooses to remain agnostic about the orders of integration of the underlying variables. It is precisely in this regard that the ARDL methodology shines.

Specification

Although the general ARDL model is specified in (\ref{eq.ardl.1}), there exist three alternative representations. While all three can be used for parameter estimation, the first is typically used for estimating intertemporal dynamics, the second for post-estimation derivation of the long-run (equilibrium) relationship, and the third is the reduction of equation (\ref{eq.ardl.1}) to the conditional error correction (CEC) representation used in the PSS(2001) bounds test; see Banerjee et al. (1993).

All three representations require some preliminary results. Using principles underlying the famous Beveridge-Nelson decomposition, recall that $\psi(L)$ and $\beta_j(L)$ can always be decomposed as: \begin{align*} \psi(L) = \psi(1) + (1-L)\widetilde{\psi}(L) \quad \text{and} \quad \beta_j(L) = \beta_j(1) + (1-L)\widetilde{\beta}_j(L) \end{align*} where \begin{align*} \widetilde{\psi}(L) = \sum_{i=0}^{p-1}{\widetilde{\psi}_{i}L^{i}} &\quad \text{and} \quad \widetilde{\psi}_i = -\sum_{r=i+1}^{p}\psi_r\\ \widetilde{\beta}_j(L) = \sum_{l_j=0}^{q_j-1}{\widetilde{\beta}_{j,l_j}L^{l_j}} &\quad \text{and} \quad \widetilde{\beta}_{j,l_j} = -\sum_{s=l_j+1}^{q_j}\beta_{j,s} \end{align*} and $$\psi(1) = 1 - \sum_{i=1}^{p}{\psi_i} \quad \text{and} \quad \beta_j(1) = \sum_{l_j=0}^{q_j}{\beta_{j,l_j}}$$ Next, note that $\psi(L) = 1 - \psi^\star(L)$ where $\psi^\star(L) = \sum_{i=1}^{p}{\psi_iL^i}$. Furthermore, observe that: $$ \psi^\star(L) = \sum_{i=1}^{p}{\psi_iL^i} = \left(\sum_{i=1}^{p}{\psi_iL^{i-1}}\right)L = \left(\psi^\star(1) + (1-L)\widetilde{\psi^\star}(L)\right)L $$ where $\widetilde{\psi^\star}(L) = \sum_{i=1}^{p-1}{\widetilde{\psi^\star}_{i}L^{i-1}}$, $\widetilde{\psi^\star}_i = -\sum_{r=i+1}^{p}\psi_r$, and $\psi^\star(1) = \sum_{i=1}^{p}{\psi_i}$. Finally, note that for any series $z_t$ one can always write: $$z_t = z_{t-1} + \Delta z_t$$

First Representation: (Intertemporal Dynamics Regression)

The typical starting point for most ARDL applications is the estimation of intertemporal dynamics. In this form, one is interested in estimating the relationship between $y_t$ and both its own lags as well as the contemporaneous and lagged values of the $k$ regressors $x_{j,t}$. This is in fact the basis of the ARDL model studied in PS(1998). In particular, we cast equation (\ref{eq.ardl.1}) into the following representation: \begin{align} y_t &= a_0 + a_1t + \sum_{i=1}^p{\psi_i y_{t-i}} + \sum_{j=1}^{k}{\beta_j(L)x_{j,t}} + \epsilon_t \notag\\ &= a_0 + a_1t + \sum_{i=1}^p{\psi_i y_{t-i}} + \sum_{j=1}^{k}{\left(\beta_j(1) + (1-L)\widetilde{\beta}_j(L)\right)x_{j,t}} + \epsilon_t \notag\\ &= a_0 + a_1t + \sum_{i=1}^p{\psi_i y_{t-i}} + \sum_{j=1}^{k}{\beta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.2} \end{align} where we use the first difference notation $\Delta = (1-L)$. Since equation (\ref{eq.ardl.2}) does not explicitly solve for $y_t$, it is typically interpreted as a regression for intertemporal dynamics. Of course, the model above uses theoretical coefficients, whereas in a practical regression setting it would be represented as: \begin{align} y_t &= a_0 + a_1t + \sum_{i=1}^p{b_{0,i} y_{t-i}} + \sum_{j=1}^{k}{b_{j}x_{j,t}} + \sum_{j=1}^{k}{\sum_{l_j=0}^{q_j-1}c_{j,l_j}\Delta x_{j,t-l_j}} + \epsilon_t \label{eq.ardl.2.reg} \end{align}
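As a hedged illustration, regression (\ref{eq.ardl.2.reg}) with $p=1$, $k=1$ and $q_1=1$ could be estimated directly in EViews by least squares (the series names are ours):

' ARDL(1,1) intertemporal dynamics regression: y on a constant, trend,
' its own first lag, the level of x, and the first difference of x
equation eq_ardl11.ls y c @trend y(-1) x d(x)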

Second Representation: (Post-Regression Derivation of Long-Run Dynamics)

The second representation is in essence an attempt to derive the long-run relationship between $y_t$ and the $k$ regressors. As such, the representation solves for $y_t$ in terms of $x_{j,t}$. In particular, starting from equation (\ref{eq.ardl.1}), note that: \begin{align*} \psi(L)\Delta y_t &= (1-L)\psi(L)y_t\\ &= (1-L) \left(y_t - \sum_{i=1}^p{\psi_i y_{t-i}}\right)\\ &= (1-L)\left(a_0 + a_1t + \sum_{j=1}^{k}{\beta_j(L)x_{j,t}} + \epsilon_t\right)\\ &= a_1 + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \Delta\epsilon_t \end{align*} Next, assuming $\psi(L)$ is in fact invertible, that is, the roots of the characteristic polynomial $1 - \sum_{i=1}^{p}{\psi_iz^i} = 0$ all fall outside the unit circle and a stable relationship between $y_t$ and $x_{1,t}, \ldots, x_{k,t}$ does indeed exist, it holds that: $$\Delta y_t = \psi^{-1}(L) \left(a_1 + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \Delta\epsilon_t\right)$$ Furthermore, noting that $\psi(L)y_t = \psi(1)y_t + \widetilde{\psi}(L)\Delta y_t$, rewrite equation (\ref{eq.ardl.1}) as follows: \begin{align*} \psi(1)y_t &= \psi(L)y_t - \widetilde{\psi}(L)\Delta y_t\\ \psi(1)y_t &= a_0 + a_1t + \sum_{j=1}^{k}{\beta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t}} - \widetilde{\psi}(L)\Delta y_t + \epsilon_t\\ &= a_0 + a_1t + \sum_{j=1}^{k}{\beta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t}} - \widetilde{\psi}(L)\psi^{-1}(L)\left( a_1 + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \Delta\epsilon_t \right) + \epsilon_t\\ &= a_0^\star + a_1t + \sum_{j=1}^{k}{\beta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\beta}^\star_j(L)\Delta x_{j,t}} + \epsilon_t^\star \end{align*} where \begin{align*} a_0^\star &= a_0 - \widetilde{\psi}(L)\psi^{-1}(L)a_1\\ &= a_0 - \widetilde{\psi}(1)\psi^{-1}(1)a_1\\ \widetilde{\beta}^\star_j(L) &= \widetilde{\beta}_j(L) - \widetilde{\psi}(L)\psi(L)^{-1}\beta_j(L)\\ \epsilon_t^\star &= \epsilon_t - \widetilde{\psi}(L)\psi(L)^{-1}\Delta\epsilon_t \end{align*} At last, the second representation is formulated as: \begin{align} y_t &= \psi^{-1}(1)\left( a_0^\star + a_1t + \sum_{j=1}^{k}{\beta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\beta}^\star_j(L)\Delta x_{j,t}} + \epsilon_t^\star \right) \notag\\ &= \alpha_0 + \alpha_1 t + \sum_{j=1}^{k}{\theta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\theta}_j(L)\Delta x_{j,t}} + \xi_t \label{eq.ardl.3} \end{align} where \begin{align*} \alpha_0 &= \psi^{-1}(1) \left(a_0 - \widetilde{\psi}(1)\alpha_1 \right)\\ \alpha_1 &= \psi^{-1}(1) a_1\\ \theta_j(1) &= \psi^{-1}(1)\beta_j(1)\\ \widetilde{\theta}_j(L) &= \psi^{-1}(1) \left(\widetilde{\beta}_j(L) - \widetilde{\psi}(L)\psi(L)^{-1}\beta_j(L)\right)\\ \xi_t &= \psi^{-1}(1)\left(\epsilon_t - \widetilde{\psi}(L)\psi(L)^{-1}\Delta\epsilon_t\right) \end{align*} From equation (\ref{eq.ardl.2}), we are typically interested in the long-run (trend) parameters captured by $\alpha_1$ and $\theta_j(1)$, for $j=1,\ldots,k$. In fact, given the one-to-one correspondence between the parameter estimates obtained in (\ref{eq.ardl.2}) and equation (\ref{eq.ardl.3}), having estimated the regression model (\ref{eq.ardl.2.reg}), one can use the parameter formulas above to derive estimates of the long-run parameters post-estimation. 
In particular, if $\widehat{a}_1,\widehat{b}_{0,1},\ldots,\widehat{b}_{0,p},\widehat{b}_1\ldots,\widehat{b}_k$ denote the relevant subset of estimated coefficients from the regression model (\ref{eq.ardl.2.reg}), then, a post-regression estimate of the long-run parameters is derived as follows: $$\widehat{\alpha}_1 = \frac{\widehat{a}_1}{1 - \displaystyle\sum_{i=1}^{p}{\widehat{b}_{0,i}}} \quad \text{and} \quad \widehat{\theta}_j(1) = \frac{\widehat{b}_j}{1 - \displaystyle\sum_{i=1}^{p}{\widehat{b}_{0,i}}}$$
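Continuing the hedged sketch above, the corresponding long-run estimate follows directly from the estimated coefficients (the indices below follow the ordering of the eq_ardl11 specification):

' theta_hat = b_1 / (1 - b_{0,1}): the coefficient on x divided by one
' minus the coefficient on y(-1)
scalar theta_hat = eq_ardl11.@coefs(4) / (1 - eq_ardl11.@coefs(3))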

Third Representation: (Conditional Error Correction Form and the Bounds Test)

The final representation is arguably the most interesting and one that typically receives the most attention in applied work. The objective here is to test for cointegration by reducing a typical VAR framework to its corresponding conditional error correction (CEC) form. As it happens to be, the CEC model of interest is in fact an ARDL model with a one-to-one correspondence with the model in (\ref{eq.ardl.2}). To see this, substitute the right hand side of equation (\ref{eq.ardl.1}) for $y_t$ in line 2 below. In particular: \begin{align} \Delta y_t &= y_t - y_{t-1} \notag\\ &= a_0 + a_1t + \psi^\star(L)y_t + \sum_{j=1}^{k}{\beta_j(L)x_{j,t}} + \epsilon_t - y_{t-1}\notag\\ &= a_0 + a_1t - y_{t-1} + \psi^\star(1)y_{t-1} + \widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\beta_j(L)x_{j,t}} + \epsilon_t \notag\\ &= a_0 + a_1t - \left(1 - \psi^\star(1)\right)y_{t-1} + \widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\beta_j(L)\left(x_{j,t-1} + \Delta x_{j,t}\right)} + \epsilon_t \notag\\ &= a_0 + a_1t - \psi(1)y_{t-1} + \widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\left(\beta_j(1) + (1-L)\widetilde{\beta}_j(L)\right)x_{j,t-1}} + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \notag\\ &= a_0 + a_1t - \psi(1)y_{t-1} + \sum_{j=1}^{k}{\beta_j(1)x_{j,t-1}} \notag\\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.4} \end{align} Equation (\ref{eq.ardl.4}) above is the CEC form derived from the ARDL model in equation (\ref{eq.ardl.1}). Rewriting this equation as: \begin{align} \Delta y_t &= a_0 + a_1t - \psi(1)\left( y_{t-1} - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}x_{j,t-1}}\right) \notag\\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \notag\\ &= a_0 + a_1t - \psi(1)EC_{t-1} \notag\\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.5} \end{align} it is readily verified that the error correction term, typically denoted as $EC_t$, is also the cointegrating relationship when $y_t$ and $x_{1,t},\ldots,x_{k,t}$ are cointegrated. In fact, PSS(2001) demonstrate that equation (\ref{eq.ardl.4}) is in fact (abstracting from differing lag values) the CEC of the VAR$(p)$ model: $$\pmb{\Phi}(L)(\pmb{z}_t - \pmb{\mu} - \pmb{\gamma}t) = \pmb{\epsilon}_t$$ where $\pmb{z}_t$ is a $(k+1)$-vector $(y_t,x_{1,t},\ldots, x_{k,t})^\top$ and $\pmb{\mu}$ and $\pmb{\gamma}$ are respectively the $(k+1)$-vectors of intercept and trend coefficients, and $\pmb{\Phi}(L) = \pmb{I}_{k+1} - \sum_{i=1}^{p}\pmb{\Phi}_iL^i$ is the $(k+1)$ square matrix lag polynomial. This is particularly important as the CEC is typically used as a platform for testing for the presence of cointegration.

Traditionally, the cointegration tests of Engle-Granger (1987), Phillips and Ouliaris (1990) or Johansen (1995) require all variables in the VAR to be I$(1)$. This requires a battery of pre-testing for the presence of a unit root in each of the variables under consideration, and is clearly subject to misclassification, particularly since unit root tests are known to suffer size and power problems in many cases of interest. In contrast, PSS(2001) propose a test for cointegration that is not only robust to whether variables of interest are I$(0)$, I$(1)$, or mutually cointegrated, but is significantly easier to implement as it only requires estimation and inferential procedures used in familiar least squares regressions. In this regard, PSS(2001) discuss the famous bounds test for cointegration as a test of parameter significance in the cointegrating relationship of the CEC model (\ref{eq.ardl.4}). In other words, the test is a standard $F$- or Wald test for the following null and alternative hypotheses: \begin{align*} H_0 &: \quad \psi(1) = \beta_1(1) = \cdots = \beta_k(1) = 0 \quad \text{(variables are not cointegrated)}\\ H_A &: \quad \psi(1) \neq 0 \;\text{ or }\; \beta_j(1) \neq 0 \text{ for at least one } j \quad \text{(variables are cointegrated)} \end{align*} Once the test statistic is computed, it is compared to two asymptotic critical values corresponding to the polar cases of all variables being purely I$(0)$ or purely I$(1)$. As such, these critical values lie in the lower and upper tails, respectively, of a non-standard mixture distribution involving integral functions of Brownian motions. When the test statistic is below the lower critical value, one fails to reject the null and concludes that cointegration is not possible. In contrast, when the test statistic is above the upper critical value, one rejects the null and concludes that cointegration is indeed possible. In either of these two cases, knowledge of the cointegrating rank is not necessary. Alternatively, should the test statistic fall between the lower and upper critical values, testing is inconclusive, and knowledge of the cointegrating rank is required to proceed further.
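To make the mechanics concrete, here is a hedged sketch (not EViews' built-in ARDL bounds test view) of a Case III CEC for one regressor, estimated by least squares, with the bounds statistic computed as a Wald test on the lagged level terms. The resulting $F$-statistic must be compared against the PSS(2001) or Narayan (2005) bounds, not against standard $F$ critical values:

' conditional error correction regression for the Case III model below,
' with one lagged difference of each variable (series names are ours)
equation eq_cec.ls d(y) c y(-1) x(-1) d(y(-1)) d(x(-1)) d(x)
' joint test b0 = b1 = 0 on the level terms y(-1) and x(-1)
eq_cec.wald c(2)=0, c(3)=0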

We also remark here that Narayan (2005) argues that the asymptotic critical values presented in PSS(2001) are often unrealistic for practical implementations since they are derived for sample sizes of $T=1000$. Accordingly, Narayan (2005) presents critical values for sample sizes ranging from $T=30$ to $T=80$ in increments of 5, which ought to improve inferential reliability in most finite sample settings. EViews 10 will offer the user a choice of whether to use the PSS(2001) or the Narayan (2005) critical values.

Here it is also important to highlight that PSS(2001) offer five alternative interpretations of the CEC model (\ref{eq.ardl.4}), distinguished by whether deterministic terms integrate into the error correction term. When deterministic terms contribute to the error correction term, they are implicitly projected onto the span of the cointegrating vector. In other words, $\pmb{\mu}$ and $\pmb{\gamma}$ of the VAR$(p)$ model are restricted to a linear combination of the elements in the cointegrating vector. This clearly implies that $a_0$ and $a_1$ in equation (\ref{eq.ardl.4}) must too be similarly restricted. Below are summaries of the theoretical (DGP) and practical regression (REG) models, respectively, for each of the five interpretations along with the appropriate cointegrating relationship ($EC_t$) and the bounds test null-hypothesis ($H_0$).

Case 1: (No Constant and No Trend): $a_0 = a_1 = 0$, that is, ($\pmb{\mu} = \pmb{\gamma} = \pmb{0}$)

DGP:\begin{align*} \Delta y_t &=-\psi(1)y_{t-1} + \sum_{j=1}^{k}{\beta_j(1)x_{j,t-1}}\\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \\ EC_t &= y_{t} - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}x_{j,t}} \\ H_0 &:\quad \psi(1) = \beta_j(1) = 0, \quad \forall j \end{align*} Regression:\begin{align} \Delta y_t &= b_0y_{t-1} + \sum_{j=1}^{k}{b_{j}x_{j,t-1}} \notag\\ &+ \sum_{i=1}^{p-1}{c_{0,i}\Delta y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=1}^{q_j-1}{c_{j,l_j}\Delta x_{j,t-l_j}} + \sum_{j=1}^{k}{d_{j}\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.6}\\ EC_t &= y_{t} - \sum_{j=1}^{k}{\frac{b_j}{b_0}x_{j,t}} \notag\\ H_0 &: \quad b_0 = b_j = 0, \quad \forall j \notag \end{align}
Case 2: (Restricted Constant and No Trend): $a_0 = \psi(1)\mu_y + \sum_{j=1}^{k}\beta_j(1)\mu_{x_j}$ and $a_1 = 0$ so that $\pmb{\gamma} = \pmb{0}$

DGP:\begin{align*} \Delta y_t &=-\psi(1)\left(y_{t-1} - \mu_y\right) + \sum_{j=1}^{k}{\beta_j(1)\left(x_{j,t-1} - \mu_{x_j}\right)} \\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \\ EC_t &= y_{t} - \mu_y - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}\left(x_{j,t} - \mu_{x_j}\right)} \\ H_0 &:\quad \psi(1) = \beta_j(1) = 0, \quad \forall j \end{align*} Regression:\begin{align} \Delta y_t &= a_0 + b_0y_{t-1} + \sum_{j=1}^{k}{b_{j}x_{j,t-1}} \notag\\ &+ \sum_{i=1}^{p-1}{c_{0,i}\Delta y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=1}^{q_j-1}{c_{j,l_j}\Delta x_{j,t-l_j}} + \sum_{j=1}^{k}{d_{j}\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.7}\\ EC_t &= y_{t} - \sum_{j=1}^{k}{\frac{b_j}{b_0}x_{j,t}} - \frac{a_0}{b_0} \notag\\ H_0 &: \quad a_0 = b_0 = b_j = 0, \quad \forall j \notag \end{align}
Case 3: (Unrestricted Constant and No Trend): $a_0 \neq 0$ and $a_1 = 0$ so that $\pmb{\mu} \neq \pmb{0}$ and $\pmb{\gamma} = \pmb{0}$

DGP:\begin{align*} \Delta y_t &=a_0 -\psi(1)y_{t-1} + \sum_{j=1}^{k}{\beta_j(1)x_{j,t-1}} \\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \\ EC_t &= y_{t} - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}x_{j,t}} \\ H_0 &:\quad \psi(1) = \beta_j(1) = 0, \quad \forall j \end{align*} Regression:\begin{align} \Delta y_t &= a_0 + b_0y_{t-1} + \sum_{j=1}^{k}{b_{j}x_{j,t-1}} \notag\\ &+ \sum_{i=1}^{p-1}{c_{0,i}\Delta y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=1}^{q_j-1}{c_{j,l_j}\Delta x_{j,t-l_j}} + \sum_{j=1}^{k}{d_{j}\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.8}\\ EC_t &= y_{t} - \sum_{j=1}^{k}{\frac{b_j}{b_0}x_{j,t}} \notag\\ H_0 &: \quad b_0 = b_j = 0, \quad \forall j \notag \end{align}
Case 4: (Unrestricted Constant and Restricted Trend): $a_0 \neq 0$ so that $\pmb{\mu} \neq \pmb{0}$ and $a_1 = \psi(1)\gamma_y + \sum_{j=1}^{k}\beta_j(1)\gamma_{x_j}$

DGP:\begin{align*} \Delta y_t &=a_0 - \psi(1)\left(y_{t-1} - \gamma_y t\right) + \sum_{j=1}^{k}{\beta_j(1)\left(x_{j,t-1} - \gamma_{x_j}t\right)} \\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \\ EC_t &= y_{t} - \gamma_yt - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}\left(x_{j,t} - \gamma_{x_j}t\right)} \\ H_0 &:\quad \psi(1) = \beta_j(1) = 0, \quad \forall j \end{align*} Regression:\begin{align} \Delta y_t &= a_0 + a_1t + b_0y_{t-1} + \sum_{j=1}^{k}{b_{j}x_{j,t-1}} \notag\\ &+ \sum_{i=1}^{p-1}{c_{0,i}\Delta y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=1}^{q_j-1}{c_{j,l_j}\Delta x_{j,t-l_j}} + \sum_{j=1}^{k}{d_{j}\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.9}\\ EC_t &= y_{t} - \sum_{j=1}^{k}{\frac{b_j}{b_0}x_{j,t}} - \frac{a_1}{b_0}t \notag\\ H_0 &:\quad a_1 = b_0 = b_j = 0, \quad \forall j \notag \end{align}
Case 5: (Unrestricted Constant and Unrestricted Trend): $a_0 \neq 0$ and $a_1 \neq 0$ so that $\pmb{\mu} \neq \pmb{0}$ and $\pmb{\gamma} \neq \pmb{0}$

DGP:\begin{align*} \Delta y_t &=a_0 + a_1t - \psi(1)y_{t-1} + \sum_{j=1}^{k}{\beta_j(1)x_{j,t-1}} \notag\\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t\\ EC_t &= y_{t} - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}x_{j,t}}\\ H_0 &:\quad \psi(1) = \beta_j(1) = 0, \quad \forall j \end{align*} Regression:\begin{align} \Delta y_t &= a_0 + a_1t + b_0y_{t-1} + \sum_{j=1}^{k}{b_{j}x_{j,t-1}} \notag\\ &+ \sum_{i=1}^{p-1}{c_{0,i}\Delta y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=1}^{q_j-1}{c_{j,l_j}\Delta x_{j,t-l_j}} + \sum_{j=1}^{k}{d_{j}\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.10}\\ EC_t &= y_{t} - \sum_{j=1}^{k}{\frac{b_j}{b_0}x_{j,t}} \notag\\ H_0 &:\quad b_0 = b_j = 0, \quad \forall j \notag \end{align}
Whilst EViews 9.5 supports only the first four cases, EViews 10 will support all five.

References:

Banerjee, A., Dolado, J. J., Galbraith, J. W., and Hendry, D. (1993). Co-integration, Error Correction, and the Econometric Analysis of Non-stationary Data. Oxford University Press.
Engle, R. F. and Granger, C. W. (1987). Co-integration and error correction: representation, estimation, and testing. Econometrica, 55(2), 251--276.
Johansen, S. (1995). Likelihood-based Inference in Cointegrated Vector Autoregressive Models. Oxford University Press.
Narayan, P. K. (2005). The saving and investment nexus for China: evidence from cointegration tests. Applied Economics, 37(17), 1979--1990.
Perron, P. and Ng, S. (1996). Useful modifications to some unit root tests with dependent errors and their local asymptotic properties. The Review of Economic Studies, 63(3), 435--463.
Pesaran, M. H. and Shin, Y. (1998). An autoregressive distributed-lag modelling approach to cointegration analysis. Econometric Society Monographs, 31, 371--413.
Pesaran, M. H., Shin, Y., and Smith, R. J. (2001). Bounds testing approaches to the analysis of level relationships. Journal of Applied Econometrics, 16(3), 289--326.
Phillips, P. C. and Ouliaris, S. (1990). Asymptotic properties of residual based tests for cointegration. Econometrica, 58(1), 165--193.

Dynamic Factor Models in EViews

One of the current buzz topics in macro-econometrics is that of dynamic factor models. 

Factor models allow researchers to work with a large number of variables by reducing them to a handful of components (often just two), allowing tractable results to be obtained from unwieldy data.

A natural extension to factor models is to allow dynamics to enter the relationships.  These dynamic factor models have become extremely popular due to their ability to model business cycles, and perform both forecasting and nowcasting (predicting the current state of the economy).

Although EViews has built-in factor analysis, we do not (yet!) have dynamic factor models included. 

Luckily, two researchers from the Ministry of Finance in Sweden have recently posted a paper, and corresponding code, that estimates dynamic factor models in EViews with a simple programming subroutine utilising EViews' state-space estimation object.

This paper looks fantastic - good job guys!
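For intuition only, here is a bare-bones sketch of the general approach (ours, not the authors' code): two observed series loading on a single common AR(1) factor, written as an EViews state-space object and estimated by maximum likelihood.

' y1 and y2 are observed series; sv1 is the unobserved common factor
' the factor's innovation variance is fixed at one for identification
sspace ss_dfm
ss_dfm.append @signal y1 = c(1)*sv1 + [var = exp(c(3))]
ss_dfm.append @signal y2 = c(2)*sv1 + [var = exp(c(4))]
ss_dfm.append @state sv1 = c(5)*sv1(-1) + [var = 1]
ss_dfm.ml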

AutoRegressive Distributed Lag (ARDL) Estimation. Part 2 - Inference

This is the second part of our AutoRegressive Distributed Lag (ARDL) post. For Part 1, please go here, and for Part 3, please visit here.

In this post we outline the correct theoretical underpinning of the inference behind the Bounds test for cointegration in an ARDL model. Whilst the discussion is by its nature quite technical, it is important that practitioners of the Bounds test have a grasp of the background behind its inferences.



Overview


While the ARDL approach to cointegration is typically considered synonymous with the Pesaran, Shin, and Smith (2001) Bounds test for cointegration, in this post we emphasize that correct inference is in fact rooted in cointegration theory. In Part 1 of this series, we mentioned that the ARDL framework is a one-to-one reparameterization of the conditional error correction model (ECM) representation of the underlying vector auto-regression (VAR).

Recall that a VAR is a natural extension of the univariate autoregressive model to multivariate series, and is often interpreted as an autoregressive system-of-equations regression model with multiple endogenous variables. As such, it lends itself to the analysis of simultaneous interactions between variables -- namely, their short-run dynamics, but more importantly, their long-run (equilibrating) or cointegrating behaviour. In this regard, the vector error correction model (VECM), which is a reparameterization of the VAR to isolate the equilibrating relationships, if they exist, is of central importance. Nevertheless, like the VAR, the VECM models simultaneous interactions among several endogenous variables. However, applications in Economics typically ask:

How does one variable in the VAR behave conditional on all the others, which are themselves endogenously determined, and is there any cointegrating relationship among them?

In other words, we hope to derive a conditional ECM (CECM), which formalizes an ECM for some variable conditional on all the others, but at the same time isolates the cointegrating relationship among them. In this regard, we will demonstrate that the ARDL model is in fact a special case of the CECM. However, recall from Part 1 of this series that one of the major advantages of the ARDL model is its ability to estimate the long-run or cointegrating relationship. What we expound on here is that this estimate may not always be defined or sensible, and even if it is, it may be degenerate; that is, seemingly stable in the short run, but dissipating in the long run. It is here where the Bounds test comes into the limelight: it is a way of statistically detecting the presence of cointegration. The advantage of the procedure is that it uses the CECM (ARDL) as a platform. Thus, in estimating the CECM (ARDL), one can simultaneously test for cointegration and estimate the equilibrating relationship. Lastly, if cointegration does exist, one can estimate and conduct inference on the speed of convergence to equilibrium. The following flow-chart summarizes the steps:





Vector Auto-regression (VAR) and the Vector Error Correction Model (VECM)

Below we formalize a VAR model with $p$ lags, namely VAR$(p)$, introduced to econometrics by Sims (1980) and augmented here with the usual deterministic dynamics (intercept and trend). \begin{align} \pmb{\Phi}(L)(\pmb{z}_t - \pmb{\mu} - \pmb{\gamma}t) &= \pmb{\epsilon}_t \notag \\ \pmb{\Phi}(L)\pmb{z}_t &= \pmb{\Phi}(L)\pmb{\mu} + \pmb{\Phi}(L)\pmb{\gamma}t + \pmb{\epsilon}_t \label{eq.ardl.11} \end{align} where $\pmb{z}_t$ is a $(k+1)$-vector $(y_t,x_{1,t},\ldots, x_{k,t})^\top = (y_t,\pmb{x}^\top_t)^\top$ with $\pmb{x}_t = (x_{1,t},\ldots, x_{k,t})^\top$, $\pmb{\mu}$ and $\pmb{\gamma}$ are respectively the $(k+1)$-vectors of intercept and trend coefficients, $\pmb{\Phi}(L) = \pmb{I}_{k+1} - \sum_{i=1}^{p}\pmb{\Phi}_iL^i$ is the $(k+1)$ square matrix lag polynomial with $\pmb{I}_{k+1}$ the identity matrix of dimension $(k+1)$, and $\pmb{\epsilon}_t = (\epsilon_{yt}, \pmb{\epsilon}_{xt}^\top)^\top$ is the vector of innovations. We complete the setup with the following assumptions:


Assumption 1:

Individual Variables can be I$(0)$ or I$(1)$: The roots of $\det\left(\pmb{\Phi}(z)\right) = \det\left(\pmb{I}_{k+1} - \sum_{i=1}^{p}\pmb{\Phi}_iz^i\right) = 0$ satisfy either $|z|>1$ or $z=1$.


Assumption 2:

Variables are Correlated: The $(k+1)$-vector error process $\pmb{\epsilon}_t \sim N(\pmb{0}, \pmb{\Omega})$ with $\pmb{\Omega}$ positive definite.

Notice that Assumption 1 is the multivariate analogue of assumptions typically made for univariate AR$(p)$ processes. The assumption simply restricts $\pmb{z}_t$ to have at most one unit root in each of the series, and prevents the occurrence of seasonal and explosive roots. This allows $\pmb{z}_t$ to contain any combination of purely I$(1)$, purely I$(0)$, or mutually cointegrated variables. On the other hand, Assumption 2 restricts the errors to zero mean Gaussian processes with a covariance matrix $\pmb{\Omega}$ that allows variables in $\pmb{z}_t$ to be arbitrarily correlated. Under these assumptions, the VAR is in reduced form. This means that not only are all variables treated as endogenous, but any contemporaneous effects are exhibited through contemporaneous correlations in $\pmb{\Omega}$. While useful in its own right, a far more revelatory representation exists in the form of a vector error correction model (VECM).

Relying on the Beveridge-Nelson (BN) decomposition and some clever rearrangement, it is readily shown that the VECM representation of the VAR in (\ref{eq.ardl.11}) is: \begin{align} \Delta\pmb{z}_t &= \left(\pmb{\Phi}(1)\pmb{\mu} + \left(\sum_{i=1}^{p}i\pmb{\Phi}_i\right)\pmb{\gamma}\right) + \pmb{\Phi}(1)\pmb{\gamma}t - \pmb{\Phi}(1)\pmb{z}_{t-1} + \widetilde{\pmb{\Phi}}^{\star}(L)\Delta\pmb{z}_t + \pmb{\epsilon}_t \notag \\ &= \pmb{a}_0 + \pmb{a}_1t - \pmb{\Phi}(1)\pmb{z}_{t-1} + \widetilde{\pmb{\Phi}}^{\star}(L)\Delta\pmb{z}_t + \pmb{\epsilon}_t \label{eq.ardl.12} \end{align} such that \begin{align} \pmb{a}_0 = \pmb{\Phi}(1)\pmb{\mu} + \left(\sum_{i=1}^{p}i\pmb{\Phi}_i\right)\pmb{\gamma} \quad \text{and} \quad \pmb{a}_1 = \pmb{\Phi}(1)\pmb{\gamma} \label{eq.ardl.13} \end{align} In fact, several important remarks emerge from this construction.

  • The Cointegrating Matrix is $\pmb{\Phi}(1)$: If the original VAR variables in (\ref{eq.ardl.11}), namely $\pmb{z}_t$, are I$(1)$, all variables in the VECM are I$(0)$, except possibly for $\pmb{z}_{t-1}$. Since orders of integration must balance, $\pmb{\Phi}(1)\pmb{z}_{t-1}$ must be I$(0)$. Since a set of I$(1)$ variables is said to be cointegrated if there exists a linear combination of said variables which is I$(0)$, it is clear that $\pmb{\Phi}(1)\pmb{z}_t$ is the matrix of cointegrating relationships and $\pmb{\Phi}(1)$ is the cointegrating matrix. In Economics, the concept is often referred to as a long-run relationship, motivating the example that while prices -- which are frequently I$(1)$ variables -- can drift apart in the short-run, economic forces will eventually force them to equilibrium.

  • No Cointegration when $\pmb{\Phi}(1) = \pmb{0}$. Every variable in $\pmb{z}_t$ is I$(1)$: Recall that the rank of a matrix is the number of its linearly independent columns (or rows). The concept is frequently used in ordinary least squares (OLS) regression, and is typically exemplified using the dummy variable trap. In this regard, since $\pmb{\Phi}(1)$ is a $(k+1)$-square matrix, assume $\DeclareMathOperator{\rank}{\textbf{rk}}\rank\left(\pmb{\Phi}(1)\right) = r_z$, where $0 \leq r_z \leq (k+1)$, and $\rank(\cdot)$ denotes the rank operator. In other words, among the $(k+1)$ columns in $\pmb{\Phi}(1)$, only $r_z$ are linearly independent, and the ones which are not, are linear combinations of those $r_z$. Moreover, $r_z = 0$ if and only if $\pmb{\Phi}(1) = \pmb{0}_{(1+k)^2}$, where $\pmb{0}_{(1+k)^2}$ denotes the $(1+k)$-square matrix of zeros. When this is the case, the VECM reduces to: $$\Delta\pmb{z}_t = \pmb{a}_0 + \pmb{a}_1t + \widetilde{\pmb{\Phi}}^{\star}(L)\Delta\pmb{z}_t + \pmb{\epsilon}_t$$ Since all variables on the right-hand side (RHS) are I$(0)$, it follows that $\Delta\pmb{z}_t \sim \text{I}(0)$, and therefore $\pmb{z}_t \sim \text{I}(1)$. In other words, when $r_z = 0$, every variable in $\pmb{z}_t$ is I$(1)$, and since $\pmb{\Phi}(1) = \pmb{0}_{(1+k)^2}$, there are no cointegrating relationships.

  • No Cointegration when $\pmb{\Phi}(1)$ has full rank. Every variable in $\pmb{z}_t$ is I$(0)$: When $r_z = (k+1)$, $\pmb{\Phi}(1)$ has full column rank (i.e. all columns (rows) are linearly independent). In this particular case, $\DeclareMathOperator{\spann}{\textbf{sp}} \pmb{\Phi}(1)\pmb{z}_{t-1} = \spann{\left(\pmb{z}_{t-1}\right)}$, where $\spann(\cdot)$ denotes the span -- the space of all unique linear combinations of $\pmb{z}_t$. This implies $\Delta \pmb{z}_t$ can be uniquely written as a linear combination of all variables in $\pmb{z}_t$, namely $\pmb{\Phi}(1)\pmb{z}_{t-1}$, plus the remaining deterministic and stationary ones. Since $\Delta \pmb{z}_t \sim \text{I}(0)$, this is only sensible when every variable in $\pmb{z}_t \sim \text{I}(0)$, and cointegration is not possible.

  • VECM Estimates Speed of Convergence to Equilibrium: A classical result in linear algebra is that for any $m\times m$ matrix $\pmb{M}$ with rank $r$, there exist $m \times r$ matrices $\pmb{A}$ and $\pmb{B}$ such that $\pmb{M} = \pmb{AB}^\top$, where $\pmb{B}$ consists of the $r$ linearly independent columns of $\pmb{M}$. Thus, we can always write $\pmb{\Phi}(1) = \pmb{AB}^\top$, where $m=(1+k)$. More importantly, it implies that $\pmb{A}$ measures the rate of convergence to equilibrium. To see this, recall that if $\pmb{z}_t$ is cointegrated, then $\pmb{\Phi}(1)\pmb{z}_{t-1} \sim \text{I}(0)$. We can therefore factorize the cointegrated relationships as $\pmb{\Phi}(1)\pmb{z}_{t-1} = \pmb{A}\pmb{B}^\top \pmb{z}_{t-1} = \pmb{A}\pmb{\zeta}_{t-1}$ where $\pmb{\zeta}_{t-1}$ is a mean zero I$(0)$ process. This is because the cointegrating relationships are now captured by $\pmb{B}^\top \pmb{z}_{t-1}$. Observe further that when the system is in actual equilibrium, $\pmb{B}^\top \pmb{z}_{t-1} = \pmb{0}_{1+k}$, where $\pmb{0}_{1+k}$ is $(1+k)$-vector of zeros. This is because equilibrium requires not only stability, which follows from the stationarity of $\pmb{B}^\top \pmb{z}_{t-1}$, but also constancy, which manifests only when accumulated short-run dynamics $\widetilde{\pmb{\Phi}}^{\star}(L)\Delta\pmb{z}_t$, and the shocks to $\pmb{B}^\top \pmb{z}_{t-1}$, namely, $\pmb{\zeta}_{t-1}$, are zero as well. Accordingly, if the system was in equilibrium in the previous period, any current deviations from this state, namely $\Delta \pmb{z}_t$, must arise from systematic shocks $\pmb{\epsilon}_t$, where we assume $\pmb{a}_0 = \pmb{a}_1 = \pmb{0}_{1+k}$ for simplicity. Alternatively, when the system is in disequilibrium, $\pmb{B}^\top \pmb{z}_{t-1} = \pmb{\zeta}_{t-1} \neq \pmb{0}_{1+k}$. Thus, when $\pmb{B}^\top \pmb{z}_{t-1} < \pmb{0}_{1+k}$ $(\pmb{B}^\top \pmb{z}_{t-1} > \pmb{0}_{1+k})$, the impact on $\Delta\pmb{z}_t$ is of magnitude $\pmb{A}$ and positive (negative), since $\pmb{\Phi}(1)$ enters the VECM with a negative sign. In other words, $\Delta\pmb{z}_t$ adjusts toward equilibrium in the opposite direction to disequilibrium by a proportion equal to $\pmb{A}$.

  • Cointegrating Relationships Include Constants and Trends: We have outlined this in Part 1 of this series. The restrictions in (\ref{eq.ardl.13}) indicate that $\pmb{a}_0$ and $\pmb{a}_1$ are linear functions of $\pmb{\Phi}(1)$. As such, they span the $r_z$ linearly independent columns of the cointegrating matrix $\pmb{\Phi}(1)$, and by extension, the cointegrating equation. This distinguishes the 5 data generating processes (DGPs) considered in Pesaran, Shin, and Smith (2001) and outlined in Part 1 of this series.

    • Case I: $\pmb{\mu} = \pmb{\gamma} = \pmb{0}$ which implies $\pmb{a}_0 = \pmb{a}_1 = 0$. Accordingly, the VECM (\ref{eq.ardl.12}) reduces to: $$\Delta \pmb{z}_t = -\pmb{\Phi}(1)\pmb{z}_{t-1} + \widetilde{\pmb{\Phi}}^{\star}(L)\Delta\pmb{z}_t + \pmb{\epsilon}_t$$
    • Case II: $\pmb{\mu} \neq \pmb{0}$, $\pmb{\gamma} = \pmb{0}$, and the restriction in (\ref{eq.ardl.13}) is imposed. This implies that $\pmb{a}_0 = \pmb{\Phi}(1)\pmb{\mu}$ and $\pmb{a}_1 = 0$. Accordingly, the VECM is just: $$\Delta \pmb{z}_t = -\pmb{\Phi}(1)\left(\pmb{z}_{t-1} - \pmb{\mu}\right) + \widetilde{\pmb{\Phi}}^{\star}(L)\Delta\pmb{z}_t + \pmb{\epsilon}_t$$
    • Case III: $\pmb{\mu} \neq \pmb{0}$, $\pmb{\gamma} = \pmb{0}$, and the restrictions in (\ref{eq.ardl.13}) are not imposed. This implies that $\pmb{a}_0 \neq 0$, $\pmb{a}_1 = 0$, while the VECM becomes: $$\Delta \pmb{z}_t = \pmb{a}_0 -\pmb{\Phi}(1)\pmb{z}_{t-1} + \widetilde{\pmb{\Phi}}^{\star}(L)\Delta\pmb{z}_t + \pmb{\epsilon}_t$$
    • Case IV: $\pmb{\mu},\pmb{\gamma} \neq \pmb{0}$, and the restrictions in (\ref{eq.ardl.13}) are imposed only on $\pmb{a}_1$. This implies that $\pmb{a}_0 \neq 0$ and $\pmb{a}_1 = \pmb{\Phi}(1)\pmb{\gamma}$. The VECM is now: $$\Delta \pmb{z}_t = \pmb{a}_0 -\pmb{\Phi}(1)\left(\pmb{z}_{t-1} - \pmb{\gamma}t\right) + \widetilde{\pmb{\Phi}}^{\star}(L)\Delta\pmb{z}_t + \pmb{\epsilon}_t$$
    • Case V: $\pmb{\mu},\pmb{\gamma} \neq \pmb{0}$, and the restrictions in (\ref{eq.ardl.13}) are not imposed. This implies that $\pmb{a}_0,\pmb{a}_1 \neq 0$ and the VECM is represented in (\ref{eq.ardl.12}).

Remember that the VECM is a reparameterization of a VAR. Accordingly, the VECM quantifies adjustments to equilibrium for all variables simultaneously. Nevertheless, economists, and other practitioners, are generally only interested in one particular variable as it relates to all others. For instance, in the present context, one could be interested in studying adjustments to equilibrium of $y_t$ in response to (conditioning on) the equilibrating paths of the remaining variables $\pmb{x}_t$. Moreover, the objective is only meaningful if, after conditioning on $\pmb{x}_t$, any implications on $y_t$ that would have emerged from the original VAR model, remain unchanged under the conditional one. The concept has a very important name in cointegration theory and is known as exogeneity; see Engle, Hendry, and Richard (1983) for a technical exposition. A natural way of ensuring the concept is to restrict the total number of cointegrating relationships between $y_t$ and $\pmb{x}_t$ to be one, and exactly one, irrespective of any cointegrating paths among the $\pmb{x}_t$ themselves. Should this be the case, $\pmb{x}_t$ are said to be weakly exogenous for any parameters in the equation for $y_t$.

Accordingly, deriving a model for $y_t$ conditional on $\pmb{x}_t$ requires:

  • Deriving an ECM for $y_t$, explicitly conditioning on all effects originating from $\pmb{x}_t$. Such a model must include not only the explicit effects of $\pmb{x}_t$ on $y_t$ stemming from the VAR matrix polynomial $\pmb{\Phi}(L)$, but also any and all contemporaneous relationships between $y_t$ and $\pmb{x}_t$ implicit within the covariance matrix $\pmb{\Omega}$ of the error vector $\pmb{\epsilon}_t$.

  • Ensuring that $\pmb{x}_t$ are weakly exogenous.
We turn to both these tasks next.


Conditional ECM (CECM)

To derive the conditional model, we first identify the conditional and marginal variables -- namely $y_t$ and $\pmb{x}_t$, respectively. Next, the DGP of $y_t$ is conditioned on the DGPs of the marginal variables $\pmb{x}_t$. Since any explicit relationships between $y_t$ and $\pmb{x}_t$ are clearly accounted for through $\pmb{\Phi}(L)$, any remaining conditioning proceeds on the covariance matrix $\pmb{\Omega}$. Naturally, making these relationships explicit requires a solution where the VAR is driven by a vector of innovations $\pmb{u}_t = \left(u_{yt},\pmb{\epsilon}^\top_{xt}\right)^\top$, where $\pmb{u}_t \sim N(\pmb{0},\pmb{\Sigma})$, and $\pmb{\Sigma}$ is diagonal. In other words, by virtue of Gaussianity, innovations are independent across $y_t$ and $\pmb{x}_t$. Notice that the cointegrating structure of $\pmb{x}_t$ remains unchanged here. Since to each VAR we associate a bijection into its VECM form, all operations can proceed directly on the VECM. In this regard, express (\ref{eq.ardl.12}) as follows: \begin{align} \begin{bmatrix} \Delta y_t\\ \Delta \pmb{x}_t \end{bmatrix} &= \begin{bmatrix} a_{y0}\\ \pmb{a}_{x0} \end{bmatrix} + \begin{bmatrix} a_{y1}\\ \pmb{a}_{x1} \end{bmatrix}t - \begin{bmatrix} \phi_{yy}(1) & \pmb{\phi}_{yx}(1)\\ \pmb{\phi}_{xy}(1) & \pmb{\Phi}_{xx}(1) \end{bmatrix} \begin{bmatrix} y_{t-1}\\ \pmb{x}_{t-1} \end{bmatrix} + \begin{bmatrix} \widetilde{\phi}^\star_{yy}(L) & \widetilde{\pmb{\phi}}^\star_{yx}(L)\\ \widetilde{\pmb{\phi}}^\star_{xy}(L) & \widetilde{\pmb{\Phi}}^\star_{xx}(L) \end{bmatrix} \begin{bmatrix} \Delta y_t\\ \Delta \pmb{x}_t \end{bmatrix} + \begin{bmatrix} \epsilon_{yt}\\ \pmb{\epsilon}_{xt} \end{bmatrix} \label{eq.ardl.14} \end{align} where $\pmb{a}_i = (a_{yi},\pmb{a}^\top_{xi})^\top$ for $i=0,1$, $\widetilde{\pmb{\Phi}}^\star(L) = \left(\widetilde{\pmb{\phi}}^{\star\top}_{y}(L), \widetilde{\pmb{\phi}}^{\star\top}_{x}(L)\right)^\top$, and $\pmb{\Phi}(1)$ assumes the form: \begin{align*} \pmb{\Phi}(1) = \begin{bmatrix} \phi_{yy}(1) & \pmb{\phi}_{yx}(1)\\ \pmb{\phi}_{xy}(1) & \pmb{\Phi}_{xx}(1) \end{bmatrix} \end{align*} Moreover, express the covariance matrix $\pmb{\Omega}$ as follows: \begin{align*} E\left( \begin{bmatrix} \epsilon_{yt}\\ \pmb{\epsilon}_{xt} \end{bmatrix} \begin{bmatrix} \epsilon_{yt} & \pmb{\epsilon}^\top_{xt} \end{bmatrix} \right) = \begin{bmatrix} \omega_{yy} & \pmb{\omega}_{yx}\\ \pmb{\omega}_{xy} & \pmb{\Omega}_{xx} \end{bmatrix} = \pmb{\Omega} \end{align*} It is not difficult to demonstrate that $$\epsilon_{yt} = \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\epsilon}_{xt} + u_{yt}$$ where $u_{yt} \sim N\left(0,\omega_{yy} - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\omega}_{xy}\right)$ is independent of $\pmb{\epsilon}_{xt}$. 
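For completeness, this projection is just the standard conditional distribution for jointly Gaussian vectors: $$\epsilon_{yt} \mid \pmb{\epsilon}_{xt} \;\sim\; N\left(\pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\epsilon}_{xt},\ \omega_{yy} - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\omega}_{xy}\right)$$ so that $u_{yt} = \epsilon_{yt} - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\epsilon}_{xt}$ is uncorrelated with, and hence (under Gaussianity) independent of, $\pmb{\epsilon}_{xt}$.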
Moreover, it can then be shown that \begin{align} \Delta \pmb{z}_t &=(\pmb{I}_{k+1} - \pmb{\Psi})\left(\pmb{a}_{0} + \pmb{a}_{1}t - \pmb{\Phi}(1)\pmb{z}_{t-1} + \widetilde{\pmb{\Phi}}^\star(L) \Delta\pmb{z}_{t}\right) + \pmb{\Psi}\Delta\pmb{z}_t + \pmb{u}_{t} \label{eq.ardl.16} \end{align} where $\pmb{\alpha}_i = (\pmb{I}_{k+1} - \pmb{\Psi})\pmb{a}_i$ for $i=0,1$, $\pmb{u}_t = \left(u_{yt}, \pmb{\epsilon}^\top_{xt}\right)^\top$, and $\pmb{\Psi}$ is the matrix: \begin{align*} \pmb{\Psi} = \begin{bmatrix} 0 & \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\\ \pmb{0}_k & \pmb{0}_{k \times k} \end{bmatrix} \end{align*} Making equation (\ref{eq.ardl.16}) explicit, we arrive at: \begin{align} \begin{bmatrix} \Delta y_t\\ \Delta \pmb{x}_t \end{bmatrix} &= \begin{bmatrix} \alpha_{y0} \\ \pmb{\alpha}_{x0} \end{bmatrix} + \begin{bmatrix} \alpha_{y1} \\ \pmb{\alpha}_{x1} \end{bmatrix}t - \begin{bmatrix} \phi_{yy}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\phi}_{xy}(1) & \pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1)\\ \pmb{\phi}_{xy}(1) & \pmb{\Phi}_{xx}(1) \end{bmatrix} \begin{bmatrix} y_{t-1}\\ \pmb{x}_{t-1} \end{bmatrix}\notag\\ &+ \left((\pmb{I}_{k+1} - \pmb{\Psi})\widetilde{\pmb{\Phi}}^\star(L) + \pmb{\Psi}\right)\Delta\pmb{z}_t + \begin{bmatrix} u_{yt}\\ \pmb{\epsilon}_{xt} \end{bmatrix}\label{eq.ardl.17} \end{align} It now follows that the CECM is given by the equation: \begin{align} \Delta y_t &=\alpha_{y0} + \alpha_{y1}t - \left(\phi_{yy}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\phi}_{xy}(1)\right)y_{t-1} - \left(\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1)\right)\pmb{x}_{t-1}\notag\\ &+ \pmb{e}_1^\top\left((\pmb{I}_{k+1} - \pmb{\Psi})\widetilde{\pmb{\Phi}}^\star(L) + \pmb{\Psi}\right)\Delta\pmb{z}_t + u_{yt} \label{eq.ardl.18} \end{align} The cointegrating relationship between $y_t$ and $\pmb{x}_t$, if it exists, is of the form: $$ \left(\phi_{yy}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\phi}_{xy}(1)\right)y_{t-1} + \left(\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1)\right)\pmb{x}_{t-1} $$ and the marginal ECM is summarized as: $$ \Delta \pmb{x}_t = \pmb{\alpha}_{x0} + \pmb{\alpha}_{x1}t - \pmb{\phi}_{xy}(1)y_{t-1} - \pmb{\Phi}_{xx}(1)\pmb{x}_{t-1} + \pmb{e}_2^\top\left((\pmb{I}_{k+1} - \pmb{\Psi})\widetilde{\pmb{\Phi}}^\star(L) + \pmb{\Psi}\right)\Delta\pmb{z}_t + \pmb{\epsilon}_{xt} $$ where $\pmb{e}_1 = \left(1,\pmb{0}_k^\top\right)^\top$ and $\pmb{e}_2 = \left(\pmb{0}_k, \pmb{I}_k\right)^\top$.
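
To see the mechanics of the conditioning matrix concretely, here is a small sketch (matrices ours and purely illustrative) that builds $\pmb{\Psi}$ from $\pmb{\Omega}$ and verifies that left-multiplying $\pmb{\Phi}(1)$ by $(\pmb{I}_{k+1} - \pmb{\Psi})$ alters only the first ($y$) row, leaving the marginal block untouched:

```python
import numpy as np

k = 2  # number of marginal variables x

# Illustrative error covariance Omega and long-run matrix Phi(1)
omega = np.array([[1.0, 0.5, 0.3],
                  [0.5, 2.0, 0.4],
                  [0.3, 0.4, 1.5]])
phi1 = np.array([[ 0.6, -0.2,  0.1],
                 [ 0.2,  0.5, -0.3],
                 [-0.1, -0.1,  0.4]])

# Psi is zero except for omega_yx * Omega_xx^{-1} in the first row
psi = np.zeros((k + 1, k + 1))
psi[0, 1:] = np.linalg.solve(omega[1:, 1:], omega[0, 1:])  # Omega_xx symmetric

cond_coint = (np.eye(k + 1) - psi) @ phi1
print(cond_coint[1:] - phi1[1:])   # zeros: the x-rows are unchanged
print(cond_coint[0])               # the conditional y-row of the matrix
```
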

It is also clear that the new cointegrating matrix is specified by $(\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)$. Furthermore, notice that while the system-wide shocks are independent across variables, by virtue of $\pmb{\phi}_{xy}(1)y_{t-1}$ there is a feedback channel from $y_{t-1}$ into $\Delta \pmb{x}_t$. Thus, while $u_{yt}$ drives $y_t$ directly, it also indirectly drives $\pmb{x}_t$. In this regard, conducting inference on the CECM in isolation from the marginal ECM will lead to incorrect conclusions; see Ericsson (1992) for an excellent overview. A natural resolution, therefore, requires $\pmb{\phi}_{xy}(1) = \pmb{0}_k$. This is a critical assumption, and one we impose now.


Assumption 3:

No feedback from $y_t$ into $\pmb{x}_t$: The $k$-vector $\pmb{\phi}_{xy}(1) = \pmb{0}_k$.


Under Assumption 3, if a cointegrating relationship between $y_t$ and $\pmb{x}_t$ exists, it can only enter through the CECM equation. Since $y_t$ is a scalar, there can be at most one such relationship, and the cointegrating matrix reduces to: \begin{align} (\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1) &= \begin{bmatrix} \phi_{yy}(1) & \pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1)\\ \pmb{0}_k & \pmb{\Phi}_{xx}(1) \end{bmatrix} \label{eq.ardl.19} \end{align} while the cointegrating relationship between $y_t$ and $\pmb{x}_t$, if it exists, becomes: $$\phi_{yy}(1)y_{t-1} + \left(\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1)\right)\pmb{x}_{t-1}$$

Relationship to ARDL

While the CECM in (\ref{eq.ardl.18}) derives from a VAR structure, the observant reader will recognize that it is in effect an ARDL model. In fact, as argued in Boswijk (1994), CECMs are special cases of their structural ECM counterparts; as such, an ARDL model can be thought of as a special case of a structural ECM. Thus, when one speaks of ARDL models in the context of cointegration, what is actually being referred to is the CECM. The relationship is made starker by referring back to the VAR in (\ref{eq.ardl.11}). In this regard, let the lag polynomial matrix $\pmb{\eta}(L)$ satisfy $\pmb{\eta}(L)\pmb{\Phi}(L) = \pmb{\Phi}(L)\pmb{\eta}(L) = (1-L)\pmb{I}_{k+1}$, and consider the following derivations: \begin{align*} \Delta(\pmb{z}_t - \pmb{\mu} - \pmb{\gamma}t) = \pmb{\eta}(L)\pmb{\Phi}(L)(\pmb{z}_t - \pmb{\mu} - \pmb{\gamma}t) &=\pmb{\eta}(L)\pmb{\epsilon}_t\\ &=\pmb{\eta}(1)\pmb{\epsilon}_t + \widetilde{\pmb{\eta}}(L)\Delta\pmb{\epsilon}_t \end{align*} where the second line above follows from the BN (Beveridge-Nelson) decomposition of $\pmb{\eta}(L)$. Next, assuming without loss of generality that $\pmb{z}_0 = \pmb{\epsilon}_0 = \pmb{0}_{k+1}$, we can sum both sides of the equation above to derive: $$(\pmb{z}_t - \pmb{\mu} - \pmb{\gamma}t) = \pmb{\eta}(1)\sum_{i=0}^{t}\pmb{\epsilon}_i + \widetilde{\pmb{\eta}}(L)\pmb{\epsilon}_t$$ where the partial sum $\sum_{i=0}^{t}\pmb{\epsilon}_i$, after appropriate scaling, converges weakly to a Brownian motion. On the other hand, recall that the CECM cointegrating matrix can be expressed as $(\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)$. Thus, multiplying the expression above with this cointegrating matrix, we derive: \begin{align*} (\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)(\pmb{z}_t - \pmb{\mu} - \pmb{\gamma}t) &= (\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)\pmb{\eta}(1)\sum_{i=0}^{t}\pmb{\epsilon}_i + (\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)\widetilde{\pmb{\eta}}(L)\pmb{\epsilon}_t\\ &= (\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)\widetilde{\pmb{\eta}}(L)\pmb{\epsilon}_t \end{align*} where we have used the fact that $(\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)\pmb{\eta}(1) = (\pmb{I}_{k+1} - \pmb{\Psi})(1-1)\pmb{I}_{k+1} = \pmb{0}$. Assumptions 1 through 3 now guarantee that, if a cointegrating relationship exists, it must be of the form $(\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)(\pmb{z}_t - \pmb{\mu} - \pmb{\gamma}t)$. In fact, a slightly more expressive relation emerges by rewriting the CECM as: \begin{align*} \Delta y_t &= -\phi_{yy}(1)\left(y_{t-1} - \frac{\alpha_{y0}}{\phi_{yy}(1)} - \frac{\alpha_{y1}}{\phi_{yy}(1)}t + \left(\frac{\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1)}{\phi_{yy}(1)}\right)\pmb{x}_{t-1}\right)\\ &+ \pmb{e}_1^\top\left((\pmb{I}_{k+1} - \pmb{\Psi})\widetilde{\pmb{\Phi}}^\star(L) + \pmb{\Psi}\right)\Delta\pmb{z}_t + u_{yt} \end{align*} Since the term in parentheses is stationary whenever a cointegrating relationship exists, it readily follows that the equilibrating (cointegrating) relationship between $y_t$ and $\pmb{x}_t$ satisfies: \begin{align} y_{t} = \frac{\alpha_{y0}}{\phi_{yy}(1)} + \frac{\alpha_{y1}}{\phi_{yy}(1)}t - \left(\frac{\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1)}{\phi_{yy}(1)}\right)\pmb{x}_{t} + v_t\label{eq.ardl.20} \end{align} Indeed, the first element of $(\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)(\pmb{z}_t - \pmb{\mu} - \pmb{\gamma}t)$ is precisely $\phi_{yy}(1)$ times the equilibrium deviation in (\ref{eq.ardl.20}), whereas the corresponding element of $(\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)\widetilde{\pmb{\eta}}(L)\pmb{\epsilon}_t$ identifies $\phi_{yy}(1)v_t$.
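
The BN decomposition invoked above deserves a quick illustration. For a scalar lag polynomial $\eta(L) = \sum_j \eta_j L^j$, the split $\eta(L) = \eta(1) + (1-L)\widetilde{\eta}(L)$ has remainder coefficients $\widetilde{\eta}_j = -\sum_{i>j}\eta_i$; a minimal sketch:

```python
import numpy as np

def bn_decompose(coeffs):
    """Beveridge-Nelson split of eta(L) = sum_j coeffs[j] L^j into
    eta(1) + (1 - L) * eta_tilde(L), where eta_tilde_j = -sum_{i>j} coeffs[i]."""
    c = np.asarray(coeffs, dtype=float)
    eta1 = c.sum()
    eta_tilde = np.array([-c[j + 1:].sum() for j in range(len(c) - 1)])
    return eta1, eta_tilde

eta1, eta_tilde = bn_decompose([1.0, -0.5, 0.2])
print(eta1, eta_tilde)   # 0.7 [ 0.3 -0.2]
# Check: 0.7 + (1 - L)(0.3 - 0.2 L) = 1 - 0.5 L + 0.2 L^2
```
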
Moreover, observe that equation (\ref{eq.ardl.20}) is precisely the long-run equation one derives from the ARDL models in Pesaran and Shin (1998). More importantly, the equation is easily estimated by running OLS on the CECM (\ref{eq.ardl.18}) and deriving the long-run coefficients post-estimation. We've outlined the procedure in Part 1 of this series.
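
As a concrete (if stylized) illustration of that procedure, the sketch below simulates a toy cointegrated pair of our own invention, estimates the CECM by OLS with statsmodels, and recovers the long-run coefficient as the negative ratio of the level coefficients. In the notation above, the coefficient on $y_{t-1}$ estimates $-\phi_{yy}(1)$:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 500

# Toy DGP (ours, purely illustrative): x is a random walk and y error-corrects
# toward the long-run path 2*x with speed of adjustment 0.4
x = np.cumsum(rng.normal(size=T))
y = np.zeros(T)
for t in range(1, T):
    y[t] = y[t-1] - 0.4 * (y[t-1] - 2.0 * x[t-1]) + rng.normal()

# CECM regression: dy_t on const, y_{t-1}, x_{t-1}, dx_t
dy = np.diff(y)
Z = sm.add_constant(np.column_stack([y[:-1], x[:-1], np.diff(x)]))
fit = sm.OLS(dy, Z).fit()

lam, delta = fit.params[1], fit.params[2]
print("speed of adjustment estimate:", lam)         # close to -0.4
print("long-run coefficient estimate:", -delta / lam)  # close to 2.0
```
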


Inference

We also pause here to impose a fourth assumption, which governs the cointegrating properties of the marginal variables $\pmb{x}_t$, irrespective of a potential cointegrating relationship with $y_t$ in the CECM. In particular:


Assumption 4:

Marginal variables may be mutually cointegrated: The matrix $\pmb{\Phi}_{xx}(1)$ has rank $0\leq r_{x} \leq k$.

The importance of Assumption 4 lies in the flexibility of allowing $\pmb{x}_t$ to be I$(0)$ when $r_x = k$, I$(1)$ when $r_x = 0$, or mutually cointegrated whenever $0 < r_x < k$. Again, recall that the assumption is made without regard as to whether $y_t$ and $\pmb{x}_t$ are themselves cointegrated. Accordingly, we must allow for the possibility of the system cointegrating matrix $(\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)$ to have rank $r_x$ at the very minimum. To ensure this, we note the following result from Abadir and Magnus (2005): \begin{align*} \rank\left((\pmb{I}_{k+1} - \pmb{\Psi})\pmb{\Phi}(1)\right) &= \rank\left( \begin{bmatrix} \phi_{yy}(1) & \pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1)\\ \pmb{0}_k & \pmb{\Phi}_{xx}(1) \end{bmatrix} \right)\\ &= \begin{cases} r_x \quad &\text{if} \quad \phi_{yy}(1) = 0 \quad \text{and} \quad \pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) = \pmb{0}_k^\top\\ 1 + r_x \quad &\text{otherwise} \end{cases} \end{align*} In other words:

While $\pmb{x}_t$ may or may not be cointegrated among themselves, there is no cointegrating relationship between $y_t$ and $\pmb{x}_t$ if and only if $\phi_{yy}(1) = 0$ and $\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) = \pmb{0}_k^\top$.

However, if this is indeed the case, the CECM reduces to: \begin{align*} \Delta y_t &= \alpha_{y0} + \alpha_{y1}t + \pmb{e}_1^\top\left((\pmb{I}_{k+1} - \pmb{\Psi})\widetilde{\pmb{\Phi}}^\star(L) + \pmb{\Psi}\right)\Delta\pmb{z}_t + u_{yt} \end{align*} Since the RHS above is a function of stationary processes, $\Delta y_t$ is stationary and $y_t$ itself must be I$(1)$ -- in other words, although the setup allows for cointegration between $y_t$ and $\pmb{x}_t$, no cointegrating relationship exists, regardless of the cointegrating rank $r_x$ among the $\pmb{x}_t$.

Thus, the null hypothesis that no cointegrating relationship between $y_t$ and $\pmb{x}_t$ exists, is: $$ H_{0,F}: \quad \phi_{yy}(1) = 0 \quad \text{and} \quad \pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) = \pmb{0}_k^\top $$
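
Before turning to the test itself, the rank dichotomy above is easy to check numerically. A minimal sketch, with purely illustrative matrices:

```python
import numpy as np

def coint_rank(phi_yy1, phi_yx_tilde, phi_xx1):
    """Rank of the conditional cointegrating matrix in (eq. 19):
    phi_yy1       -- the scalar phi_yy(1)
    phi_yx_tilde  -- the k-vector phi_yx(1) - omega_yx Omega_xx^{-1} Phi_xx(1)
    phi_xx1       -- the (k x k) matrix Phi_xx(1)"""
    k = phi_xx1.shape[0]
    top = np.concatenate(([phi_yy1], phi_yx_tilde))
    mat = np.vstack([top, np.hstack([np.zeros((k, 1)), phi_xx1])])
    return np.linalg.matrix_rank(mat)

phi_xx1 = np.array([[ 0.5, -0.25],
                    [-0.2,  0.1]])                      # rank 1, so r_x = 1
print(coint_rank(0.0, np.zeros(2), phi_xx1))            # r_x: the null holds
print(coint_rank(-0.4, np.array([0.8, 0.1]), phi_xx1))  # 1 + r_x: the null fails
```
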

Analysis of the Null Hypothesis

The test for $H_{0,F}$ proceeds by estimating the CECM coefficients using OLS and computing the usual $F$-statistic, $\tau_F$, associated with $H_{0,F}$ for the five cases governed by the deterministic assumptions in (\ref{eq.ardl.13}). Again, we've discussed the specifics in Part 1 of this series. Next, $\tau_F$ is compared to two sets of critical values: the lower bound $\xi_{L,F}$, associated with the case $\pmb{x}_t \sim \text{I}(0)$, or $r_x = k$, and the upper bound $\xi_{U,F}$, associated with the case $\pmb{x}_t \sim \text{I}(1)$, or $r_x = 0$, where $\xi_{L,F} < \xi_{U,F}$; hence the name, bounds test. Moreover, from Pesaran, Shin, and Smith (2001), critical values for $H_{0,F}$ derive from non-standard limiting distributions. Accordingly, it bears remembering that the test rejects $H_{0,F}$ whenever $\tau_F$ is greater than the appropriate critical value. In this regard we have three outcomes (a sketch of the test mechanics follows the list):
  • $\tau_F < \xi_{L,F} < \xi_{U,F}$: Here we fail to reject $H_{0,F}$ when $\pmb{x}_t$ is either I$(0)$ or I$(1)$. We are therefore assured that no cointegrating relationship between $y_t$ and $\pmb{x}_t$ exists.

  • $\xi_{L,F} < \tau_F < \xi_{U,F}$: Here, $\xi_{L,F} < \tau_F$. Accordingly, we reject $H_{0,F}$ when $\pmb{x}_t \sim \text{I}(0)$. Nevertheless, since $\tau_F < \xi_{U,F}$, we fail to reject $H_{0,F}$ when $\pmb{x}_t \sim \text{I}(1)$. This indicates that cointegrating relationships between $y_t$ and $\pmb{x}_t$ may or may not exist for cases where $0 < r_x < k$. Accordingly, we cannot make any specific conclusions unless we know the rank of the system-wide cointegrating matrix (\ref{eq.ardl.19}).

  • $\xi_{L,F} < \xi_{U,F} < \tau_F$: Here we reject $H_{0,F}$ whether $\pmb{x}_t$ is I$(0)$ or I$(1)$. Consider the bounding case $r_x = 0$, so that $\pmb{\Phi}_{xx}(1) = \pmb{0}_{k \times k}$. Since the maximal rank of the cointegrating matrix (\ref{eq.ardl.19}) is $r_z = 1 + r_x$, the Abadir and Magnus (2005) result above implies that the remaining unit rank can arise from one of three possibilities:
    • $\phi_{yy}(1) = 0$ and $\pmb{\phi}_{yx}(1) \neq \pmb{0}_k^\top$, in which case the equilibrating relationship between $y_t$ and $\pmb{x}_t$ is nonsensical; indeed, looking at (\ref{eq.ardl.20}), it is undefined.

    • $\phi_{yy}(1) \neq 0$ and $\pmb{\phi}_{yx}(1) = \pmb{0}_k^\top$, in which case the equilibrating relationship is defined but degenerate.

    • $\phi_{yy}(1) \neq 0$ and $\pmb{\phi}_{yx}(1) \neq \pmb{0}_k^\top$, in which case the equilibrating relationship is well defined.

    This suggests an additional test for $\phi_{yy}(1) = 0$ to exclude the first possibility above. We discuss this in greater detail in the analysis of the alternative hypotheses below.
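
As promised, here is a sketch of the test mechanics, reusing the toy DGP from the earlier sketch. The bounds quoted are roughly the 5% values for $k=1$, case III; consult the tables in Pesaran, Shin, and Smith (2001) before relying on them:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 500

# Toy DGP as before: x is a random walk, y error-corrects toward 2*x
x = np.cumsum(rng.normal(size=T))
y = np.zeros(T)
for t in range(1, T):
    y[t] = y[t-1] - 0.4 * (y[t-1] - 2.0 * x[t-1]) + rng.normal()

dy = np.diff(y)
Z = sm.add_constant(np.column_stack([y[:-1], x[:-1], np.diff(x)]))
fit = sm.OLS(dy, Z).fit()

# tau_F: joint F-statistic on the level terms y_{t-1} (x1) and x_{t-1} (x2)
tau_F = float(np.squeeze(fit.f_test("x1 = 0, x2 = 0").fvalue))
print("tau_F =", tau_F)

xi_LF, xi_UF = 4.94, 5.73   # rough 5% bounds, k = 1, case III
if tau_F > xi_UF:
    print("reject H0F whether x is I(0) or I(1)")
elif tau_F < xi_LF:
    print("fail to reject H0F in either case")
else:
    print("inconclusive: the conclusion depends on r_x")
```
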


Analysis of the Alternative Hypotheses

Given the discussion above, if an equilibrating relationship between $y_t$ and $\pmb{x}_t$ exists, it must reside in $H_{A,F}$, where: $$ H_{A,F}: \quad \phi_{yy}(1) \neq 0 \quad \text{or} \quad \pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) \neq \pmb{0}_k^\top \quad \text{or both.} $$ In fact, $H_{A,F}$ consists of three alternative specifications, as we show below, and only one results in a non-degenerate relationship between $y_t$ and $\pmb{x}_t$. A non-degenerate relationship must guarantee the existence and validity of the equilibrating equation in (\ref{eq.ardl.20}). In other words, it must ensure $\phi_{yy}(1) \neq 0$, otherwise the relationship is undefined, and $\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) \neq \pmb{0}_k^\top$, otherwise the relationship between $y_t$ and $\pmb{x}_t$ in the CECM runs only through $\Delta\pmb{x}_t$, and is hence degenerate. We analyze the implications below.
  • $H_{A_1,F}: \quad \phi_{yy}(1) = 0$ and $\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) \neq \pmb{0}_k^\top$.

    Here, the result from Abadir and Magnus (2005) assures us that the cointegrating matrix (\ref{eq.ardl.19}) has rank $r_z = 1 + r_x$, and the CECM reduces to: $$ \Delta y_t = \alpha_{y0} + \alpha_{y1}t - \left(\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1)\right)\pmb{x}_{t-1} + \pmb{e}_1^\top\left((\pmb{I}_{k+1} - \pmb{\Psi})\widetilde{\pmb{\Phi}}^\star(L) + \pmb{\Psi}\right)\Delta\pmb{z}_t + u_{yt} $$ The cointegrating relationship (\ref{eq.ardl.20}) is here undefined since $\phi_{yy}(1) = 0$. Moreover, since $\pmb{\Phi}_{xx}(1)$ is the only cointegrating matrix for $\pmb{x}_t$, it holds that $\pmb{\Phi}_{xx}(1)\pmb{x}_{t-1} \sim \text{I}(0)$, and therefore all RHS variables are I$(0)$ except possibly $\pmb{\phi}_{yx}(1)\pmb{x}_{t-1}$. However, nothing requires $\pmb{\phi}_{yx}(1)$ to be a cointegrating vector for $\pmb{x}_t$, and therefore $\pmb{\phi}_{yx}(1)\pmb{x}_{t-1}$ may be I$(0)$ or I$(1)$. Either way, $y_t \sim \text{I}(1)$ regardless of the cointegrating rank $r_x$.

  • $H_{A_2,F}: \quad \phi_{yy}(1) \neq 0$ and $\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) = \pmb{0}_k^\top$

    In this case, the CECM assumes the form: $$ \Delta y_t = \alpha_{y0} + \alpha_{y1}t - \phi_{yy}(1)y_{t-1} + \left(\widetilde{\phi}^\star_{yy}(L) -\pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\widetilde{\pmb{\phi}}^\star_{xy}(L)\right)\Delta y_t + \left(\widetilde{\pmb{\phi}}^\star_{yx}(L) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\widetilde{\pmb{\Phi}}^\star_{xx}(L) + \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\right)\Delta\pmb{x}_t + u_{yt} $$ In fact, the equation is a special case of the Augmented Dickey-Fuller (ADF) regression. By Assumption 1, when $\pmb{\Phi}(1) = 0$, and therefore $\phi_{yy}(1) = 0$, the vector $\pmb{z}_t$, and therefore $y_t$, has a unit root. Under the alternative, however, $\phi_{yy}(1) \neq 0$, and either $y_t \sim \text{I}(0)$ whenever $\alpha_{y1} = 0$, or $y_t$ is trend stationary should $\alpha_{y1} \neq 0$. Again, this holds regardless of the cointegrating rank $r_x$. Moreover, the result from Abadir and Magnus (2005) ensures that the cointegrating matrix (\ref{eq.ardl.19}) has rank $r_z = 1 + r_x$. It is important to note that while a cointegrating (level) relationship between $y_t$ and $\pmb{x}_t$ is not possible here, a relationship between $y_t$ and $\pmb{x}_t$ still originates from the short-run dynamics manifesting through $\Delta \pmb{x}_t$. Since this is not an equilibrating relationship originating from $\pmb{x}_{t-1}$, the relationship is degenerate in equilibrium.

  • $H_{A_3,F}: \quad \phi_{yy}(1) \neq 0$ and $\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) \neq \pmb{0}_k^\top$

    Here, the result from Abadir and Magnus (2005) guarantees that $r_z = 1 + r_x$. Moreover, the rank factorization theorem (again, see Abadir and Magnus (2005)) assures us there exist $\left((k+1)\times r_z\right)$-matrices $\pmb{A}$ and $\pmb{B}$ such that one can write $\pmb{\Phi}(1)$ in rank factorization as follows: \begin{align*} \begin{bmatrix} \phi_{yy}(1) & \pmb{\phi}_{yx}(1)\\ \pmb{0}_k & \pmb{\Phi}_{xx}(1) \end{bmatrix} &= \begin{bmatrix} A_{yy}\\ \pmb{0}_k \end{bmatrix} \begin{bmatrix} B_{yy} & \pmb{B}^\top_{yx} \end{bmatrix} + \begin{bmatrix} \pmb{A}_{yx}\\ \pmb{A}_{xx} \end{bmatrix} \begin{bmatrix} \pmb{0}_{r_x} & \pmb{B}^\top_{xx} \end{bmatrix}\\ &= \begin{bmatrix} A_{yy}B_{yy} & A_{yy}\pmb{B}^\top_{yx} + \pmb{A}_{yx}\pmb{B}^\top_{xx}\\ \pmb{0}_k & \pmb{A}_{xx}\pmb{B}^\top_{xx} \end{bmatrix} \end{align*} Thus, $\pmb{\phi}_{yx}(1) = A_{yy}\pmb{B}^\top_{yx} + \pmb{A}_{yx}\pmb{B}^\top_{xx}$, where $\pmb{B}_{xx}^\top$ is the cointegrating matrix of $\pmb{x}_t$ underlying $\pmb{\Phi}_{xx}(1) = \pmb{A}_{xx}\pmb{B}^\top_{xx}$, irrespective of $y_t$. Accordingly, any equilibrating link between $y_t$ and $\pmb{x}_t$ is due to the cointegrating vector $\pmb{B}^\top_{yx}$. We therefore have two possibilities, illustrated numerically after this list.

    • $\rank(\pmb{B}^\top_{yx},\pmb{B}^\top_{xx}) = r_x$. In this case, the cointegrating vector $\pmb{B}^\top_{yx}$ is subsumed by $\pmb{B}^\top_{xx}$ since $\rank(\pmb{\Phi}_{xx}(1)) = \rank(\pmb{B}_{xx}) = r_x$. Thus, the equilibrating relationship between $y_t$ and $\pmb{x}_t$ is not due to traditional cointegration, but is valid nonetheless. Here, $y_t \sim \text{I}(0)$ since $\phi_{yy}(1) \neq 0$.

    • $\rank(\pmb{B}^\top_{yx},\pmb{B}^\top_{xx}) = 1 + r_x$. In this case, the cointegrating vector $\pmb{B}^\top_{yx}$ is not redundant, and drives the cointegrating link between $y_t$ and $\pmb{x}_t$. The equilibrating relationship is now of the traditional cointegration type, and therefore $y_t \sim \text{I}(1)$.

    In either case, it is readily shown that the relationships which emerge are non-degenerate.
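
Whether $\pmb{B}_{yx}$ contributes a genuinely new cointegrating direction is simply a rank question, as the following small numeric illustration (vectors ours and purely illustrative) shows:

```python
import numpy as np

# B_xx: cointegrating vectors among the x's alone (k = 2, r_x = 1)
B_xx = np.array([[ 1.0],
                 [-2.0]])
B_yx_subsumed = 0.5 * B_xx        # lies in the span of B_xx: rank stays r_x
B_yx_new = np.array([[1.0],
                     [1.0]])      # outside the span: rank becomes 1 + r_x

for B_yx in (B_yx_subsumed, B_yx_new):
    print(np.linalg.matrix_rank(np.hstack([B_yx, B_xx])))   # 1, then 2
```
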
We can summarize the insight above as follows: $$ \begin{array}{l|c|l|c} & \text{Specification} & \text{Conclusion} & \text{Integration Order} \\ \hline H_{0,F} & \phi_{yy}(1) = 0 \text{ and } \pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) = \pmb{0}_k^\top & \text{No equilibrating relationship.} & y_t \sim I(1)\\ &&&\\ H_{A_1,F} & \phi_{yy}(1) = 0 \text{ and } \pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) \neq \pmb{0}_k^\top & \text{Equilibrating relationship} & y_t \sim I(1)\\ & & \text{is nonsensical.} &\\ &&&\\ H_{A_2,F} & \phi_{yy}(1) \neq 0 \text{ and } \pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) = \pmb{0}_k^\top & \text{Equilibrating relationship} & y_t \sim I(0) \text{ or TS}\\ & & \text{is degenerate.} &\\ &&&\\ H_{A_3,F} & \phi_{yy}(1) \neq 0 \text{ and } \pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) \neq \pmb{0}_k^\top & \text{Equilibrating relationship} & y_t \sim I(0) \text{ or } I(1)\\ & & \text{is non-degenerate.} & \end{array} $$ An important observation emerges. If we reject the null hypothesis, it is unclear which of the three alternative hypotheses manifests. Accordingly, rejecting $H_{0,F}$ does not guarantee that a non-degenerate relationship exists, or even a degenerate one! To identify the alternative (at least partially), one requires an additional test for $H_{0,t}: \phi_{yy}(1) = 0$, although in contrast to the test for $H_{0,F}$, testing $H_{0,t}$ is only sensible for cases I, III, and V of the deterministic restrictions in (\ref{eq.ardl.13}). While the usual $t$-statistic, $\tau_t$, will suffice, like $\tau_F$ its distribution is non-standard. In this regard, analogous to the limiting distributions of $\tau_F$, Pesaran, Shin, and Smith (2001) also provide sets of critical values $\xi_{L,t}$ and $\xi_{U,t}$ for $\tau_t$, with $|\xi_{L,t}| < |\xi_{U,t}|$, derived respectively for $\pmb{x}_t \sim \text{I}(0)$ and $\pmb{x}_t \sim \text{I}(1)$. Since the test has a two-sided alternative, a rejection of $H_{0,t}$ requires $\tau_t$ to be greater than the appropriate critical value or less than its negative; equivalently, one rejects the null hypothesis whenever the absolute value of $\tau_t$ exceeds the absolute value of the appropriate critical value. There are therefore three possibilities to consider (a sketch of the decision logic follows the list):
  • $|\tau_t| < |\xi_{L,t}| < |\xi_{U,t}|$: As before, $\pmb{x}_t$ is either I$(0)$ or I$(1)$. Moreover, since $|\tau_t| < |\xi_{L,t}|$, we fail to reject $H_{0,t}$. Since we have already rejected $H_{0,F}$, this implies $\phi_{yy}(1) = 0$ and $\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) \neq \pmb{0}_k^\top$. We therefore conclude that $H_{A,F}$ manifests as $H_{A_1,F}$ and a nonsensical equilibrating relationship between $y_t$ and $\pmb{x}_t$ emerges.

  • $|\xi_{L,t}| < |\tau_t| < |\xi_{U,t}|$: Here we reject $H_{0,t}$ when $\pmb{x}_t \sim \text{I}(0)$ but fail to do so when $\pmb{x}_t \sim \text{I}(1)$. Thus, when the $\pmb{x}_t$ are mutually cointegrated with $0 < r_x < k$, we may or may not be able to reject $H_{0,t}$. Unless we know the rank of the cointegrating matrix (\ref{eq.ardl.19}), little more can be inferred.

  • $|\xi_{L,t}| < |\xi_{U,t}| < |\tau_t|$: In this case, we reject $H_{0,t}$ when $\pmb{x}_t$ is either I$(0)$ or I$(1)$, implying $\phi_{yy}(1) \neq 0$. Accordingly, unless we know that $\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1) = \pmb{0}_{k}^\top$, we must conclude that $H_{A,F}$ manifests either as $H_{A_2,F}$, or $H_{A_3,F}$. In either case, an equilibrating relationship emerges, albeit degenerate in case of $H_{A_2,F}$.
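
The decision logic mirrors the $F$ case but works with magnitudes of negative statistics. A minimal sketch, with bounds roughly equal to the 5% values for $k=1$, case III (again, consult the tables in Pesaran, Shin, and Smith (2001) before relying on them):

```python
def t_bounds_decision(tau_t: float, xi_L: float, xi_U: float) -> str:
    """Classify a t bounds test outcome. xi_L and xi_U are the (negative)
    I(0) and I(1) critical values, with |xi_L| < |xi_U|."""
    if abs(tau_t) < abs(xi_L):
        return "fail to reject H0t: phi_yy(1) = 0, so H_A1 (nonsensical)"
    if abs(tau_t) > abs(xi_U):
        return "reject H0t: phi_yy(1) != 0, so H_A2 or H_A3"
    return "inconclusive: the conclusion depends on r_x"

# Illustrative call with rough 5% bounds for k = 1, case III
print(t_bounds_decision(tau_t=-3.9, xi_L=-2.86, xi_U=-3.22))
```
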

The process is visualized below:

[Figure: decision flowchart for the bounds-testing sequence]


Adjustment to Equilibrium Regression

We close with a discussion on estimating the adjustment to equilibrium. Recall that in the VECM (\ref{eq.ardl.12}), $\pmb{\Phi}(1)$ not only governs the cointegrating properties among $\pmb{z}_t$, but also satisfies $\pmb{\Phi}(1)\pmb{z}_{t-1} = \pmb{A}\pmb{B}^\top\pmb{z}_{t-1}$, where $\pmb{A}$ is a measure of adjustment to equilibrium. To estimate this adjustment, one first estimates the CECM (ARDL) (\ref{eq.ardl.18}) using OLS, and then computes an estimate of the long-run equation (\ref{eq.ardl.20}) post-estimation. Let $EC_t$ denote the deviation of $y_t$ from the estimated long-run path, the variable typically known as the error-correction (EC) term. In other words: $$EC_t = y_t - \frac{\alpha_{y0}}{\phi_{yy}(1)} - \left(\frac{\alpha_{y1}}{\phi_{yy}(1)}\right)t + \left(\frac{\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1)}{\phi_{yy}(1)}\right)\pmb{x}_{t}$$ Next, one substitutes $EC_{t-1}$ back into the CECM in place of the level terms to derive: \begin{align} \Delta y_t = -\phi_{yy}(1)EC_{t-1} +\pmb{e}_1^\top\left((\pmb{I}_{k+1} - \pmb{\Psi})\widetilde{\pmb{\Phi}}^\star(L) + \pmb{\Psi}\right)\Delta\pmb{z}_t + u_{yt} \label{eq.ardl.22} \end{align} Finally, one estimates the equation above using OLS again to derive an estimate of $\phi_{yy}(1)$, the parameter governing the speed of adjustment to equilibrium, analogous to the matrix $\pmb{A}$ in the original VECM. However, since one is only reparameterizing the CECM, whatever estimate is obtained for $\phi_{yy}(1)$ in the equation above is in fact identical to the one obtained from the ARDL estimation that produced $EC_t$ in the first place. Thus, if one is only interested in obtaining an estimate of the speed of adjustment to equilibrium, the regression above is redundant. Nevertheless, if one wishes to conduct inference on the parameter, such as a significance test, it is important to realize that the usual $t$-statistic distribution and $p$-values do not apply. To see this, observe that: $$ \Delta EC_t = \Delta y_t - \frac{\alpha_{y1}}{\phi_{yy}(1)} + \left(\frac{\pmb{\phi}_{yx}(1) - \pmb{\omega}_{yx}\pmb{\Omega}^{-1}_{xx}\pmb{\Phi}_{xx}(1)}{\phi_{yy}(1)}\right)\Delta\pmb{x}_{t} $$ Next, substitute $EC_t$ and $\Delta EC_t$ into (\ref{eq.ardl.22}); it can then be shown that: $$ \Delta EC_t = c_0 -c_1EC_{t-1} + c_2(L)\Delta EC_t + \pmb{c}_3(L)\Delta\pmb{x}_t + u_{yt} $$ where $c_0 = \frac{\alpha_{y0}}{\phi_{yy}(1)}$, $c_2(L)$ and $\pmb{c}_3(L)$ are lag polynomials in the coefficients of the system, and, evidently, $c_1 = \phi_{yy}(1)$. The equation is clearly a variant of the famous ADF regression, for which the OLS estimate of $c_1$ is in fact an estimate of $\phi_{yy}(1)$. Nevertheless, while one easily derives the $t$-statistic for the estimate of $c_1$, since the regression is of the ADF variety, the statistic has a non-standard limiting distribution. Accordingly, testing the null hypothesis $H_0: \phi_{yy}(1) = 0$ requires critical values and $p$-values derived from the appropriate Brownian motion functionals.
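
Continuing the earlier toy example, the following sketch demonstrates the redundancy claim numerically: regressing $\Delta y_t$ on the constructed $EC_{t-1}$ reproduces exactly the coefficient on $y_{t-1}$ from the CECM itself.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T = 500
x = np.cumsum(rng.normal(size=T))
y = np.zeros(T)
for t in range(1, T):
    y[t] = y[t-1] - 0.4 * (y[t-1] - 2.0 * x[t-1]) + rng.normal()

dy, dx = np.diff(y), np.diff(x)

# Stage 1: CECM with level terms y_{t-1}, x_{t-1} and the short-run term dx
fit1 = sm.OLS(dy, sm.add_constant(np.column_stack([y[:-1], x[:-1], dx]))).fit()
c, lam, delta = fit1.params[:3]

# EC term: deviation of y from the estimated long-run path
ec = y + c / lam + (delta / lam) * x

# Stage 2: regress dy on EC_{t-1} and dx; the EC coefficient reproduces lam
fit2 = sm.OLS(dy, sm.add_constant(np.column_stack([ec[:-1], dx]))).fit()
print(lam, fit2.params[1])   # identical up to numerical precision
```

Keep in mind, per the discussion above, that the printed coefficient's $t$-statistic follows a non-standard (ADF-type) distribution, so its conventional $p$-value should not be used.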

Please stay tuned for our final blog entry in this series which will focus on implementing ARDL and the Bounds Test in EViews.

References:

Abadir, K.M. and Magnus, J.R. (2005). Matrix Algebra. Cambridge University Press.
Boswijk, H.P. (1994). Testing for an unstable root in conditional and structural error correction models. Journal of Econometrics, 63(1):37--60.
Casella, G. and Berger, R.L. (2002). Statistical Inference. Duxbury, Pacific Grove, CA.
Engle, R.F., Hendry, D.F., and Richard, J.-F. (1983). Exogeneity. Econometrica, 51(2):277--304.
Ericsson, N.R. (1992). Cointegration, exogeneity, and policy analysis: An overview. Journal of Policy Modeling, 13(3):251--280.
Pesaran, M.H. and Shin, Y. (1998). An autoregressive distributed-lag modelling approach to cointegration analysis. Econometric Society Monographs, 31:371--413.
Pesaran, M.H., Shin, Y., and Smith, R.J. (2001). Bounds testing approaches to the analysis of level relationships. Journal of Applied Econometrics, 16(3):289--326.
Sims, C.A. (1980). Macroeconomics and reality. Econometrica, 48(1):1--48.