Most readers of this blog will be familiar with the ordinary least squares estimator and regression models. Let us talk about one source that can cause these estimates and models to be biased and inconsistent. This is especially important when we think about the causal relationship of interest, or the relationship being studied. Proving causality can be difficult, but if we assess our econometric analysis and consider the different threats to the validity of our statistical inference, we can be more assured about our analysis, results, and inferences.
We shall be discussing omitted variable bias. This is the bias that occurs when two conditions hold together:
the regressor X is correlated with an omitted variable Z, and
the omitted variable Z is a determinant of the dependent variable Y.
When both of these conditions hold, the Gauss-Markov assumption of ordinary least squares regression which states that the error term is uncorrelated with the regressors is violated; here u denotes the error term and X denotes the regressors. Simply put, this bias occurs when an econometric model leaves out one or more relevant variables, and it arises because the model attributes the effect of the missing variable(s) to the regressors that are included.
A simple example of this bias uses the dependent variable Salary, the regressor Education, and the omitted variable Ability. Here, Salary is the annual salary of the individuals in our sample. Education can be years of education, scores on tests, or any other measure of education. Ability is some variable which signifies talent, skill, or proficiency in general; we can also think of Ability as being unmeasurable.
Let us think about the bias induced when we omit the Ability variable as a regressor, either by mistake or because we cannot measure it, while the true data generating process includes Ability and is
Salary = beta_0 + beta_1 Education + beta_2 Ability + u
In this case, the variable Ability has some impact on both Salary and Education, so Ability is correlated with Education. Its effect on Salary could be captured directly by including it in the regression, but we have no way to quantify Ability. If we do not include it, its effect on Salary is instead picked up indirectly through the variable Education.
If we do not use the true data generating process and instead estimate
Salary = beta_0 + beta_1 Education + u
then we run into the problem of omitted variable bias. This causes our ordinary least squares estimate of the coefficient on Education (denoted by beta_1) to be biased and inconsistent. The bias cannot be cured by increasing the sample size, because omitted variable bias prevents the ordinary least squares estimate from converging in probability to the true parameter value. The strength and direction of the bias are determined by the correlation between the error term and the regressor.
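To make the mechanism concrete, here is a minimal simulation sketch in Python. The variable names and coefficient values are illustrative assumptions, not estimates from any real data set; the point is only that omitting Ability pushes part of its effect into the Education coefficient.

import numpy as np
import statsmodels.api as sm

# Hypothetical data generating process: Ability affects both Education and Salary
rng = np.random.default_rng(0)
n = 5000
ability = rng.normal(size=n)                                      # omitted variable
education = 12 + 2 * ability + rng.normal(size=n)                 # regressor correlated with Ability
salary = 20 + 3 * education + 5 * ability + rng.normal(size=n)    # true model

# Short regression: Salary on Education only (Ability omitted)
short = sm.OLS(salary, sm.add_constant(education)).fit()
# Long regression: Ability included, so the Education coefficient is consistent
both = np.column_stack([education, ability])
long_reg = sm.OLS(salary, sm.add_constant(both)).fit()

print(short.params[1])     # noticeably above the true value 3: upward bias
print(long_reg.params[1])  # close to the true value 3

Increasing n does not remove the gap, which is the sense in which the estimator is inconsistent rather than merely noisy.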
Now that we know exactly what the issue of Omitted Variable Bias is, let us consider some solutions.
One answer to this issue is to include more variables in the regression model. The regression then uses as independent variables not only the ones whose effects on the dependent variable are of interest, but also any variables whose omission might cause omitted variables bias. Including these additional variables can reduce the risk of omitted variables bias, but at the same time it may increase the variance of the estimator.
Some general guidelines to follow in this case that help us in our decision to include additional variables are:
Specify the coefficient of interest.
Based on your knowledge of the variables and model, identify possible sources of omitted variables bias. This should give you a baseline specification and a set of additional regressor variables, sometimes called control variables.
Use different model specifications and test against your baseline.
Use tables to provide full disclosure of your results – by presenting different model specifications, you can support your argument and enable readers to see the impact of including other regressors.
If diminishing the bias by including additional variables is not possible, such as in the cases where there are no adequate control variables, then there are still a variety of approaches which can help us solve this problem.
Making use of panel data methods.
Making use of instrumental variables regression methods such as Two Stage Least Squares.
Making use of a randomized control experiment.
These approaches are important to consider because they help us to avoid false inferences of causality due to the presence of another underlying variable, the omitted variable, that influences both the independent and dependent variables.
I often see students and faculty writing their papers and theses on topics like ‘Impact of ABC on economic growth’, with the trivial conclusion that ‘ABC has an impact on growth, therefore we should focus on improving ABC’. In fact, there are a number of problems with this kind of research which make it a very odd topic to study.
First, the choice of topic indicates that the researcher considers economic growth the ultimate objective, with every other variable subservient to growth. To me, it is not sensible to treat growth as the ultimate objective of economic policy. It makes more sense to take the well-being of humanity as the ultimate objective instead of growth.
For example, you may find many papers with titles similar to ‘Human Capital and Economic Growth’, concluding that human capital improves economic growth and that we should therefore focus on improving human capital. But what is human capital? It consists of measures of the health and education of people. It is actually a measure of the well-being of humanity and is therefore closer to the ultimate objective. That is, if someone writes ‘there should be growth because it improves public health’, it makes sense. But if someone writes ‘we should focus on health because it will improve growth’, it looks very odd.
Sometimes, growth does not have a very strong relationship with the happiness and well-being of the public. In 2008, the President of France, Nicolas Sarkozy, formed ‘The Commission on the Measurement of Economic Performance and Social Progress’ to revise the measurement of human well-being. The commission included two Nobel laureates, Amartya Sen and Joseph Stiglitz, and another well-known economist, Jean-Paul Fitoussi. The report highlights several flaws of conventional GDP measures when used as measures of well-being. They cite the example of a couple living happily in their home: they grow most of their food in a kitchen garden, cook meals for the family at home, and enjoy reading the newspaper together. None of these activities are marketed, so they do not count toward GDP. In contrast, consider a person who lives in a hostel, eats unhealthy fast food, visits prostitutes, goes to a bar for entertainment and, while coming back from the bar, has a serious accident due to overdrinking and must go to a mechanic to repair his car. All of these are market activities and count toward GDP. One can easily judge that the life of the couple is much better than the life of the lonely young man, but GDP would rate the young man above the couple. So GDP is neither the ultimate objective nor a good measure of happiness and well-being.
The services provided by women at home are among the most valuable services, as they raise and prepare the future generations. But these activities are not marketed and therefore do not count toward GDP; the same services would be counted if provided in the marketplace. A high rate of economic growth might thus be a reflection of the conversion of home activities into market activities, and may not indicate any improvement in people's living standards.
Besides the false philosophy of taking GDP as the ultimate objective, the research question itself is often trivial. For example, you might have seen research papers like ‘Impact of Financial Development on Economic Growth’. But what is financial development? It is usually proxied by profits on financial assets, and these profits are already part of GDP. GDP includes all goods and services produced in an economy, and financial assets are part of that economy. So it is pointless to ask whether GDP will increase when profits on financial assets increase: it is not possible to increase financial development without a corresponding increase in overall GDP. The same holds for questions like ‘Energy consumption and economic growth’. Energy consumption is a component of GDP, and an increase in energy consumption must increase GDP. No open research question is addressed by this kind of research.
Third, there is the issue of the methodology for estimating these models. There are literally dozens of theories of economic growth, and hundreds of variables are candidates for explaining it. This is no surprise, because everything produced in an economy ultimately counts toward growth. The number of haircuts at a barber shop also counts toward economic growth, which suggests there would be no harm in developing a ‘haircut theory of economic growth’; most probably, a regression of GDP on haircuts would yield a very high correlation.
But when a variable of interest has so many determinants, any model that leaves out a subset of them is subject to serious omitted variable bias. Any model based on a single theory is inherently subject to this bias. You have to take care of all the determinants of Y, even if they are not part of your research question.
On the other hand, it is also not easy to avoid this omitted variable problem, because there are so many theories and models of growth, and a model encompassing all of their variables is rarely feasible to estimate. This illustrates the inherent difficulty of estimating a growth model: doing it with sensible procedures is not an easy job. Only a few people have attempted a growth model in a serious way, considering all important determinants of growth; one of these is Sala-i-Martin's study titled ‘I Just Ran Two Million Regressions’. The title itself illustrates the difficulty of estimating a growth model sensibly, i.e. you may need to run two million regressions to find valid determinants of growth.
The same difficulty arises in many other kinds of economic models, such as models of inflation, consumption, and other variables for which there are many competing theories. But academic journals keep accepting papers without any care for these considerations, and people keep lengthening their CVs by adding the names of many such papers.
In fact, having so many models for a variable of interest provides a unique opportunity for novel research, but such research may need more time and more effort. A note on selecting appropriate variables coming from different theories can be found in my earlier blogs.
Suppose there are three theories for a variable of interest; it is easy to produce a paper based on theory 1 alone. But sensible research should take the variables from theories 1, 2, and 3 simultaneously and come up with a final model.
In my previous blogs, I have explained how to do research in the presence of multiple theories by constructing a Generalized Unrestricted Model (GUM). However, sometimes it is not possible to construct the generalized model. In the next few blogs, I will explain how we can do research in an area where there are many models and constructing a GUM is not possible. Stay tuned.
The Covid-19 pandemic was a global disaster, and every nation was affected with a different level of intensity. Many large industries, such as aviation and tourism, faced huge losses. In such circumstances, it was not possible to fund a huge Covid support package out of tax revenue. Despite this, many governments, including some that had been running deficit budgets for decades, provided very large Covid relief packages. For example, the UK recorded its last budget surplus in 2002, and every budget since then has been in deficit. Despite this, the UK's Covid support package reached about 18% of its GDP. Tax revenue could never support such huge spending, so how did the UK find the money for the package? It printed money to provide the package.
This reminds me of an old debate: sovereign governments are authorized to print as much money as they need, so why do governments need to collect taxes, and why don't they cover all of their expenses just by printing money?
In fact, if governments print money arbitrarily, without regard to economic fundamentals, this may lead to hyperinflation. After the First World War, Germany chose to print money to meet its expenditures, and as a result the German mark lost its value.
A more recent example is that of Zimbabwe. Zimbabwe announced that it would print money to retire public debt, and as a result inflation in Zimbabwe jumped to over one million percent within the next year. Within a couple of years, Zimbabwe had to abandon its currency. Therefore, currency printing must be done with extreme care and with careful analysis of economic fundamentals.
But what exactly are these fundamentals? How much money can be created without fear of inflation? This is widely misunderstood by academics and by policy makers.
There is a very clear difference between the practices adopted by the advanced economies and the emerging economies. Most of the advanced economies used their central banks to create money for their Covid support programs, but most of the developing nations remained reluctant to do so. So the question is: what is the limit of money printing without invoking inflation?
There is an emerging heterodox school of thought called Modern Monetary Theory, which has an entirely different perspective on the nature of money. I am not going into the theories and solutions put forward by Modern Monetary Theory; my analysis lies entirely within the framework of conventional economics. My observation is that the conventional wisdom on the money-inflation relationship is very badly misunderstood by the profession.
Let us start with the very basic Quantity Theory of Money (QTM). The QTM is described by the equation MV = PY, where
M represents money supply
V represents velocity of money
P represents the price level
and Y denotes the aggregate GDP
This equation is an identity, which is bound to hold.
Usually V is assumed to be fixed, and in the short run Y is also assumed to be fixed; then, if M is increased, P must also increase so that the equality holds. The equation therefore says that if the money supply in an economy is increased, the price level P will also increase. But this conclusion rests on two assumptions: the constancy of the velocity of money and the constancy of aggregate economic activity. Suppose new money is printed and is used to create entirely new economic activity, so that Y increases; in this case the equality may hold without any increase in the price level. The QTM does not predict a necessary rise in prices.
This simple analysis indicates that money can be created for new economic activity without fear of inflation. There are countries that opted to print money for new activities and did so successfully, without inflation.
Similarly, in times of economic recession people tend to consume less, so V decreases. This decrease in V may lead to a decrease in Y or P, i.e. to a recession or a deflation, and both are considered undesirable.
Alternatively, governments may choose to increase M so that the downward pressure on P and Y is offset. In this case, money creation need not be inflationary.
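The arithmetic behind these two cases can be written out in a few lines. The numbers in the sketch below are purely hypothetical and are chosen only to illustrate the identity MV = PY under the scenarios just described.

# The quantity equation M*V = P*Y, rearranged for the price level.
def implied_price_level(M, V, Y):
    return M * V / Y

M0, V0, Y0 = 100.0, 4.0, 400.0
P0 = implied_price_level(M0, V0, Y0)                 # baseline: P = 1.0

# Case 1: M rises 10% with V and Y fixed, so P must rise 10%
P1 = implied_price_level(1.10 * M0, V0, Y0)          # P = 1.1

# Case 2: the new money finances new activity, so Y also rises 10% and P is unchanged
P2 = implied_price_level(1.10 * M0, V0, 1.10 * Y0)   # P = 1.0

# Case 3: recession, V falls 10%; raising M offsets the fall so P*Y does not shrink
P3 = implied_price_level(M0 / 0.90, 0.90 * V0, Y0)   # P = 1.0

print(P0, P1, P2, P3)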
This is also evident from the behavior of a large number of developed and some developing nations.
For example, the UK ran a budget deficit of about 17% of GDP in 2020, because of huge Covid-related spending and the loss of productivity in the first half of 2020. The UK financed this deficit by using its central bank. During 2020, the money supply in the UK increased by about 12%, yet inflation actually fell from 1.7% to 0.4%. Germany spent about 35% of GDP on Covid-related measures, leading to a huge increase in public debt and the budget deficit, but inflation in Germany did not go out of control. The money supply M2 increased by about 20% in Canada during 2020 without sparking high inflation.
The international financial institutions such as the IMF have observed that, at least during economic recessions, central bank borrowing and other kinds of monetary expansion do not bring inflation. Despite this, these institutions use their influence on the developing nations that are obliged to them to enforce policies that only add to their miseries.
What is needed at this time is a deeper analysis of the relationship between central bank borrowing and inflation. Do not print money in an uncontrolled manner, but do learn from the countries that printed money successfully without inflation, and try to follow them. Ultimately, the government's surplus is the people's deficit, and vice versa.
Mainstream monetary economics is filled with contradictions, logical inconsistencies, missed and messed-up normative implications, and inconsistencies with the data. There exist heterodox theories with a better match to the historical data, but these theories are often undermined and ignored. It is in fact difficult to find much that is logical and valid in classical monetary economics, and despite a clear empirical failure it is still widely believed, which is quite surprising.
Thomas Tooke is perhaps the first person to produce a book on monetary economics. In 1857, he wrote a book titled ‘History of Prices and of the State of the Circulation during the Years 1793–1856’. He is also a pioneer of the Banking School theory. This theory predicts that higher interest rates should be associated with higher price levels, and the logic for this view is very simple: the interest rate is part of the cost of production for firms. The higher the interest rate, the higher the cost of production, leading to higher prices. This is the oldest theory on the relationship between the interest rate and inflation.
However, mainstream economics adopted an opposite theory, known as the demand channel of monetary transmission. This says that if the interest rate increases, people will reduce spending, aggregate demand will fall, and prices will come down. This view was adopted at least as early as the 1890s and remains popular to date. The inflation targeting framework, the most popular framework for designing monetary policy today, is also based on this hypothesis.
Historical data in every time period have provided evidence against the demand channel. The most popular of the early pieces of evidence is the finding of Gibson. Gibson (1923) analyzed data on interest rates and prices for the United Kingdom over about 200 years and found that high interest rates are associated with higher prices, which matches Tooke's view and is supported by the oldest theory in monetary economics.
Gibson's findings were so impressive that Keynes recognized them as ‘one of the most completely established empirical facts in the whole field of quantitative economics’. However, Keynes termed the finding the ‘Gibson paradox’, indicating the absence of any theory to explain the observation. Given the existence of Tooke's Banking School theory, this labeling was erroneous. Nevertheless, Keynes' recognition was strong support for the idea that the interest rate and inflation are positively associated, which is quite opposite to the logical foundations of the inflation targeting framework.
History went on, and the empirical evidence supporting Tooke's view kept being ignored by labeling it a paradox. In the 1970s, supply-side economics was reinvented, and people discussed the possibility of a cost channel of monetary transmission. This was strong theoretical support for a positive association between the interest rate and inflation.
In 1992, Sims produced his seminal paper in which he found that the impulse response of inflation to changes in the interest rate is positive. Despite the stature of Sims, who later won the Nobel Prize, his finding was labeled the ‘price puzzle’, to indicate the absence of a theoretical underpinning for the observation. This was a denial of Tooke's theory and of the cost channel.
Brazil reduced its policy rate from 14% to 2% over the three years starting in 2017. Such a drastic cut in the interest rate should have sent inflation skyrocketing if the widely believed demand channel were valid, but the opposite happened: inflation in Brazil, which had been about 10% around 2017, is now below 6%.
The responses to the Global Financial Crisis and to Covid-19 also mark the failure of classical monetary theory. All major economies responded to the pandemic by reducing interest rates, and inflation also fell. Despite this failure of monetary theory, international financial institutions such as the IMF continue to advocate inflation targeting, which is quite strange.
Besides the contradictions with the empirical evidence, there are logical inconsistencies and messed-up and missed normative implications. Assume for a while that the demand channel is valid, i.e. increasing the interest rate reduces inflation. If so, this can happen only through luxuries, because the demand for necessities cannot be reduced significantly. Therefore, if any reduction in the aggregate price level occurs, it must be driven by the prices of luxuries. A rise in the interest rate would then improve the purchasing power of the consumers of luxuries while being ineffective at improving the prices of necessities. There are very obvious normative implications here, but conventional monetary economics never discusses the normative implications of monetary policy. That is the missed normative implication.
Assume again that the traditional demand channel exists and that an increase in the interest rate reduces prices. The demand channel also implies that a higher interest rate leads to higher unemployment. Therefore, the cost of price stability is borne by those who lose their jobs, and it is well known that those most at risk of losing their jobs are the poorest people. Price stability thus comes at the cost of the most vulnerable cohort of society, another very serious normative implication. But traditional monetary economics totally ignores the normative implications of monetary policy.
It is also clear that any real effect of inflation on the economy comes from relative price movements. If the prices of all goods and services increase at the same rate, no real variable is affected, an implication known as monetary neutrality. Contrary to monetary neutrality, the Phillips curve assumes that inflation affects employment, and this happens because wages and commodity prices change at different rates. This means that focusing on aggregate inflation is meaningless; one needs to look at the relative movement of the sub-indices of the consumer price index. But monetary policy, especially the inflation targeting framework, explicitly focuses on the aggregate price level without taking any care of relative price movements. There is no explanation for this in the literature.
In short, if you try to look into the theoretical underpinnings of monetary policy, you will find them to be very weak. If you look at the empirical data, the data show the invalidity of the underlying hypotheses. If you look into the normative implications, you will find many that are practically ignored. Therefore, the textbooks on monetary economics need a rewrite, and an alternative monetary theory needs to be developed, based on empirical data rather than on hypothetical theorizing.
In my previous blogs, I have explained that, following the general-to-specific methodology, one can choose between theoretical models to find a model which is compatible with the data. Here is an example showing the step-by-step procedure of the general-to-simple methodology.
At the end of this blog, you will find data on three variables for South Korea: (i) household consumption, (ii) GDP, and (iii) inflation. The data set is retrieved from the WDI.
Before starting the modeling, it is very useful to plot the data series. We have three series; two of them are on the same scale and can be plotted together. The third series, inflation, is in percentage form and, if plotted with the other two, would not be visible. The graph of the two series is as follows.
You can see that the gap between income and consumption appears to diverge over time. This is a natural phenomenon: suppose a person has an income of 1,000 and consumes 70% of it; the difference between consumption and income is then 300. If income goes up to 10,000 and the MPC stays the same, the difference between the two variables widens to 3,000. This widening gap is visible in the graph.
However, the widening gap creates a problem for OLS. The residuals at the beginning of the sample would have a smaller variance and those at the end a larger variance, i.e. there will be heteroskedasticity. In the presence of heteroskedasticity, OLS is no longer efficient.
The graphs also show non-linearity: the two series appear to behave like exponential series. A solution to both problems is to use the log transform. The difference between the log transforms of two series is roughly equal to the percentage difference, and if the MPC remains the same, the gap between the two series is smoothed.
I have taken the log transform and plotted the series again; the graph is as follows.
You can see that the gap between the log transforms of the two series is smoother than in the previous graph. The gap is still widening, but much more gently, which indicates a decline in the MPC over time. In any case, the two graphs indicate that the log transform is the better starting point for building the model.
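For readers who want to reproduce these plots in Python, a rough sketch is given below. The file name korea_wdi.csv and the column names are assumptions; the consumption and GDP series would first have to be downloaded from the WDI and arranged in this form.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file layout: one row per year, columns 'Consumption' and 'GDP' in current LCU
data = pd.read_csv("korea_wdi.csv", index_col="year")
data["LCons"] = np.log(data["Consumption"])
data["LGDP"] = np.log(data["GDP"])

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
data[["Consumption", "GDP"]].plot(ax=axes[0], title="Levels: the gap widens over time")
data[["LCons", "LGDP"]].plot(ax=axes[1], title="Logs: a much smoother gap")
plt.tight_layout()
plt.show()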
I am starting with an ARDL model of the following form:
LCt = a + b1 LCt-1 + b2 LCt-2 + d0 LYt + d1 LYt-1 + d2 LYt-2 + et    (Eq 1)
where LCt indicates log consumption and LYt indicates log income (log GDP).
The estimated equation, Eq (2), is as follows.
The equation has a very high R-squared, but a high R-squared is no surprise in time series; it turns out to be high even for unrelated series. The thing to note is sigma, the standard deviation of the residuals, which indicates that the average size of the error is 0.0271. Before we proceed further, we want to make sure that the estimated model does not suffer from failures of the underlying assumptions. We tested the model for normality, autocorrelation and heteroskedasticity, and the results are as follows.
The autocorrelation (AR) test has the null hypothesis of no autocorrelation, and the p-value for the AR test is above 5%, indicating that the null is not rejected, though it survives by a narrow margin. The normality test, with the null of normality, and the heteroskedasticity test, with the null of homoskedasticity, also indicate that the assumptions are valid.
We also want to ensure that the model is good at prediction, because the ultimate goal of an econometric model is to predict the future. The problem is that, for real-time forecasting, we would have to wait years to see whether the model can predict. One solution is to leave some observations out of the estimation sample and then see how well the model predicts them.
The output indicates that the two tests for prediction have p-values much greater than 5%. The null hypothesis for the Forecast Chi-square test is that the error variance is the same in the sample period and the forecast period, and this hypothesis is not rejected. Similarly, the null hypothesis for the Chow test is that the parameters are the same in the sample period and the forecast period, and this hypothesis is also not rejected.
All the diagnostics again show satisfactory results.
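The original estimates were produced with dedicated econometric software, but the same exercise can be sketched in Python with statsmodels. Everything below is a rough sketch: it reuses the hypothetical data frame from the plotting snippet above, holds out the last four years as an illustrative forecast period, and uses Breusch-Godfrey, Jarque-Bera and Breusch-Pagan tests in place of the exact tests reported here.

import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_breuschpagan
from statsmodels.stats.stattools import jarque_bera

df = data.copy()
for lag in (1, 2):
    df[f"LCons_{lag}"] = df["LCons"].shift(lag)
    df[f"LGDP_{lag}"] = df["LGDP"].shift(lag)
df = df.dropna()

train, test = df.iloc[:-4], df.iloc[-4:]   # keep the last 4 observations for forecasting
eq2 = smf.ols("LCons ~ LCons_1 + LCons_2 + LGDP + LGDP_1 + LGDP_2", data=train).fit()

print(eq2.summary())
print("sigma:", eq2.mse_resid ** 0.5)                                    # residual standard deviation
print("Jarque-Bera (normality):", jarque_bera(eq2.resid)[:2])
print("Breusch-Godfrey (autocorrelation):", acorr_breusch_godfrey(eq2, nlags=2)[:2])
print("Breusch-Pagan (heteroskedasticity):", het_breuschpagan(eq2.resid, eq2.model.exog)[:2])

# Crude out-of-sample check: forecast errors for the held-out years
print(eq2.predict(test) - test["LCons"])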
Now let us look back at the output of Eq (2). It shows that the second-lag variables, Lconsumption_2 and LGDP_2, are insignificant. This means that, keeping Lconsumption_2 in the model, you can exclude LGDP_2, and vice versa. But to exclude both variables, you need to test the significance of the two variables simultaneously; sometimes two variables are individually insignificant but become significant when taken together, usually because of multicollinearity. We therefore test the joint significance of the two second-lag variables, i.e. H0: b2 = d2 = 0.
The results of the test are
F(2,48) = 2.1631 [0.1260]
The results indicate that the hypothesis is not rejected; therefore, we can set the coefficients of the corresponding variables to zero, and the model becomes
LCt = a + b1 LCt-1 + d0 LYt + d1 LYt-1 + et    (M2)
The model M2 was estimated, giving Eq (3); the results are as follows.
The results show that the diagnostic tests for the newly estimated model are all OK, and the forecast performance of the new model is not affected by excluding the two variables. If you compare the sigma for Eq (2) and Eq (3), you will see a difference only at the fourth decimal place. This means the size of the model is reduced without paying any cost in terms of predictive power.
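In the statsmodels sketch above, the joint exclusion restriction and the re-estimation of the reduced model might look roughly as follows; the variable names are the assumed ones from the earlier snippet, so the numbers will not match the output reported here exactly.

# Joint test of the two second lags, then the reduced model (Eq 3)
joint_test = eq2.f_test("LCons_2 = 0, LGDP_2 = 0")
print(joint_test)                      # F statistic with 2 numerator degrees of freedom

eq3 = smf.ols("LCons ~ LCons_1 + LGDP + LGDP_1", data=train).fit()
print(eq3.summary())
print("sigma:", eq3.mse_resid ** 0.5)  # should be very close to the sigma of eq2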
Now all the variables in the model are significant except the intercept, for which the p-value is 0.178. This means the regression does not support an intercept, so we can reduce the model further by excluding it. This time we do not need to test a joint restriction because we want to exclude only one variable. After excluding the intercept, the model becomes
LCt = b1 LCt-1 + d0 LYt + d1 LYt-1 + et
The output indicates that all the diagnostics are OK. All the variables are significant, so no further variable can be excluded.
Now we can impose some linear restrictions instead of exclusion restrictions. For example, if we want to test whether we can take the difference of consumption and income, we need to test the following restriction:
R3: b1 = 1 and d0 + d1 = 0
And if we want to test the restriction for the error correction model, we have to test
R4: (b1 - 1) = -(d0 + d1)
Apparently the two restrictions seem valid, because the estimated value of b1 is close to 1 and the estimated values of d0 and d1 sum approximately to 0. We have the choice of testing R3 or R4; we test restriction R3 first. The results are as follows.
This means the error correction model can be estimated for the data under consideration.
For the error correction model, one needs to estimate a static regression (without lags) and use the residuals of that equation as the error correction term. Estimating the static regression yields the following output (Eq 5).
The estimates of this equation represent the long-run relationship between the two variables: the long-run elasticity of consumption with respect to income is 0.93.
We then have to estimate an error correction regression of the following form:
DLCt = d0 DLYt + F ECMt-1 + et    (Eq 6)
where ECMt-1 is the lagged residual from the static regression and D denotes the first difference. The intercept does not enter the error correction regression. The estimates are as follows.
This is the parsimonious model for consumption and income. Eq (5) represents the long-run relationship between the two variables, and Eq (6) describes the short-run dynamics.
The final model has only two parameters, whereas Eq (1), which we started with, contains 6 parameters. The sigma for Eq (6) and Eq (2) are roughly the same, which tells us that the large model we started with has the same predictive power as the final model. The diagnostic tests are all OK, which means the final model is statistically adequate, in the sense that the assumptions of the model are not contradicted by the data.
The final model is an error correction model, which contains information for both short run and long run. The short run information is present in equation (6), whereas the long run information is implicit in the error correction term and it is available in the static Eq (5).
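Continuing the hypothetical statsmodels sketch, the static regression (Eq 5) and the error correction regression (Eq 6) could be estimated roughly as follows; again, the data frame and column names are assumptions carried over from the earlier snippets.

# Static long-run regression (Eq 5) and error correction regression (Eq 6)
static = smf.ols("LCons ~ LGDP", data=df).fit()
df["ecm_1"] = static.resid.shift(1)        # lagged error correction term
df["dLCons"] = df["LCons"].diff()
df["dLGDP"] = df["LGDP"].diff()

# "- 1" in the formula drops the intercept, which does not enter the ECM
eq6 = smf.ols("dLCons ~ dLGDP + ecm_1 - 1", data=df.dropna()).fit()

print(static.params)   # long-run elasticity of consumption with respect to income
print(eq6.params)      # short-run elasticity and the adjustment coefficient F
print("sigma:", eq6.mse_resid ** 0.5)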
The same methodology can be adopted in more complex situations: the researcher starts from a general model and reduces it successively until the most parsimonious model that is statistically adequate is reached.
Consumption: Households and NPISHs Final consumption expenditure (current LCU)
On hearing the word ARDL, the first thing that comes to mind is the bounds testing approach introduced by Pesaran and Shin (1999). The Pesaran and Shin approach is an impressive use of the ARDL, but the term ARDL is much older, and the ARDL model has many other uses as well. In fact, the equation used by Pesaran and Shin is a restricted version of the ARDL; the unrestricted version was introduced by Sargan (1964) and popularized by David F. Hendry and his coauthors in several papers, the most important of which is the paper usually known as DHSY. We will come to the details of DHSY later. Let me first introduce what the ARDL model is and what its advantages are.
What is the ARDL model?
The ARDL model is an a-theoretic model for the relationship between two time series. Suppose we want to see the effect of a time series variable Xt on another variable Yt. The ARDL model for this purpose is of the form
Yt = a + b1Yt-1 + b2Yt-2 + ... + bjYt-j + d0Xt + d1Xt-1 + ... + dkXt-k + et
The same model can be written more compactly by summing over the lags of Yt and Xt.
This means that, in layman's language, the dependent variable is regressed on its own lags, on the independent variable, and on the lags of the independent variable. The above ARDL model can be termed an ARDL(j, k) model, referring to the numbers of lags j and k in the model.
The model itself is written without any theoretical considerations. However, a large number of theoretical models are embedded inside it, and one can derive an appropriate theoretical model by testing and imposing restrictions on it.
To get a more concrete idea, let us consider the relationship between consumption and income. To simplify further, let j = k = 1, so that the ARDL(1,1) model for the relationship between consumption and income can be written as
Model 1: Ct=a+b1Ct-1+d0Yt+d1Yt-1+et
Here C denotes consumption and Y denotes income; a, b1, d0 and d1 denote the regression coefficients and et denotes the error term. So far, no theory has been used to develop this model, and the regression coefficients have no theoretical interpretation. However, the model can be used to select an appropriate theoretical model for consumption.
Suppose we have estimated the above model and obtained the regression coefficients. We can test any one coefficient, or several coefficients together, for various kinds of restrictions. Suppose we test the restriction that
R1: H0: b1 = d1 = 0
Suppose that testing this restriction on actual data implies that the restriction is valid; this means we can exclude the corresponding variables from the model. Excluding the variables, the model becomes
Model 2: Ct=a+ d0Yt+et
Model 2 is actually the Keynesian consumption function (also called the absolute income hypothesis), which says that current consumption depends on current income only. The coefficient of income in this equation is the marginal propensity to consume, and Keynes predicted that this coefficient would be between 0 and 1, implying that individuals consume part of their income and save the rest for the future.
Suppose instead that the data did not support the restriction R1, but the following restriction is valid:
R2: H0: d1=0
This means model 1 would become
Model 3: Ct=a+b1Ct-1+d0Yt+et
This means that current consumption depends on current income and past consumption. This is called the habit persistence model; past consumption here is a proxy for habit. The model says that what was consumed in the past affects current consumption, which is evident from human behavior.
Suppose instead that the data did not support the restrictions considered above, but the following restriction is valid:
R3: H0: b1=0
This means model 1 would become
Model 4: Ct=a+ d0Yt+d1Yt-1+et
This means that current consumption depends on current income and past income. This is called the partial adjustment model. According to the Keynesian consumption function, consumption should depend only on current income, but the partial adjustment model says that it takes some time to adjust to a new level of income; therefore, consumption depends partly on current income and partly on past income.
In a similar way, one can derive many other models from model 1 which represent different theories. The details of the models that can be derived from model 1 can be found in Charemza and Deadman (1997), ‘New Directions in Econometric Practice…’.
It can also be shown that difference-form models are derivable from model 1. Consider the following restriction:
R5: H0: a = 0, b1 = 1, d0 + d1 = 0
If this restriction is valid, model 1 will become
Ct = Ct-1 + d0Yt - d0Yt-1 + et
This model can be re-written as
Model 5: DCt=d0DYt+et
This shows that difference-form models can also be derived from model 1 under certain restrictions.
Further elaboration shows that error correction models can also be derived from model 1.
Consider model 1 again and subtract Ct-1 from both sides; we get
Ct- Ct-1=a+b1Ct-1 -Ct-1+d0Yt+d1Yt-1+et
Adding and subtracting d0Yt-1 on the right hand side we get
DCt=a+(b1-1)Ct-1+d0Yt+d1Yt-1 +d0Yt-1 -d0Yt-1 +et
DCt=a+(b1-1)Ct-1+d0DYt+d1Yt-1 +d0Yt-1 +et
This equation contains an error correction mechanism if
R6: (b1-1)= – (d1+d0)
Writing F for this common value, i.e. F = (b1-1) = -(d1+d0),
The equation will reduce to
DCt=a+F(Ct-1-Yt-1)+ d0DYt +et
This is our well-known error correction model, and it can be derived from model 1 if R6 is valid.
Therefore, the existence of an error correction mechanism can also be tested from model 1: the mechanism exists if the restriction R6 holds.
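A minimal sketch of how these restrictions might be tested in practice with Python and statsmodels is given below. The consumption and income series here are simulated placeholders, so the code runs on its own; with real data you would replace them with the actual (log) series.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder series so the sketch is self-contained; replace with real data.
rng = np.random.default_rng(1)
income = pd.Series(np.cumsum(0.03 + 0.02 * rng.standard_normal(60)))
cons = 0.9 * income + 0.05 * rng.standard_normal(60)

frame = pd.DataFrame({"C": cons, "Y": income})
frame["C_1"] = frame["C"].shift(1)
frame["Y_1"] = frame["Y"].shift(1)
model1 = smf.ols("C ~ C_1 + Y + Y_1", data=frame.dropna()).fit()   # the ARDL(1,1), model 1

print(model1.f_test("C_1 = 0, Y_1 = 0"))   # R1: leads to model 2 (Keynesian consumption)
print(model1.f_test("Y_1 = 0"))            # R2: leads to model 3 (habit persistence)
print(model1.f_test("C_1 = 0"))            # R3: leads to model 4 (partial adjustment)
print(model1.f_test("C_1 + Y + Y_1 = 1"))  # R6: the error correction representation

Whichever restrictions are not rejected indicate which of the theoretical models the data are willing to support.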
As we have discussed, a number of theoretical models can be derived from model 1 by testing certain restrictions. We can start from model 1, test different restrictions, impose the restrictions found to be valid, and discard those found to be invalid. This provides a natural way of selecting among the various theoretical models.
When we say a theoretical model, we mean a model that makes economic sense; models 2 to 6, for example, all make economic sense. So how do we decide between these models? The problem can be solved if we start with an ARDL model and impose only those restrictions which are permitted by the data.
The famous DHSY paper recommends a methodology like this: start with a large model which encompasses the various theoretical models, and then simplify it by testing restrictions.
In another blog I have argued that if there are different theories for a certain variable, the research must be comparative. This short blog gives a brief outline of how we can do this. In practice, one needs to take larger ARDL structures, and the number of models that can be derived from the parent model is then also larger.
Consider a hypothetical situation: a researcher is given the research question of comparing the mathematical ability of male and female students of grade 5. The researcher collected data on 300 female students and 300 male students of grade 5 and administered a test of mathematical questions. The average score for the female students was 80% and the average score for the male students was 50%; the difference was statistically significant, and the researcher therefore concluded that female students have better mathematical aptitude.
The findings seem strong and impressive, but let me add the information that the male students were chosen from a far-off village with untrained educational staff and a lack of educational facilities, while the female students were chosen from an elite school in a metropolitan city where the best teachers of the city serve. What should the conclusion be now? It can be argued that the difference does not actually come from gender; it comes from the school type.
The researcher carrying out the project says: ‘Look, my research assignment was only to investigate the difference due to gender; school type is not the question I am interested in, therefore I have nothing to do with it.’
Do you think the researcher's argument is valid and the findings should be considered reliable? The answer is obvious: the findings are not reliable, and the school type creates a serious bias. The researcher must compare students from the same school type. This implies that you have to take care of variables that are not mentioned in your research question if they are determinants of your dependent variable.
Now let us apply the same logic to econometric modeling. Suppose we have the task of analyzing the impact of financial development on economic growth. We run a regression of GDP growth on a proxy of financial development, obtain the regression output, and present it as the impact of financial development on economic growth. Is this reliable research?
This research is deficient in just the same way as our example of gender and mathematical ability. The research is not reliable if ceteris paribus does not hold: the other variables which may affect the outcome variable should remain the same.
But in real life it is often very difficult to keep all other variables the same; the economy continuously evolves, and so do the economic variables. The other solution is to take the other variables into account when running the regression, i.e. the other determinants of your dependent variable should enter the regression as control variables. Suppose you want to check the effect of X1 on Y using the model Y = a + bX1 + e, and other studies indicate that another model exists for Y, namely Y = c + dX2 + e. Then you cannot run the first model while ignoring the second; if you do, the results will be biased in the same way as in our example of mathematical ability. You have to use the variables of model 2 as control variables, even if you are not interested in their coefficients. Therefore, the estimated model should be of the form Y = a + bX1 + cX2 + e.
Taking the control variables into account is possible when there are only a few models. The seminal study of Davidson, Hendry, Srba and Yeo, ‘Econometric modelling of the aggregate time-series relationship between ….’ (often referred to as DHSY), summarizes the way to build a model in such a situation. But it often happens that a very large number of models exists for one variable. For example, there is a very large number of models for growth: a survey titled ‘Growth Econometrics’ by Durlauf and coauthors lists hundreds of models for growth used by researchers in their studies. Life becomes very complicated when you have so many models. Estimating a model with all the determinants of growth would be literally impossible for most countries using classical methodology, because growth data are usually available at annual or quarterly frequency and the number of predictors taken from all models collectively would exceed the number of observations. Time series data also have a dynamic structure, and taking lags of the variables makes things even more complicated. Therefore, classical econometric techniques often fail to work for such high-dimensional data.
Experts have developed sophisticated techniques for modeling in scenarios where the number of predictors becomes very large, such as Extreme Bounds Analysis, Weighted-Average Least Squares, and Autometrics. High-dimensional econometrics is a very interesting field of investigation in its own right. However, DHSY remains extremely useful for situations where there are several models for a variable based on different theories. The DHSY methodology is also called the LSE methodology, the general-to-specific methodology, or simply the G2S methodology.