Featured

Human Resource Development in Pakistan—an obsessive quest?

With a population of over 209 million, human capital is an indispensable asset for Pakistan. Besides a growing population, the country is experiencing a youth bulge, with the number of individuals entering the labor market over the coming years expanding faster than the total population. Hence, to create enough jobs for a growing population, improve the quality and productivity of those jobs, and enhance access to jobs and economic opportunities for youth, women and the disadvantaged, the government must take immediate steps to absorb the substantial number of young workers entering the labor force by investing in their education and providing quality skills that help them find or create new jobs, and consequently participate in the labor force and contribute to economic growth.

In addition, the country's labor market will need robust policy and strategic measures to meet the demand for technical human skills arising from increasing investment activity under CPEC, a core part of China's "One Belt One Road" initiative. In this short strategy paper, I provide an overview of HRD policy, gender and skill gaps, and future projections, along with the key strategic directions that Pakistan needs to adopt in order to benefit from its human capital resources for sustained economic growth. It also places particular emphasis on improved standards of teaching and learning at all levels of education, in particular at the tertiary level, to prepare the country for the forthcoming fourth industrial revolution.

The fragmented HRD policies in Pakistan

The country has highly fragmented and politicized HRD policies, as HRD in Pakistan is not properly addressed by either the public or the private (corporate) sector. Since independence, HRD has been the sole responsibility of the federal government, which has faced 37 years of military dictatorship. In the public sector, after the 18th Constitutional Amendment, HRD is dealt with at both the federal and provincial levels: (1) the Planning and Development Division at the federal level and the Planning and Development Departments at the provincial level; (2) the Ministry of Labour, Manpower and Overseas Pakistanis at the federal level; (3) the Labour and Human Resource Development Departments at the provincial level; and, among other bodies working on HRD, (4) the National Commission for Human Development (NCHD); (5) the National Vocational & Technical Education Commission (NAVTEC); (6) the Technical Education and Vocational Training Authorities (TEVTA) at the provincial level; and (7) the Higher Education Commission. However, still only 58% of Pakistan's population is literate, and the majority of its workers are in the informal sector (PBS, 2019-20). Similarly, there are about 188 educational institutions in Pakistan, of which only two appear in global competitiveness rankings. Pakistan is also among the worst performers in technical and vocational education and training (TVET). It should adopt an effective development and implementation strategy for TVET to harness its young potential, as high-performing economies such as China, Hong Kong and South Korea have done.

Likewise, the funding allocated to the social sector in the Five-Year Plans could not achieve the desired results, because of the limited focus on health and education, which have a comparatively higher payoff in terms of HRD (Mahmood, Akhtar, & Butt, 2015). Policies for HRD have been formulated, but the interdependence among these policies was ignored. The primary objective of the education policies pursued in Pakistan has been universal literacy, but this objective has not been achieved. Small allocations for the social sectors in the past resulted in low human development, and a high population growth rate together with rising unemployment has produced social tension. Access of the poor to education, health, employment and other social services was not ensured. This has led to the deterioration of human development in Pakistan: the Human Development Index (HDI) ranks Pakistan 154th out of 189 countries (UNDP, 2020), and the situation is getting worse, as less than 2.5% of the budget is invested in the education and vocational training sector, and out of this a negligible share is allocated to HRD activities (Aftab 2007, World Bank 2013).

Working Labor and Youth Bulge—Labor productivity in crises

The country's working-age population, including youth, is growing, while dependency ratios are falling, demographic changes that tend to be favorable for growth (as shown in Figure 1). The labor force has grown more rapidly (by an average of 2.9 percent per year) than the working-age population, indicating an opportune condition in Pakistan's labor market (Figure 2). But Pakistan is not benefiting from these promising trends, amid the underutilization of human capital in the labor market. Total employment growth lags behind labor force growth, indicating that a large number of people are underutilized or searching for jobs. The growth rates of non-agricultural employment and paid employment are only 1 and 4 percentage points greater than that of total employment. This also suggests slow job creation in the non-agricultural sector and in paid employment, which are typically the job-creating sectors in dynamic economies.

In Bangladesh, by comparison, where the ratio of the working-age to non-working-age population is also growing, the labor market shows far more rapid growth in non-agricultural wage employment than in total employment. In addition, real wage growth among paid employees in Pakistan has been only 1.5 percent a year, which suggests modest improvement in the quality of jobs and in labor productivity. Female employment presents significant growth per year, in large part due to increases in labor market activity in rural areas.

Additionally, the country's labor productivity needs to grow faster to increase its competitiveness. Labor productivity, measured as GDP per person employed, is stagnant: its growth rate is the lowest in the region and far below the average of lower-middle-income countries (Figure 4, left). In comparator economies such as India and China, average labor productivity growth rates over 2003-14 were 6.3 and 9.2 percent, respectively. In part due to low and stagnant labor productivity, Pakistan's competitiveness in the export market is lagging. While exports have driven economic growth and poverty reduction in the region, Pakistan's exports of goods and services in 2015 were at about the same level as in 2004 (Figure 3).

The low-skills trap and gender inequality

Pakistan is stuck in a low-skills trap, where employers settle for the kind of low skills readily available in the market. Its failure to break away from its dependence on low-skill, low-technology manufactured exports stems from its low level of human resource development (Amjad, 2005; United Nations Development Programme, 2017). In 2018, the illiteracy rate of the working-age population (10 years or older) was 48.2 per cent (Labour Force Survey, 2018). The Government has identified nine priority Special Economic Zones (SEZs) under the China-Pakistan Economic Corridor (CPEC) and anticipates job creation exceeding 800,000 (Government of Pakistan, 2018, 2019); however, there is no proper investment or planning for how to meet the demand for skills needed under these development projects. The statistics on employment of skilled workers by occupation reinforce some of the earlier findings and highlight some specific challenges as well. Of all females employed, most were agricultural workers in both 2015 and 2018, and projections for 2021 indicate that 46.9 per cent of all working women would be agricultural workers (see Figure 4). Moreover, it is projected that 20.1 per cent of both men and women in employment in 2021 will occupy managerial roles. The data also show that there has been high employment growth between 2015 and 2018 for specific types of skilled workers, namely plant/machine operators (14 per cent) and technicians and associate professionals (10.6 per cent), and modest growth of 5 per cent for professionals and for craft and trade-related workers.

Recommendations

Pakistan needs to enhance investment, with particular emphasis on improved standards of teaching and learning at all levels of education, especially at the tertiary level, if it is to leapfrog into the era of the fourth industrial revolution. Similarly, more emphasis is needed on HRD and entrepreneurship, through investment in cultivating entrepreneurship within higher education institutions. The government, the private sector and international development partners must facilitate start-ups, develop new skill development centers, provide grants and loans, and act as intermediaries between students and higher education institutions.

Pakistan should also build closer collaboration between the various systems (primary, secondary, higher, vocational and skills training) to prepare youth for the future world of work. Greater linkages between the education system and industry are also needed to address information failures in the labour market and to improve links between employers and skilled workers. In addition to skills development, policies should address labor market imperfections and challenges.

Population control and the reduction of gender disparity are needed to enable women to participate in the labor market. Donors, industries and higher education institutions, along with international partners, should create inclusive programs on population management and women's empowerment. Beyond this, the government can introduce vocational programs and awareness campaigns about women's legal rights at work, about how to access markets for quality employment, and about forming the interpersonal relations that encourage greater empowerment.

References

Aftab, S. 2007. An exploratory study of human resource development: A study focusing on the organizations of Islamabad, Rawalpindi and Wah Region in Pakistan. International Review of Business Research Papers 3, no. 3: 36-55

Amjad, R. 2005 ‘Skills and Competitiveness: Can Pakistan break out of the low-level skills trap?’, The Pakistan Development Review, vol. 44, no. 4, part I, Winter 2005, pp. 387–409.

Bossavie, Laurent, Upasana Khadka, and Victoria Strokova. 2018. ”Pakistan: A Labor Market Overview.” A Background Paper for the Jobs Diagnostic. World Bank.

Farole, Thomas and Yoonyoung Cho. 2017. “Bangladesh Jobs Diagnostic”, Jobs Series: No. 9. World Bank, Washington, DC.

Government of Pakistan, Pakistan Economic Survey 2018–19, Finance Division, Government of Pakistan, Islamabad, 2018.

Government of Pakistan, ‘SME Policy 2019’ (draft report), SMEDA, Government of Pakistan, Islamabad, 2019

Labour Force Survey. 2018.  Annual Labour Force Survey. Government of Pakistan, Islamabad.

Mahmood, E., Akhtar, M. M. S., & Butt, I. H. (2015). A Critical Review of the Evolution of Higher Education in Pakistan. Journal of Educational Research, 18(2), 57

PBS. 2019-20. Labour Force Survey. Islamabad: Pakistan Bureau of Statistics.

United Nations Development Programme, HDI Pakistan Status, UNDP, Islamabad, 2020

United Nations Development Programme, Pakistan National Human Development Report: Unleashing the potential of a young Pakistan, UNDP, Islamabad, 2017.

United Nations Development Programme, ‘Industrial & Skill Opportunities Under CPEC’ (draft report), UNDP Pakistan, Islamabad 2018.

World Bank. 2020. The World Development Report 2019 The Changing Nature of Work. Washington, DC: World Bank.

Central Bank Borrowing and Inflation: between Myth and Reality

The Covid-19 pandemic was a global disaster, and every nation was affected with a different level of intensity. Many large industries, such as aviation and tourism, faced huge losses. In such circumstances, it was not possible to provide large Covid support packages using tax revenue alone. Despite this, many governments, including those that had been running deficit budgets for decades, provided very large Covid relief packages. For example, the UK recorded its last surplus budget in 2002; every budget since then has been in deficit. Despite this, the UK's Covid support package reached about 18% of GDP. Tax revenue could never support such huge spending, so how did the UK find the money for the package? It printed money to provide it.

This reminds me of an old debate: if sovereign governments are authorized to print as much money as they need, why do they need to collect taxes, and why don't they cover all of their expenses simply by printing money?

In fact, if governments print money arbitrarily, without regard to economic fundamentals, the result may be hyperinflation. After the First World War, Germany chose to print money to meet its war-related obligations and, as a result, the German mark lost its value.

A more recent example is that of Zimbabwe. Zimbabwe announced that it would print money to retire public debt and, as a result, inflation jumped to over one million percent within a year. Within a couple of years, Zimbabwe had to abandon its currency. Therefore, currency printing must be done with extreme care and with careful analysis of economic fundamentals.

But what exactly are these fundamentals? How much money can be created without fear of inflation? This point is widely misunderstood by academics and by policy makers.

There is a very clear difference between the practices adopted by the advanced economies and the emerging economies. Most of the advanced economies used their central banks to create money for their Covid support programs, but most developing nations remained reluctant to do so. So the question is: what is the limit of money printing without invoking inflation?

There is an emerging heterodox school of thought, Modern Monetary Theory, with an entirely different perspective on the nature of money. I am not going into the theories and solutions put forward by Modern Monetary Theory; my analysis lies entirely within the framework of conventional economics. My observation is that the conventional wisdom on the money-inflation relationship is very badly misunderstood by the profession.

Let's start with the very basic Quantity Theory of Money. The QTM is described by the equation

MV=PY

Where

M represents money supply

V represents velocity of money

P represents the price level

and Y denotes aggregate GDP.

This equation is an identity, which is bound to hold.

Usually V is assumed to be fixed, and it is assumed that in the short run Y is also fixed; then, if M is increased, P must also increase so that the equality holds. Therefore the equation is read as saying that if the money supply in an economy is increased, the price level P will also increase. But this conclusion rests on two assumptions: constancy of the velocity of money and constancy of aggregate economic activity. Suppose new money is printed and is used to create entirely new economic activity, so that Y increases. In this case, the equality may hold without an increase in the price level. The QTM does not predict a necessary rise in prices.
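
To make the point concrete, here is a stylized numerical illustration of the identity above (the numbers are purely illustrative, not actual data): suppose M = 100, V = 4 and Y = 200, so that P = MV/Y = 2. If the money supply grows by 10% to M = 110 and the new money finances entirely new output, so that Y also rises by 10% to 220, then P = (110 x 4)/220 = 2 and the price level is unchanged. With V stable, inflation arises only when M grows faster than the real activity it finances.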

This simple analysis indicates that money can be created for new activity without fear of inflation. There are countries that opted to print money for new activities and did so successfully, without inflation.

Similarly, at times of economic recession, people tend to consume less, and therefore V decreases. This decrease in V may lead to a decrease in Y or P, i.e. to a recession or a deflation. Both deflation and recession are considered undesirable.

Alternatively, governments may choose to increase M so that the downward pressure on P and Y is offset. In this case, money creation need not be inflationary.

This is also evident from the behavior of a large number of developed and some developing nations.

For example, the UK had a budget deficit amounting to about 17% of GDP in 2020, because of huge Covid-related spending and the loss of output in the first half of 2020. The UK financed this deficit by using its central bank. During 2020, the money supply in the UK increased by about 12%, yet inflation actually fell from 1.7% to 0.4%. Germany spent about 35% of GDP on Covid-related measures, leading to a large increase in public debt and the budget deficit, but inflation in Germany is not out of control. The money supply M2 increased by about 20% in Canada during 2020 without sparking high inflation.

International financial institutions such as the IMF have observed that, at least during economic recessions, central bank borrowing and other kinds of monetary expansion do not bring inflation. Despite this, these institutions use their influence over the developing nations that are obliged to them to enforce policies that only add to their miseries.

What is needed at this time is a deeper analysis of the relationship between central bank borrowing and inflation. Don't print money in an uncontrolled manner, but do learn lessons from the countries that printed successfully without inflation and try to follow them. Ultimately, a government surplus is the people's deficit, and vice versa.

Mainstream Monetary Economics: A Package of Logical Fallacies

Mainstream monetary economics is filled with contradictions, logical inconsistencies, missed and messed-up normative implications, and inconsistencies with the data. There exist heterodox theories with a better match to historical data, but these theories are often undermined and ignored. It is in fact difficult to find something logical and valid in classical monetary economics. Despite a clear empirical failure, monetary economics is still widely believed, which is quite surprising.

Thomas Tooke is perhaps the first person to have produced a book on monetary economics. In 1857 he completed a work titled 'History of Prices and of the State of the Circulation during the Years 1793-1856'. He is also a pioneer of the Banking School theory. This theory predicts that higher interest rates should be associated with higher price levels. The logic for this view is very simple: the interest rate is part of the cost of production for firms; the higher the interest rate, the higher the cost of production, leading to higher prices. This is the oldest theory on the relationship between the interest rate and inflation.

However, mainstream economics adopted the opposite theory, known as the demand channel of monetary transmission. This says that if the interest rate increases, people will reduce spending, aggregate demand will fall, and this will lead to a reduction in prices. This view was adopted at least as early as the 1890s and remains popular to date. The inflation targeting framework, which is the most popular framework for designing monetary policy today, is also based on this hypothesis.

Historical data in every period have provided evidence against the demand channel. The most popular of the early pieces of evidence against the demand channel is the finding of Gibson. Gibson (1923) analyzed data on interest rates and prices for the United Kingdom over about 200 years and found that high interest rates are associated with higher prices, something that matches Tooke's view and is supported by the oldest theory in monetary economics.

Gibson's findings were so impressive that Keynes recognized them as 'one of the most completely established empirical facts in the whole field of quantitative economics'. However, Keynes termed the finding the 'Gibson paradox', indicating the absence of any theory to explain the observation. Given the existence of Tooke's Banking School theory, this labeling was erroneous. Nevertheless, Keynes' recognition was strong support for the idea that the interest rate and inflation are positively associated. This is quite the opposite of the logical foundations of the inflation targeting framework.

History goes on, and the empirical evidence supporting Tooke's view kept being ignored by labeling it a paradox. In the 1970s there was a re-invention of supply-side economics, and people discussed the possibility of a cost channel of monetary transmission. This was strong theoretical support for a positive association between the interest rate and inflation.

In 1992, Sims produced his seminal paper in which he found that the impulse response of inflation to changes in the interest rate is positive. Despite the stature of Sims, who later won the prestigious Nobel Prize, his finding was labeled the 'price puzzle', to indicate the absence of a theoretical underpinning for the observation. This was a denial of Tooke's theory and of the cost channel.

Brazil reduced its policy rate from 14% to 2% over the three years starting in 2017. Such a drastic cut in the interest rate should have sent inflation skyrocketing if the widely believed demand channel were valid, but the opposite happened: inflation in Brazil, which was about 10% at the start of that period, is now below 6%.

Figure: Interest rate and inflation in Brazil, 2015-2021. From 2016 to 2020 the policy rate dropped from 14% to 2%, and inflation also declined.

The responses to the Global Financial Crisis and to Covid-19 also mark the failure of classical monetary theory. All major economies responded to the pandemic by reducing interest rates, and inflation also fell. Despite this failure of monetary theory, international financial institutions such as the IMF continue to advocate inflation targeting, which is quite strange.

Figure: Interest rate and inflation in the UK. In 2020 the interest rate was suddenly reduced from 0.7% to 0.1%. Mainstream wisdom predicts a rise in inflation after a reduction in the interest rate, but inflation also fell after the cut.

Besides the contradictions with the empirical evidence, there are logical inconsistencies and messed-up and missed normative implications. Assume for a while that the demand channel is valid, i.e. increasing the interest rate reduces inflation. If so, this can happen only through luxuries: the demand for necessities cannot be reduced significantly. Therefore, if any reduction in the aggregate price level occurs, it must be driven by the prices of luxuries. The rise in the interest rate will thus improve the purchasing power of the consumers of luxuries, while being ineffective in reducing the prices of necessities. There are very obvious normative implications here, but conventional monetary economics never discusses the normative implications of monetary policy. That is the missed normative implication.

Assume again that the traditional demand channel exists and that an increase in the interest rate reduces prices. The demand channel also implies that a higher interest rate leads to an increase in unemployment. Therefore, the cost of price stability is borne by those who lose their jobs, and it is well known that those most at risk of losing their jobs are the poorest people. Price stability therefore comes at the cost of the most vulnerable cohort of society, another very serious normative implication. But traditional monetary economics totally ignores the normative implications of monetary policy.

It is also clear that any real effect of inflation on the economy comes from relative price movements. If the prices of all goods and services increase at the same rate, no real variable is affected, an implication known as monetary neutrality. Contrary to monetary neutrality, the Phillips curve assumes that inflation affects employment, and this happens because of the differential between changes in wages and changes in commodity prices. This means that focusing on aggregate inflation is meaningless: one needs to look at the relative movements of the sub-indices of the consumer price index. But monetary policy, especially under the inflation targeting framework, explicitly focuses on the aggregate price level without taking any care of relative price movements. There is no explanation for this in the literature.

In short, if you try to look into the theoretical underpinnings of monetary policy, you will find them to be very weak. If you look at the empirical data, the data show the invalidity of the underlying hypotheses. If you look into the normative implications, you will find many that are practically ignored. Therefore, the textbooks on monetary economics need a rewrite, and an alternative monetary theory needs to be developed, one based on empirical data rather than on hypothetical theories.

Estimating long-run coefficients from an ARDL model

Whether we're working with time series data or panel data, most of the time we want to analyze both the long-run behavior and the short-run dynamics. An interesting and well-known model that enables such an approach is the Auto-Regressive Distributed Lag model, abbreviated ARDL. There are many variations on the form of the ARDL: re-parametrizations, conditional cointegration forms, or full cointegration equations derived from the ARDL. In this article we describe how to calculate the long-run coefficient of an ARDL model, for either time series or panel data.

Consider the basic Auto-Regressive Distributed Lag model with one exogenous variable, which is of the form:

yt = β0 + β1yt-1 + … + βpyt-p + α1xt + α2xt-1 + … + α(l+1)xt-l + ut

where y represents the dependent variable and p the autoregressive order of the ARDL, which is directly associated with y (the dependent variable). X is an exogenous explanatory variable which has l lags (a contemporaneous value of x is also included), and u is the residual term.

The present form of the ARDL is not actually a long-run form; in fact, it is more of a short-run model. Therefore, the actual long-run impact of x through the α coefficients must be computed taking into account the size and order of the autoregressive terms in y through the β coefficients. This leads to a situation where we want to weigh the cumulative impact of the α's, and the way to do so is with a long-run multiplier. Blackburne & Frank (2007) indicate that an approximation to this long-run multiplier involves a non-linear transformation of the estimated coefficients; such a transformation is given in the general form:

θ = (α1 + α2 + … + α(l+1)) / (1 - (β1 + β2 + … + βp))

This is the long-run multiplier of the variable x; note how the formula works. It uses the sum of the α coefficients associated with the independent variable (and its lags) divided by 1 minus the sum of the autoregressive β coefficients. The numerator corresponds to the long-run propensity of x towards y, which is simply the sum of the coefficients; it is interpreted as the long-run impact on y of a permanent one-unit change in x. The denominator weights this impact by the response of the autoregressive structure.

This means that if, for example, we have an ARDL(2,2), it refers to a model with two lags of the dependent variable and two lags of the independent variable (plus, of course, the contemporaneous value of x). This model is of the form:

yt = β0 + β1yt-1 + β2yt-2 + α1xt + α2xt-1 + α3xt-2 + ut

And the weighted long-run multiplier will be given in the form:

θ = (α1 + α2 + α3) / (1 - (β1 + β2))

Here α goes from 1 up to 3: α1 is the coefficient on the contemporaneous value of x, α2 the coefficient on lag 1, and α3 the coefficient on lag 2. Notice that we subtract the sum of the autoregressive parameters β from unity to weight the size of the cumulative impact of x.

Interpretation of the long-run coefficient goes as follows: if x in levels changes permanently by one unit, then the expected long-run change in y is given by the long-run coefficient.

Let’s put this together with an example in Stata.

Load up the database and generate a time identification variable with:

use https://www.stata-press.com/data/r16/auto
generate t = _n

Then tell Stata that you are working with time series data:

tsset t, yearly

Now let's estimate an ARDL(2,2) model using the variables price and weight, where price is the dependent variable and weight is the independent variable (both assumed to be stationary).

reg price L1.price L2.price weight L1.weight L2.weight

From here you can analyze a lot of things; for example, the long-run propensity is given by:

** Long-run propensity of x (weight)
display _b[weight] +_b[L1.weight]+_b[L2.weight]

And the long-run multiplier which we discussed can be calculated by:

** Long-run multiplier of x
display (_b[weight] +_b[L1.weight]+_b[L2.weight]) / (1-(_b[L1.price] + _b[L2.price]))

And from here, you can even estimate the long-run coefficient together with its statistical significance by using nlcom; this can be done with:

nlcom (_b[weight] +_b[L1.weight]+_b[L2.weight]) / (1-(_b[L1.price] + _b[L2.price]))

Notice that when weight increases by one unit, the expected long-run change in price is about 1.68 units, and the estimate is statistically significant at the 10% level of significance.
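
In the same spirit, the long-run propensity itself can be reported with a standard error and confidence interval; since it is a linear combination of coefficients, lincom is enough here (a small sketch reusing the regression above):

** Long-run propensity of x (weight) with a standard error
lincom weight + L1.weight + L2.weight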

You can extend this analysis to the well-known long-run and short-run dynamics of the Engle & Granger cointegration framework, where you compute the short-run coefficients in order to obtain the long-run coefficients; this will be covered in a future post.

An excellent video that helps to get this idea across can be found in Nyboe Tabor (2016).

Bibliography:

Blackburne, E. F. & Frank, M. W. (2007). Estimation of nonstationary heterogeneous panels. The Stata Journal, 7(2), 197-208.

Nyboe Tabor, M. (2016). The ADL Model for Stationary Time Series: Long-run Multipliers and the Long-run Solution. Retrieved from: https://www.youtube.com/watch?v=GLpCVrZbW-g

General to Specific Modeling: A Step-by-Step Guide

In my previous blogs [1] [2], I explained that, following the general-to-specific methodology, one can choose between theoretical models to find a model which is compatible with the data. Here is an example which shows the step-by-step procedure of the general-to-simple methodology.

At the end of this blog, you will find the data on three variables, (i) Household Consumption, (ii) GDP and (iii) Inflation, for South Korea. The data set is retrieved from the WDI.

Before starting the modeling, it is very useful to plot the data series. We have three data series; two of them are on the same scale and can be plotted together. The third series, inflation, is in percentage form and, if plotted with the above-mentioned series, would not be visible. The graph of the two series is as follows.

You can see that the gap between income and consumption seems to be diverging over time. This is a natural phenomenon: suppose a person has an income of 1,000 and consumes 70% of it; the difference between consumption and income would be 300. Suppose the income goes up to 10,000 and the MPC stays the same; then the difference between the two variables widens to 3,000. This widening gap is visible in the graph.

However, the widening gap creates a problem for OLS. The residuals at the beginning of the sample would have a smaller variance and those at the end a larger variance, i.e. there will be heteroskedasticity. In the presence of heteroskedasticity, OLS is no longer efficient.

The graphs also show a non-linearity: the two series appear to behave like exponential series. A solution to both problems is to use the log transform. The difference between the log transforms of two series is roughly equal to the percentage difference, and if the MPC remains the same, the gap between the two series is smoothed.

I have taken the log transform and plotted the series again; the graph is as follows.

You can see that the gap between the log transforms of the two series is smoother than in the previous graph. The gap is still widening, but much more smoothly than before; the widening gap in this graph indicates a decline in the MPC over time. In any case, the two graphs indicate that the log transform is the better starting point for building the model.
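
For readers who want to reproduce these steps, a minimal Stata sketch of the data preparation is given below (it assumes the WDI series have been loaded as variables named consumption and gdp, together with a year variable; the names are illustrative):

* declare the data as a yearly time series
tsset year
* log transforms of the two series
gen lcons = ln(consumption)
gen lgdp  = ln(gdp)
* plot the two log series together
tsline lcons lgdp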

I am starting with an ARDL model of the following form:

Ct = a + b1Ct-1 + b2Ct-2 + d0Yt + d1Yt-1 + d2Yt-2 + et          (1)

where Ct indicates (log) consumption and Yt indicates (log) income.

The estimated version of this model, Eq (2), has a very high R-square, but a high R-square in time series is no surprise: it turns out to be high even with unrelated series. The thing to note is sigma, the standard deviation of the residuals, which indicates that the average size of the error is 0.0271. Before we proceed further, we want to make sure that the estimated model does not suffer from failures of the underlying assumptions. We tested the model for normality, autocorrelation and heteroskedasticity, and the results are as follows.

The autocorrelation (AR) test has the null hypothesis of no autocorrelation, and its p-value is above 5%, indicating that the null is not rejected, although it survives by a narrow margin. The normality test, with the null of normality, and the heteroskedasticity test, with the null of homoskedasticity, also indicate that the assumptions are valid.
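
The output reported here comes from a different econometrics package; an analogous sketch in Stata, using the log variables created above, would be:

* Eq (2): ARDL(2,2) in logs
reg lcons L(1/2).lcons lgdp L(1/2).lgdp
* autocorrelation test (Breusch-Godfrey)
estat bgodfrey, lags(1)
* heteroskedasticity test (Breusch-Pagan; null of homoskedasticity)
estat hettest
* normality of the residuals
predict uhat, residuals
sktest uhat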

We also want to ensure that the model is good at prediction, because the ultimate goal of an econometric model is to predict the future. The problem is that, for real-time forecasting, we would have to wait for years to see whether the model is capable of predicting. One solution is to leave some observations out of the estimation sample and then see how well the model predicts them.

The output indicates that the two tests for prediction have p-values much greater than 5%. The null hypothesis for the Forecast Chi-square test is that the error variance is the same in the sample period and the forecast period, and this hypothesis is not rejected. Similarly, the null hypothesis for the Chow test is that the parameters are the same in the sample period and the forecast period, and this hypothesis is also not rejected.

All the diagnostics again show satisfactory results.

Now let's look back at the output of Eq (2). It shows that the second-lag variables Lconsumption_2 and LGDP_2 are insignificant. This means that, keeping Lconsumption_2 in the model, you can exclude LGDP_2 and vice versa. But to exclude both of these variables, you need to test the significance of the two variables simultaneously. Sometimes two variables are individually insignificant but become significant when taken together; usually this happens due to multicollinearity. We therefore test the joint significance of the two second-lag variables, i.e. H0: b2 = d2 = 0.

The results of the test are

F(2,48)   =   2.1631 [0.1260] 
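
In the Stata sketch above, the analogous joint test of the two second lags would be:

* H0: coefficients on the second lags are jointly zero
test L2.lcons L2.lgdp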

The results indicate that the hypothesis is not rejected; therefore we can set the coefficients of the relevant variables to zero, and the model becomes

M2: Ct = a + b1Ct-1 + d0Yt + d1Yt-1 + et

The model M2 was estimated and the results are as follows

The results show that the diagnostic tests for the newly estimated model are all OK, and the forecast performance of the new model is not affected by excluding the two variables. If you compare sigma for Eq (2) and Eq (3), you will see a difference only at the fourth decimal. This means the size of the model is reduced without paying any cost in terms of predictability.

Now all the variables in the model are significant except the intercept, for which the p-value is 0.178. This means the regression does not support an intercept, and we can reduce the model further by excluding it. This time we do not need to test a joint restriction because we want to exclude only one variable. After excluding the intercept, the model becomes

Ct = b1Ct-1 + d0Yt + d1Yt-1 + et

The output indicates that all the diagnostics are OK. All the variables are significant, so no further variable can be excluded.

Now we can impose some linear restrictions instead of exclusion restrictions. For example, if we want to test whether or not we can take the difference of consumption and income, we need to test the following:

R3: b1 = 1 and d0 + d1 = 0

And if we want to test the restriction for the error correction model, we have to test

R4: (b1 - 1) = -(d0 + d1)

Apparently the two restrictions seem valid, because the estimated value of b1 is close to 1 and the estimated values of d0 and d1 sum approximately to 0. We have the choice to test R3 or R4; we test restriction R3 first. The results are as follows.

 

This means the error correction model can be estimated for the data under consideration.

For the error correction model, one needs to estimate a static regression (without lags), Eq (5), and to use the residuals of this equation as the error correction term. The estimates of the static equation are representative of the long-run coefficients of the relationship between the two variables; they show that the long-run elasticity of consumption with respect to income is 0.93.
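
A sketch of this two-step procedure in Stata, continuing with the variables defined earlier (the static regression first, then the error correction regression described next):

* Eq (5): static long-run regression
reg lcons lgdp
* error correction term = residuals of the static regression
predict ect, residuals
* Eq (6): short-run error correction regression, without an intercept
reg D.lcons D.lgdp L.ect, noconstant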

We then have to estimate an error correction regression of the following kind:

Eq (6): DCt = d0DYt + F ECTt-1 + et

where ECTt-1 is the lagged residual from the static regression Eq (5). The intercept does not enter the error correction regression. The estimates are as follows.

This is the parsimonious model for consumption and income. Eq (5) represents the long-run relationship between the two variables, and Eq (6) informs us about the short-run dynamics.

The final model has only two parameters, whereas Eq (1), which we started with, contains 6 parameters. The sigmas for Eq (6) and Eq (2) are roughly the same, which tells us that the large model we started with has the same predictive power as the final model. The diagnostic tests are all OK, which means the final model is statistically adequate, in the sense that the assumptions of the model are not contradicted by the data.

The final model is an error correction model, which contains information about both the short run and the long run. The short-run information is present in Eq (6), whereas the long-run information is implicit in the error correction term and is available in the static Eq (5).

The same methodology can be applied in more complex situations: the researcher starts from a general model and reduces it successively until the most parsimonious model that is statistically adequate is reached.

Data

Variables Details:

Consumption: Households and NPISHs Final consumption expenditure (current LCU)

GDP: GDP Current LCU

Country: Korea, Republic

Time Period: 1960-2019

Source: WDI online (open source data)

Year	Consumption	GDP
1960	212720000000	249860000000
1961	252990000000	301690000000
1962	304100000000	365860000000
1963	417870000000	518540000000
1964	612260000000	739680000000
1965	693780000000	831390000000
1966	837890000000	1066070000000
1967	1039800000000	1313620000000
1968	1277060000000	1692900000000
1969	1597780000000	2212660000000
1970	2062100000000	2796600000000
1971	2592400000000	3438000000000
1972	3104800000000	4267700000000
1973	3798600000000	5527300000000
1974	5443600000000	7905000000000
1975	7285700000000	10543600000000
1976	9315500000000	14472800000000
1977	11361500000000	18608100000000
1978	15016700000000	25154500000000
1979	19439700000000	32402300000000
1980	24916700000000	39725100000000
1981	31181500000000	49669800000000
1982	35278600000000	57286600000000
1983	39796800000000	68080100000000
1984	44444800000000	78591300000000
1985	49305000000000	88129700000000
1986	54837200000000	102986000000000
1987	61775800000000	121698000000000
1988	71362200000000	145995000000000
1989	83899400000000	165802000000000
1990	100738000000000	200556000000000
1991	122045000000000	242481000000000
1992	141345000000000	277541000000000
1993	161105000000000	315181000000000
1994	192771000000000	372493000000000
1995	227070000000000	436989000000000
1996	261377000000000	490851000000000
1997	289425000000000	542002000000000
1998	270298000000000	537215000000000
1999	311177000000000	591453000000000
2000	355141000000000	651634000000000
2001	391692000000000	707021000000000
2002	440207000000000	784741000000000
2003	452737000000000	837365000000000
2004	468701000000000	908439000000000
2005	500911000000000	957448000000000
2006	533278000000000	1005600000000000
2007	571810000000000	1089660000000000
2008	606356000000000	1154220000000000
2009	622809000000000	1205350000000000
2010	667061000000000	1322610000000000
2011	711119000000000	1388940000000000
2012	738312000000000	1440110000000000
2013	758005000000000	1500820000000000
2014	780463000000000	1562930000000000
2015	804812000000000	1658020000000000
2016	834805000000000	1740780000000000
2017	872791000000000	1835700000000000
2018	911576000000000	1898190000000000
2019	931670000000000	1919040000000000

ARDL model and General to simple methodology

Hearing the term ARDL, the first thing that comes to mind is the bounds testing approach introduced by Pesaran and Shin (1999). The Pesaran and Shin approach is an incredible use of the ARDL; however, the term ARDL is much older, and the ARDL model has many other uses as well. In fact, the equation used by Pesaran and Shin is a restricted version of the ARDL, and the unrestricted version of the ARDL was introduced by Sargan (1964) and popularized by David F. Hendry and his coauthors in several papers. The most important paper is the one usually known as DHSY, but we will come to the details of DHSY later. Let me first introduce what the ARDL model is and what the advantages of this model are.


What is the ARDL model?

The ARDL model is an a-theoretic model for modeling the relationship between two time series. Suppose we want to see the effect of a time series variable Xt on another variable Yt. The ARDL model for this purpose will be of the form

Yt = a + b1Yt-1 + b2Yt-2 + … + bjYt-j + d0Xt + d1Xt-1 + … + dkXt-k + et

The same model can be written more compactly as

Yt = a + Σ(i=1..j) biYt-i + Σ(i=0..k) diXt-i + et

In layman's language, this means the dependent variable is regressed on its own lags, on the independent variable and on the lags of the independent variable. The above ARDL model can be termed an ARDL(j, k) model, referring to the numbers of lags j and k in the model.

The model itself is written without any theoretical considerations. However, a large number of theoretical models are embedded inside this model, and one can derive an appropriate theoretical model by testing and imposing restrictions on it.

To have a more concrete idea, let's consider the case of the relationship between consumption and income. To simplify further, let's set j = k = 1, so that the ARDL(1,1) model for the relationship between consumption and income can be written as

Model 1:          Ct=a+b1Ct-1+d0Yt+d1Yt-1+et

Here C denotes consumption and Y denotes income; a, b1, d0 and d1 denote the regression coefficients and et denotes the error term. So far, no theory has been used to develop this model and the regression coefficients do not have any theoretical interpretation. However, this model can be used to select an appropriate theoretical model for consumption.

Suppose we have estimated the above model and found the regression coefficients. We can test any one of the coefficients, or several coefficients jointly, for various kinds of restrictions. Suppose we test the restriction that

R1: H0: b1 = d1 = 0

Suppose testing this restriction on actual data implies that the restriction is valid; this means we can exclude the corresponding variables from the model. Excluding the variables, the model becomes

Model 2:          Ct=a+ d0Yt+et

Model 2 is actually the Keynesian consumption function (also called the absolute income hypothesis), which says that current consumption depends on current income only. The coefficient of income in this equation is the marginal propensity to consume, and Keynes predicted that this coefficient would be between 0 and 1, implying that individuals consume a part of their income and save a part for the future.

Suppose instead that the data did not support the restriction R1, but that the following restriction is valid:

R2: H0: d1=0

This means model 1 would become

Model 3:          Ct=a+b1Ct-1+d0Yt+et

This means that current consumption depends on current income and past consumption. This is called the habit persistence model: past consumption here is a proxy for habit. The model says that what was consumed in the past affects current consumption, which is evident from human behavior.

Suppose now that the data did not support the restriction R1, but that the following restriction is valid:

R3: H0: b1=0

This means model 1 would become

Model 4:          Ct=a+ d0Yt+d1Yt-1+et

This means that current consumption depends on current income and past income. This is called the partial adjustment model. According to the Keynesian consumption function, consumption should depend only on current income, but the partial adjustment model says that it takes some time to adjust to a new level of income. Therefore, consumption depends partly on current income and partly on past income.

In a similar way, one can derive many other models from Model 1 that are representative of different theories. The details of the models that can be derived from Model 1 can be found in Charemza and Deadman's (1997) 'New Directions in Econometric Practice'.

It can also be shown that difference-form models are derivable from Model 1. Consider the following restriction:

R4: b1 = 1 and d1 = -d0

If this restriction is valid, the model 1 will become

Ct=a+Ct-1+d0Yt-d0Yt-1+et

This model can be re-written as

Ct-Ct-1=a+d0(Yt-Yt-1)+et

  This means

Model 5: DCt=a+d0DYt+et

This indicates that the difference form models can also be derived from the model 1 with certain restrictions

Further elaboration shows that the error correction models can also be derived from model 1.

Consider Model 1 again and subtract Ct-1 from both sides; we get

 Ct- Ct-1=a+b1Ct-1 -Ct-1+d0Yt+d1Yt-1+et

Adding and subtracting d0Yt-1 on the right hand side we get

DCt=a+(b1-1)Ct-1+d0Yt+d1Yt-1 +d0Yt-1 -d0Yt-1 +et

DCt=a+(b1-1)Ct-1+d0DYt+d1Yt-1 +d0Yt-1 +et

DCt=a+(b1-1)Ct-1+d0DYt+(d1+d0)Yt-1+et

This equation contains error correction mechanism if

R6: (b1-1) = -(d1+d0)

Assume

(b1-1) = -(d1+d0) = F

The equation will reduce to

DCt=a+F(Ct-1-Yt-1)+ d0DYt +et

This is our well known error correction model and can be derived if R6 is valid.

Therefore, the existence of an error correction mechanism can also be tested from Model 1: the error correction form is valid if R6 is valid.

As we have discussed, a number of theoretical models can be derived from Model 1 by testing certain restrictions. We can start from Model 1 and test different restrictions, imposing the restrictions found to be valid and discarding those found to be invalid. This provides a natural way of selecting among various theoretical models.
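
As an illustration, the whole testing sequence can be sketched in Stata as follows (assuming a yearly dataset with series named cons and inc; the variable names are illustrative):

* Model 1: ARDL(1,1)
tsset year
reg cons L.cons inc L.inc
* R1: b1 = d1 = 0  (Keynesian / absolute income model)
test L.cons L.inc
* R2: d1 = 0  (habit persistence model)
test L.inc
* R3: b1 = 0  (partial adjustment model)
test L.cons
* R4: b1 = 1 and d1 = -d0  (difference-form model)
test (L.cons = 1) (inc + L.inc = 0)
* R6: (b1 - 1) = -(d1 + d0)  (error correction model)
test L.cons + inc + L.inc = 1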

When we say a theoretical model, we mean a model that makes economic sense; for example, Models 2 to 5 and the error correction model all make economic sense. So how do we decide between these models? This problem can be solved if we start with an ARDL model and choose to impose the restrictions that are permitted by the data.

The famous DHSY paper recommends a methodology like this: DHSY recommend that we should start with a large model which encompasses the various theoretical models; the model can then be simplified by testing certain restrictions.

In another blog I have argued that if there are different theories for a certain variable, the research must be comparative. This short blog gives a brief outline of how we can do this. In practice, one needs to start with larger ARDL structures, and the number of models that can be derived from the parent model will also be larger.

Research in presence of multiple theories

Consider a hypothetical situation: a researcher is given a research question, to compare the mathematical ability of male and female students of grade 5. The researcher collected data on 300 female students and 300 male students of grade 5 and administered a test of mathematical questions. The average score for female students was 80% and the average score for male students was 50%; the difference was statistically significant, and therefore the researcher concluded that female students have better mathematical aptitude.

The findings seem strong and impressive, but let me add the information that the male students were chosen from a far-off village with untrained educational staff and a lack of educational facilities, while the female students were chosen from an elite school in a metropolitan city, where the best teachers of the city serve. What should the conclusion be now? It can be argued that the difference does not actually come from gender; it comes from the school type.

The researcher carrying out the project says: 'Look, my research assignment was only to investigate the difference due to gender; the school type is not the question I am interested in, therefore I have nothing to do with the school type.'

Do you think the researcher's argument is valid and the findings should be considered reliable? The answer is obvious: the findings are not reliable, and the school type creates a serious bias. The researcher must compare students from the same school type. This implies that you have to take care of variables that have no mention in your research question if they are determinants of your dependent variable.

Now let's apply the same logic to econometric modeling. Suppose we have the task of analyzing the impact of financial development on economic growth. We run a regression of GDP growth on a proxy of financial development, get a regression output and present the output as the impact of financial development on economic growth. Is this reliable research?

This research is deficient in just the same way as our example of gender and mathematical ability. The research is not reliable if ceteris paribus does not hold: the other variables which may affect the outcome variable should remain the same.

But in real life it is often very difficult to keep all other variables the same; the economy continuously evolves and so do the economic variables. The solution is to take the other variables into account while running the regression: the other variables that determine your dependent variable should be included as control variables. Suppose you want to check the effect of X1 on Y using the model Y = a + bX1 + e, and other research studies indicate that another model exists for Y, namely Y = c + dX2 + e. Then we cannot run the first model while ignoring the second. If we run only model 1 and ignore the other models, the results will be biased in the same way as in our example of mathematical ability. We have to use the variables of model 2 as control variables, even if we are not interested in the coefficients of model 2. Therefore, the estimated model would be of the form Y = a + bX1 + cX2 + e.

Taking the control variables is possible when there are only a few models. The seminal study by Davidson, Hendry, Srba and Yeo, titled 'Econometric modelling of the aggregate time-series relationship between …' (often referred to as DHSY), summarizes the way to build a model in such a situation. But it often happens that there exists a very large number of models for one variable. For example, there is a very large number of models for growth: a book titled 'Growth Econometrics' by Durlauf and coauthors lists hundreds of models for growth used by researchers in their studies. Life becomes very complicated when you have so many models. Estimating a model with all the determinants of growth would be literally impossible for most countries using the classical methodology, because growth data are usually available at annual or quarterly frequency and the number of predictors taken from all models collectively would exceed the number of observations. Time series data also have a dynamic structure, and taking lags of variables makes things more complicated. Therefore, classical econometric techniques often fail to work for such high-dimensional data.

Some experts have developed sophisticated techniques for modeling in scenarios where the number of predictors becomes very large. These techniques include Extreme Bounds Analysis, Weighted-Average Least Squares, Autometrics, and so on. High-dimensional econometric techniques are a very interesting field of investigation in their own right. However, DHSY is extremely useful for situations where there is more than one model for a variable based on different theories. The DHSY methodology is also called the LSE methodology, the General-to-Specific methodology, or simply the G2S methodology.

Panel Data Nonlinear Simultaneous Equation Models with Two-Stage Least Squares using Stata

In this article, we will follow Wooldridge's (2002) procedure to estimate a set of equations with nonlinear functional forms for panel data using the two-stage least squares estimator. It has to be mentioned that this topic is quite uncommon and not used much in applied econometrics; this is because instrumenting the nonlinear terms can be somewhat complicated.

Assume a two-equation system of the form (in Wooldridge's notation):

y1 = γ12 y2 + γ13 y2^2 + z1 δ1 + u1
y2 = γ21 y1 + z2 δ2 + u2

where the y's represent the endogenous variables, the z's represent the exogenous variables taken as instruments, and the u's are the residuals of each equation. Notice that y2 appears in quadratic form in the first equation but enters the second equation only in linear terms.

Wooldridge calls this a model nonlinear in endogenous variables; yet the model is still linear in the parameters γ, making this a particular problem where we need to somehow instrument the quadratic term of y2.

Finding instruments for the quadratic term is an even bigger challenge than it already is for the linear terms in a simple instrumental variables regression. He suggests the following:

“A general approach is to always use some squares and cross products of the exogenous variables appearing somewhere in the system. If something like exper2 appears in the system, additional terms such as exper3 and exper4 would be added to the instrument list.” (Wooldridge, 2002, p. 235).

Therefore, it is worth trying nonlinear terms of the exogenous variables in Z, such as Z^2 or even Z^3, and using these instruments to deal with the endogeneity of the quadratic term of y2. Once we define our set of instruments, any nonlinear equation of this kind can be estimated with two-stage least squares. And as always, we should check the overidentifying restrictions to make sure we avoid inconsistent estimates.

The process with an example.

Let's work with the example of a nonlinear labor supply function, which is a system of the form:

hours = γ1 log(wage) + γ2 [log(wage)]^2 + δ0 + δ1 educ + δ2 age + δ3 kidslt6 + δ4 kidsge6 + δ5 nwifeinc + u1
log(wage) = β0 + β1 educ + β2 exper + β3 exper^2 + u2

A brief description of the model: in the first equation, the hours (worked) are a nonlinear function of the wage, the level of education (educ), age (age), the children's situation by age, i.e. whether they are younger than 6 years old or between 6 and 18 (kidslt6 and kidsge6), and the other household (non-wife) income (nwifeinc).

In the second equation, the wage is a function of education (educ) and a nonlinear function of the exogenous variable experience (exper and exper^2).

We work under the natural assumption that E(u|z) = 0, so the instruments are exogenous. Z in this case contains all the other variables, which are not endogenous (hours and wage are the endogenous variables).

We will instrument the quadratic term of the logarithm of the wage in the first equation, and for this instrumenting process we will add three new quadratic terms, namely educ^2, age^2 and nwifeinc^2, and include them in the first-stage regression.

With Stata we first load the dataset which can be found here.

https://drive.google.com/file/d/1m4bCzsWgU9sTi7jxe1lfMqM2T4-A3BGW/view?usp=sharing

Load up the data (double click the file with Stata open or use some path command to get it ready)

use MROZ.dta

Generate the squared term for the logarithm of the wage with:

gen lwage_sq=lwage *lwage

Then we are ready to use the following ivregress command, which we will explain in detail:

ivregress 2sls hours educ age kidslt6 kidsge6 nwifeinc (lwage lwage_sq  = educ c.educ#c.educ exper expersq age c.age#c.age kidsge6 kidslt6 nwifeinc c.nwifeinc#c.nwifeinc), first

This has the following interpretation, according to Stata's syntax. First, make sure you specify the first equation with its associated exogenous variables; we do that with the part:

ivregress 2sls hours educ age kidslt6 kidsge6 nwifeinc

Now let's tell Stata that we have two other endogenous regressors, which are the log wage and the squared term of the log wage. We open the bracket and put:

(lwage lwage_sq  =

This tells Stata that lwage and lwage_sq are endogenous and part of the first equation for hours; after the equal sign we specify ALL the exogenous variables, including the instruments for the endogenous terms. This leads to the second part:

(lwage lwage_sq  = educ c.educ#c.educ exper expersq age c.age#c.age kidsge6 kidslt6 nwifeinc c.nwifeinc#c.nwifeinc)

Notice that this second part has a c.var#c.var structure; this is Stata's operator for the product of two continuous variables (it lets us introduce the quadratic terms without generating new variables with a separate command, as we did with the wage).

So notice we have c.educ#c.educ, which is the square of the educ variable, c.age#c.age, which is the square of age, and we also square the non-wife income with c.nwifeinc#c.nwifeinc. These are the instruments for the quadratic term.

The fact that we have two variables on the left (lwage and lwage_sq) indicates that the set of instruments will be used first in an equation for lwage and second in an equation for lwage_sq, with exactly the same instruments.

We include the option , first to see the first-stage regressions.

ivregress 2sls hours educ age kidslt6 kidsge6 nwifeinc (lwage lwage_sq  = educ c.educ#c.educ exper expersq age c.age#c.age kidsge6 kidslt6 nwifeinc c.nwifeinc#c.nwifeinc), first

The output of the above model for the first stage equations is:

And the output for the two stage equation is:

This yields coefficients identical to those in Wooldridge's book (2002, p. 236), with some slight differences in the standard errors (these slight differences do not change the interpretation of the statistical significance of the estimators).

In this way, we instrumented both endogenous regressors, lwage and lwage_sq, which enter the model through a nonlinear relationship.

As we can see, the quadratic term is not statistically significant to explain the hours worked.

At last, we need to make sure that the overidentifying restrictions are valid, so after the regression we use:

estat overid

And within this result, we cannot reject the null that overidentifying restrictions are valid.

Bibliography

Wooldridge, J. M. (2002). Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: MIT Press.