The black box and econometrics.

Note: Picture was taken from Aravindan (2019)

Some of the most popular models used in data analysis rely on the so-called "black box" approach. In its simplest interpretation, the focus is on the inputs a model receives and the outputs it delivers in terms of predictive power, rather than on what happens inside the model.

If econometrics is conceived as estimating population parameters and providing causal inference about them, the black box approach of data analysis is somewhat the opposite. We care only about responses and predicted responses in order to discriminate across models given a certain amount of data (captured in an observable sample). We contrast the predictions with the actual values, derive measures of the error, and select the model that best explains the response variable, considering, of course, the tradeoff between the variance and the bias induced.

In an article in the Journal of Economic Perspectives, Mullainathan & Spiess (2017) give a short description of supervised and unsupervised machine learning approaches, whose out-of-sample performance is potentially greater than that of least squares. See the following table, taken from these authors:

Source: Mullainathan & Spiess (2017, 90) Note: The dependent variable is the log-dollar house value of owner-occupied units in the 2011 American Housing Survey from 150 covariates including unit characteristics and quality measures. All algorithms are fitted on the same, randomly drawn training sample of 10,000 units and evaluated on the 41,808 remaining held-out units. The numbers in brackets in the hold-out sample column are 95 percent bootstrap confidence intervals for hold-out prediction performance, and represent measurement variation for a fixed prediction function. For this illustration, we do not use sampling weights. Details are provided in the online Appendix at http://e-jep.org.

In this exercise, a training sample and a test sample were used to calculate the "prediction performance" given by the R². In econometrics we would call this the goodness of fit of the model, or the percentage of variation explained by the model. It is no secret that as the goodness of fit increases, prediction power increases as well (keeping in mind that we are never actually going to get an R² of 1 unless we have some overfitting issue).

When you compare the results of Table 1 in the hold-out sample column, you find that some of the other approaches outperform the least-squares regression in terms of prediction power. This can be seen, for instance, in the row corresponding to the LASSO estimates, which show an increased prediction performance compared to least squares; the LASSO model is therefore capturing the behavior of the response variable somewhat better (at least for this sample).

One should ask at this point what the objective of the analysis is. If we are going for statistical inference and the estimation of population parameters, we should stick to non-black-box approaches such as traditional LS, GMM, or 2SLS, to mention a few. But if we are more interested in prediction power and performance, black box approaches will surely come in handy and may sometimes outperform the econometric procedures used to estimate population parameters. The way I see it, the black box, even when its inner details are unknown to us, has the ability to adapt itself to the data (considering, of course, the variety of machine learning methods and algorithms, not just penalized regression).

As the authors express in their article, it may be tempting to draw conclusions from these methods the way we usually do in econometrics, but first we need to consider some limitations of black box approaches: 1) sometimes mere correlation steps in, 2) producing standard errors becomes harder, 3) some of the methods are inconsistent if we change the initial conditions, and 4) there is a risk of choosing incorrect models, which may induce omitted variable bias.

However, even with the above problems, we can draw some useful connections between black box approaches and econometric methods. Machine learning may have an advantage over traditional econometric estimation in the context of large samples, where the researcher needs to define a set of influential covariates in order to build or test a theory. It can also be a useful tool for policymakers when combined with econometric analysis, providing the economist with "a tool to infer actual from reported" values and to proceed with comparisons given the researcher's samples.

We can also use prediction to improve the estimation of population parameters. As the authors point out, consider the case of two-stage least squares: in the first stage we are required to predict the endogenous regressor using an instrument, and the black box approach may well produce better predictions to include in the second-stage regression. It should be noted, however, that the selected instruments should be at least reasonably exogenous, because if we leave the black box alone it will just pick up correlations and possibly bring up reverse causality problems.

Supervised or unsupervised machine learning methods may provide a better understanding from a different angle, by which I mean the "black box" approach. Even when it is not exactly part of the causal analysis, it may be useful for selecting possible covariates of a phenomenon; the reasoning behind the analysis and the selected outcome should always be considered and criticized in order to provide the best inference. From this perspective, even when we do not know exactly what happens inside the box, the outcome of the black box itself gives us useful information.

This is a topic under constant review and enhancement for real-world applications. I believe that the bridge between black box approaches from machine learning and econometric theory will grow stronger over time, considering, of course, the information needs of a growing society.

Bibliography

Aravindan, G. (2019). Challenges of AI-based adoptions: Simplified. Sogetilabs. Retrieved from: https://labs.sogeti.com/challenges-of-ai-based-adoptions/

Mullainathan, S. & Spiess, J. (2017). Machine Learning: An Applied Econometric Approach. Journal of Economic Perspectives, 31(2), 87–106.


Rudin, C. & Radin, J. (2019). Why Are We Using Black Box Models in AI When We Don't Need To? A Lesson From An Explainable AI Competition. Harvard Data Science Review. Retrieved from: https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/6

Panel Data Nonlinear Simultaneous Equation Models with Two-Stage Least Squares using Stata

In this article, we will follow Wooldridge's (2002) procedure to estimate a set of equations with nonlinear functional forms for panel data using the two-stage least squares estimator. It has to be mentioned that this topic is quite uncommon and not used much in applied econometrics, since instrumenting the nonlinear terms can be somewhat complicated.

Assume a two-equation system of the form:
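
In the notation of Wooldridge (2002), with z1 and z2 as vectors of exogenous variables and the γ's and δ's as parameter labels assumed here, the system looks roughly like this:

$$y_1 = \mathbf{z}_1\delta_1 + \gamma_1 y_2 + \gamma_2 y_2^2 + u_1$$
$$y_2 = \mathbf{z}_2\delta_2 + \gamma_3 y_1 + u_2$$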

Here the y's represent the endogenous variables, z1 and z2 are vectors of exogenous variables (used as instruments), and u1 and u2 are the errors of each equation. Notice that y2 enters the first equation in quadratic form, while it appears linearly in the second equation.

Wooldridge calls this model nonlinear in the endogenous variables, yet the model is still linear in the parameters, which makes this a particular problem where we need to somehow instrument the quadratic term of y2.

Finding instruments for the quadratic term is even more challenging than it already is for the linear terms in a simple instrumental variable regression. Wooldridge suggests the following:

“A general approach is to always use some squares and cross products of the exogenous variables appearing somewhere in the system. If something like exper2 appears in the system, additional terms such as exper3 and exper4 would be added to the instrument list.” (Wooldridge, 2002, p. 235).

Therefore, it is worth trying nonlinear terms of the exogenous variables in Z, such as Z² or even Z³, and using them as instruments to deal with the endogeneity of the quadratic term of y2. Once we define our set of instruments, the nonlinear equation can be estimated with two-stage least squares. As always, we should check the overidentifying restrictions to make sure we avoid inconsistent estimates.

The process with an example.

Let's work with the example of a nonlinear labor supply function, which is a system of the form:
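
Written out with assumed parameter labels (γ's and δ's), and following Wooldridge's example, the system is approximately:

$$\text{hours} = \gamma_1 \log(\text{wage}) + \gamma_2 [\log(\text{wage})]^2 + \delta_{10} + \delta_{11}\text{educ} + \delta_{12}\text{age} + \delta_{13}\text{kidslt6} + \delta_{14}\text{kidsge6} + \delta_{15}\text{nwifeinc} + u_1$$
$$\log(\text{wage}) = \delta_{20} + \delta_{21}\text{educ} + \delta_{22}\text{exper} + \delta_{23}\text{exper}^2 + u_2$$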

A brief description of the model: in the first equation, the hours worked are a nonlinear function of the wage, the level of education (educ), age (age), the number of kids by age group, younger than 6 years old or between 6 and 18 (kidslt6 and kidsge6), and the non-wife income of the household (nwifeinc).

In the second equation, the wage is a function of education (educ) and a nonlinear function of the exogenous variable experience (exper and exper2).

We work under the natural assumption that E(u|z) = 0, so the instruments are exogenous. Z in this case contains all the other variables that are not endogenous (hours and wage are the endogenous variables).

We will instrument the quadratic term of the logarithm of the wage in the first equation, and for this we will add three new quadratic terms of the exogenous variables: educ², age², and nwifeinc².

And we include those in the first-stage regressions.

With Stata, we first load the dataset, which can be found here:

https://drive.google.com/file/d/1m4bCzsWgU9sTi7jxe1lfMqM2T4-A3BGW/view?usp=sharing

Load up the data (double click the file with Stata open or use some path command to get it ready)

use MROZ.dta

Generate the squared term for the logarithm of the wage with:

gen lwage_sq = lwage*lwage

Then we use the following ivregress command, which we will explain in detail.

ivregress 2sls hours educ age kidslt6 kidsge6 nwifeinc (lwage lwage_sq  = educ c.educ#c.educ exper expersq age c.age#c.age kidsge6 kidslt6 nwifeinc c.nwifeinc#c.nwifeinc), first

This has the following interpretation, according to Stata's syntax. First, specify the dependent variable of the first equation together with its included exogenous variables; we do that with the part:

ivregress 2sls hours educ age kidslt6 kidsge6 nwifeinc

Now let's tell Stata that we have two other endogenous regressors, the log wage and its squared term. We open the parenthesis and put:

(lwage lwage_sq  =

This tells Stata that lwage and lwage_sq are endogenous variables in the hours equation. After the equal sign, we specify ALL the exogenous variables, including the instruments for the endogenous terms, which leads to the second part:

(lwage lwage_sq  = educ c.educ#c.educ exper expersq age c.age#c.age kidsge6 kidslt6 nwifeinc c.nwifeinc#c.nwifeinc)

Notice that this second part has a c.var#c.var structure; this is Stata's factor-variable operator for interacting continuous variables, so we introduce the quadratic terms without generating new variables with a separate command, as we did for the wage.

So we have c.educ#c.educ, which is the square of educ, c.age#c.age, which is the square of age, and c.nwifeinc#c.nwifeinc, which squares the non-wife income. These are the additional instruments for the quadratic term.

The fact that we have two variables on the left (lwage and lwage_sq) means that the same set of instruments is used in two first-stage equations: one for lwage and one for lwage_sq.
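
For illustration, the first-stage regressions that ivregress reports with the first option are roughly equivalent to running by hand:

* First stage for lwage
regress lwage educ c.educ#c.educ exper expersq age c.age#c.age kidsge6 kidslt6 nwifeinc c.nwifeinc#c.nwifeinc
* First stage for lwage_sq
regress lwage_sq educ c.educ#c.educ exper expersq age c.age#c.age kidsge6 kidslt6 nwifeinc c.nwifeinc#c.nwifeinc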

We include the option , first to display the first-stage regressions.

ivregress 2sls hours educ age kidslt6 kidsge6 nwifeinc (lwage lwage_sq  = educ c.educ#c.educ exper expersq age c.age#c.age kidsge6 kidslt6 nwifeinc c.nwifeinc#c.nwifeinc), first

The output of the above model for the first stage equations is:

And the output of the two-stage least squares estimation is:

This yields coefficients identical to those in Wooldridge's book (2002, p. 236), with slight differences in the standard errors (which do not change the interpretation of the statistical significance of the estimates).

In this way, we instrumented both endogenous regressors, lwage and lwage_sq, which enter the model in a nonlinear relationship.

As we can see, the quadratic term is not statistically significant in explaining the hours worked.

Finally, we need to check that the overidentifying restrictions are valid, so after the regression we use:

estat overid

With this result, we cannot reject the null hypothesis that the overidentifying restrictions are valid.

Bibliography

Wooldridge, J. M. (2002). Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: MIT Press.

Handling structural breaks with logarithms

As we saw in other econometrics posts of the M&S Research Hub blog, the use of logarithms is a common practice in econometrics. We have discussed not only the problems that can come from overusing them, but also their advantage in reducing the heteroscedasticity -HT- (Nau, 2019) present in the series of a dataset, and other improvements that the monotonic transformation performs on the data.

In this article, we're going to explore the usefulness of the logarithm transformation for reducing the presence of structural breaks in a time series context. First, we'll review what a structural break is and what the implications of regressing data with structural breaks are, and finally we'll perform a short empirical analysis of the Gross Domestic Product -GDP- of Colombia in Stata.

The structural break

We can define a structural break as a situation in which a sudden, unexpected change occurs in a time series variable, or in the relationship between two time series (Casini & Perron, 2018). Along these lines, a structural change might look like this:

Source: Shrestha & Bhatta (2018)

The basic idea is to identify abrupt changes in time series variables, but we're not restricting such identification to the time domain: a break can also be detected when we scatter an X variable against a Y variable, without the regressor necessarily being time. We can distinguish different types of breaks in this context; according to Hansen (2012), we can encounter breaks in 1) the mean, 2) the variance, and 3) relationships, and we can also face single breaks, multiple breaks, and continuous breaks.

Basic problems of structural breaks

Without going into complex mathematical definitions of structural breaks, we can point out some of the problems that arise when our data presents them. The first problem, identified by Andrews (1993), concerns parameter stability under structural change: in simple terms, in the presence of a break, the least squares estimates tend to vary over time, which is of course not desirable. The ideal situation is that the estimators are time invariant, consistent with the Best Linear Unbiased Estimator -BLUE-.

The second problem of structural breaks (or changes) not taken into account in the regression analysis is that the estimator becomes inefficient, since the variance of the estimated parameters increases significantly; as a result, our exact inference and our forecasting analysis will not match reality.

A third problem may appear if the structural break affects the identification of unit roots. This is not a widely explored topic, but Tai-Leung Chong (2001) makes excellent points about it. Any time series analysis should always consider the existence of unit roots in the variables, since this determines which further tools are appropriate for handling the phenomenon, including cointegration analysis and forecasting techniques.

An empirical approach

Suppose we want to model the trend of the GDP of the Colombian economy. Naturally, this kind of analysis takes GDP as the dependent variable and time as the independent variable.

In this case, GDP, expressed as Y, is a function f(t) of time t. We can assume for a start that f(t) follows a linear approximation:
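
In symbols, adding an error term u_t (notation assumed here), expression (1) is presumably the linear trend model:

$$Y_t = a + \alpha t + u_t \qquad (1)$$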

With expression (1), gross domestic product has an autonomous value independent of time, given by a, and a slope coefficient α with the usual interpretation: for an increase of one time unit, GDP increases by α.

The linear approximation sounds ideal for modeling GDP over time, assuming that t is measured in years, meaning that we have annual data (so we're excluding seasonal phenomena); however, we should always inspect the data graphically first.

In Stata, once we have tsset the dataset, we can inspect the graphical behavior with the command "scatter Y t".

In sum, the linear approximation might not be a good idea given the behavior of the real GDP of the Colombian economy for the period of analysis (1950-2014). There also appear to be structural changes, judging by the trend: the slope of the curve changes drastically around the year 2000.

If we regress expression (1), we get the following results.
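
Assuming the series is stored as Y and the time variable as t, as in the scatter command above, this is simply:

regress Y t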

The linear fit is pretty good: time (in years) explains around 93% of the variation in the GDP of the Colombian economy, and the parameter is significant at the 5% level.

Now I want you to focus on two basic things: the variance of the model, which is 1.7446e+09, and the confidence interval, which places the estimator between 7613.081 and 8743.697. Without other values to compare these two things against, we should just keep them in mind.

Now, we can proceed with a test to identify structural breaks in the regression we have just performed. So, we just type “estat sbsingle” in order to test for a structural break with an unknown date.

The interesting thing here is that the structural break test identifies one important change over the full sample period of 1950 to 2014. The full-sample test is the "supremum Wald test," which is said to have less power than the average or exponential tests; however, it is useful for simply identifying structural breaks, and its result also tends to match the graphical analysis. According to the test, we have a structural break in the year 2002, so it is useful to graph the behavior before and after this year in order to assess the possible changes. We can do this with the command "scatter Y t" and some if conditions, as follows.

 twoway (scatter Y t if t<=2002)(lfit  Y t if t<=2002)(scatter Y t if t>=2002)(lfit  Y t if t>=2002) 

We can observe that the trend actually changes when we fit separate lines for the partial periods given by t<2002 and t>=2002; this slope change is a sign of the structural break detected by the program. You could address this issue by including a dummy variable equal to 0 before 2002 and equal to 1 from 2002 onward. However, let's now look at the logarithm transformation of GDP. The mathematical model would be:
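
Given that α is read below as an average growth rate per year, the level model is presumably the exponential form:

$$Y_t = A e^{\alpha t}$$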

Applying natural logarithms, we get:
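
Taking natural logs of that presumed exponential form gives:

$$\ln Y_t = \ln A + \alpha t$$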

Now α becomes the average growth rate per year of the GDP of the Colombian economy. To implement this transformation, use the command "gen ln_Y=ln(Y)"; the graphical behavior then looks like this:

 gen ln_Y=ln(Y)
 scatter ln_Y t

The power of the monotonic transformation is now visible: the series is close to a straight line, which can be fitted with a linear regression. In fact, let's regress the expression in Stata.
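
Using the variable generated above, the command is:

regress ln_Y t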

Remember that I told you to keep in mind the variance and the confidence interval of the first regression? Well, now we can compare, since we have two models: the variance of the last regression is 0.0067 and the interval is indeed tight around the coefficient (about 0.002 of difference between the upper and lower bounds of the parameter). So this model fits even better than the first.

If we perform the "estat sbsingle" test again, it is quite likely that another structural break will appear. But we should not worry too much if this happens, because we also rely on the graphical analysis to proceed with the inference; in other words, we should be parsimonious with our models: explain the most with the least.

The main conclusion of this article is that the logarithm, through its property as a monotonic transformation, is a quick, powerful tool that can help us reduce (or even remove) the influence of structural breaks in our regression analysis. Structural changes are also, for example, signs of exogenous transformations of the economy. Applying this idea to the Colombian economy, we see its growth speed changing from 2002 until recent years; we need to consider that in 2002 Colombia had a change of government focused on public policies aimed at eliminating terrorist groups, which probably affected investment in the economy and might explain the growth since then.

Bibliography

Andrews, D. W. (1993). Tests for Parameter Instability and Structural Change With Unknown Change Point. Econometrica, Vol. 61, No. 4 (Jul., 1993), 821-856.

Casini, A., & Perron, P. (2018). Structural Breaks in Time Series. Retrieved from Economics Department, Boston University: https://arxiv.org/pdf/1805.03807.pdf

Hansen, B. E. (2012). Advanced Time Series and Forecasting. Lecture 5: Structural Breaks. University of Wisconsin-Madison. Retrieved from: https://www.ssc.wisc.edu/~bhansen/crete/crete5.pdf

Nau, R. (2019). The logarithm transformation. Retrieved from Data concepts The logarithm transformation: https://people.duke.edu/~rnau/411log.htm

Shrestha, M., & Bhatta, G. (2018). Selecting appropriate methodological framework for time series data analysis. The Journal of Finance and Data Science. Retrieved from: https://www.sciencedirect.com/science/article/pii/S2405918817300405

Tai-Leung Chong, T. (2001). Structural Change in AR(1) Models. Econometric Theory, 17, 87–155.

Taking Logarithms of Growth Rates and Log-based Data.

A usual practice when handling economic data is the use of logarithms; the main idea behind using them is to reduce the heteroscedasticity -HT- of the data (Nau, 2019), since reducing HT implies reducing the variance of the data. Quite often, authors apply some kind of double logarithm transformation, meaning they take logarithms of data that are already in logarithms or in growth rates (computed via differences of logarithms).

The objective of this article is to present the implications of these procedures, first by analyzing what the logarithm does to a variable, and then by observing what inferences can be drawn when logarithms are applied to growth rates.

There is a series of properties of logarithms that should be considered first; we do not review them here, but the reader can check them in the citation (Monterey Institute, n.d.). Now let's consider a bivariate equation:
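
From the interpretation of B given below, equation (1) is the simple linear model, with a as the intercept and u as the error term:

$$y = a + Bx + u \qquad (1)$$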

The coefficient B represents the marginal effect of a one-unit change in x on y. So, interpreting the ordinary least squares estimate gives the following reading: when x increases by one unit, y increases by B. It is a linear equation where the marginal effect is given by:
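
that is, for the linear model in (1), simply the constant:

$$\frac{\partial y}{\partial x} = B$$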

When we introduce logarithms into equation (1) by modifying the functional form, the relationship becomes nonlinear in the original variables. But let's first review what logarithms might do to the x variable. Suppose x is a time series variable that follows an upward trend and is highly heteroscedastic, as the next graph shows.

We can see graphically that the variable x has a positive trend and also deviates around its mean over time. A way to reduce the HT present in the series is to apply a logarithm transformation. Using natural logarithms, the behavior is shown in the next graph.

The units have changed drastically: the logarithm of x now lies between roughly 2 and 5, whereas before the variable ranged from 10 to 120 (the range has been compressed). The natural logarithm reduces HT because it is a monotonic transformation (Sikstar, n.d.). We can use this kind of transformation in econometrics in a regression equation like the following:
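
Given the B/100 reading that follows, the equation is presumably the lin-log model:

$$y = a + B\ln(x) + u$$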

The coefficient B is no longer the marginal effect of a one-unit change; to interpret it we need to divide it by 100 (Rodríguez Revilla, 2014). The result should be read as: an increase of 1% in x produces a change of B/100 units in y.

If we use a double-log model, the equation can be written as:
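
which, keeping the same notation, is:

$$\ln(y) = a + B\ln(x) + u$$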

In this case, the elasticity is simply B, which is interpreted in percentage terms. For example, if B = 0.8, an increase of 1% in x results in an increase of 0.8% in y.

On the other hand, if we use a log-linear model, the equation can be written as:
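
that is, again in the same notation:

$$\ln(y) = a + Bx + u$$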

In this case, B must be multiplied by 100 and can be interpreted as the average percentage growth of y per unit increase in x. If x = t, measured in years, then B (times 100) is the average growth per year of y.

Logarithms are also used to calculate growth rates, since we can say that:
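
that is, for a variable x_t (this approximation is the expression referred to below as equation (5)):

$$\frac{x_t - x_{t-1}}{x_{t-1}} \approx \ln(x_t) - \ln(x_{t-1}) \qquad (5)$$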

The meaning of equation (5) is that the growth rate of a variable (the left-hand side of the equation) is approximately equal to the difference of logarithms. Returning to our x variable from the last graphic, we can see that the growth rates from both calculations are similar.

The influence of the monotonic transformation is noticeable: the growth rate formula has higher (positive) spikes than the difference of logarithms, and conversely, the lower spikes come from the difference of logarithms. Yet both are approximate growth rates that indicate the change over time of our x variable.

For example, let's look at the 10th year in the graphic above. The difference of logarithms indicates a growth rate of -0.38%, while the growth rate formula indicates -0.41% between year 9 and year 10. Either way, there is roughly 0.4% of negative growth between these years.

When we take logarithms of those kinds of transformations, we get, mathematically speaking, something like this:
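
presumably an expression along the lines of:

$$\ln\left(\frac{x_t - x_{t-1}}{x_{t-1}}\right) \approx \ln\big(\ln(x_t) - \ln(x_{t-1})\big)$$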

Some authors do this freely to normalize the data (in other words, to reduce the HT), but would the interpretation remain the same? What are the consequences of doing this? Is it good or bad?

The usual answer: it depends. Consider, for example, years 9 and 10 of our original x variable again: the change is negative, so the growth rate is negative, and we cannot take the logarithm of a negative value.

With this exercise, we can see that the first consequence of overusing logarithms (on differenced logarithms and growth rates in general) is that wherever there are negative values, the calculation becomes undefined and missing data appear. If we graph the result, we get something like this:

At this point, the graphic plots the undefined values (the result of taking logarithms of negative values) as 0 in the case of Excel; other software might not even place a point. We had negative values of a growth rate (as expected), but what we have now is a meaningless set of data, and this is bad because we are deleting valuable information from those time points.

Let's set aside for now the x variable we've been working with, and let's assume instead that we have a square function, y = z².

The logarithm of this variable, since it is a power function, would be ln(y) = 2·ln(z).

And if we apply another log transformation, we will have ln(ln(y)) = ln(2·ln(z)).

However, consider that if z = 0, the first log is undefined, and thus we cannot calculate the second. We can see this in some example calculations, as the following table shows.

The logarithm of 0 is undefined, so its double logarithm is undefined too. When z = 1, the natural logarithm is 0, and the second transformation is again undefined. Here we can detect another problem when some authors, in order to normalize the data, apply logarithms indiscriminately: a potential missing data problem arises from the monotonic transformation whenever values of the data are zero.

Finally, if we have data in the range between 0 and 1, the logarithm transformation produces negative values. Therefore, a second logarithm transformation is pointless, since all the data in this range become undefined.

The conclusion of this article is that when we take logarithms of growth rates, one thing can surely happen: if there are negative values in the original growth rate and we apply logarithms to them, those values become undefined, missing data occur, and the interpretation becomes harder. Likewise, if we apply a double log transformation, the zeros and negative values in the data become undefined, and the missing data problem appears again. Econometricians should take this into account, since the question often arises in research; in order to draw the right inferences, analyzing the original data before applying logarithms should be a step prior to any econometric procedure.

Bibliography

Monterey Institute. (n.d.). Properties of Logarithmic Functions. Retrieved from: http://www.montereyinstitute.org/courses/DevelopmentalMath/TEXTGROUP-1-19_RESOURCE/U18_L2_T2_text_final.html

Nau, R. (2019). The logarithm transformation. Retrieved from: https://people.duke.edu/~rnau/411log.htm

Rodríguez Revilla, R. (2014). Econometria I y II. Bogotá, Colombia: Universidad Los Libertadores.

Sikstar, J. (n.d.). Monotonically Increasing and Decreasing Functions: an Algebraic Approach. Retrieved from: https://opencurriculum.org/5512/monotonically-increasing-and-decreasing-functions-an-algebraic-approach/

The impact of functional form over the normality assumption in the residuals

A commonly discussed way to satisfy the normality assumption in regression models is the correct specification of the Data Generating Process (Rodríguez Revilla, 2014). The objective here is to demonstrate how the functional form can influence the distribution of the residuals in a regression model estimated with the ordinary least squares technique.

Let's start with a Monte Carlo exercise based on the theory of Mincer (1974), in which we have a Data Generating Process -DGP- for income in a cross-sectional study of the population of a city.
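
From the description below, the DGP takes a Mincer-type form roughly like the following (the β's are parameter labels assumed here, not values reported in the post):

$$y = \beta_0 + \beta_1\text{schooling} + \beta_2\text{exper} - \beta_3\text{exper}^2 + u \qquad (1)$$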

with u a random disturbance term.

The DGP expressed in (1) is the correct specification of income for the population of our city, where y is income in monetary units, schooling is the individual's years of schooling, and exper is the number of years of experience in the current job. Finally, we have the square of experience, whose negative sign reflects the decreasing returns of experience on income.
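
The post does not show the simulation code, but a minimal Stata sketch of how a DGP of this kind could be generated is the following; the coefficient values, variable ranges, and error scale are illustrative assumptions, not the ones used for the results shown below.

clear
set seed 1234
set obs 1000
* Illustrative covariates for the simulated individuals
gen schooling = round(runiform(0, 20))
gen exper = round(runiform(0, 30))
* Illustrative disturbance and Mincer-type income equation
gen u = rnormal(0, 10)
gen y = 500 + 100*schooling + 50*exper - 1.5*exper^2 + u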

Let's say we want to study income in our city, so one might use a simple approximate model for the regression equation. In this case, we know by some logic that schooling and experience are related to income, so we propose the model in (2) to study the phenomenon:
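
Using α's for the parameters of this approximate model, to distinguish them from the β's of the DGP, specification (2) reads:

$$y = \alpha_0 + \alpha_1\text{schooling} + \alpha_2\text{exper} + e \qquad (2)$$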

Regressing the specification in (2) on our Monte Carlo sample of 1,000 individuals, we get the following results.

We can see that the coefficients on experience and the constant term are not very close to those of the DGP, while the estimate for years of schooling, on the other hand, is approximately accurate. All variables are significant at the 5% level and the R² is quite high.

We want to make sure we have the right variables, so we use the Ramsey RESET test to check for an omitted variables problem. Let's first predict the residuals of the above regression with predict u, res and then perform the omitted variables test with estat ovtest:

The Ramsey test indicates no omitted variables at the 5% significance level, so we now have some assurance that we are using the right variables. Let's now check the normality assumption with a graph of the distribution of the predicted residuals; in Stata we use the command histogram u, norm

Graphically, the result shows that the behavior of the residuals is non-normal. To confirm this, we perform a formal test with sktest u and see the following results.

The normality test on the residuals rejects normality: at a 5% significance level, the predicted residuals have a non-normal distribution. This invalidates the t statistics on the coefficients in the regression of equation (2).

We should go back to the functional form of the regression model in (2) and consider that experience might have decreasing or increasing returns on income. So we adapt our specification, including the square of experience to capture the changing marginal effect of the variable:
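
Keeping the α notation from (2), the new specification adds the quadratic term:

$$y = \alpha_0 + \alpha_1\text{schooling} + \alpha_2\text{exper} + \alpha_3\text{exper}^2 + e$$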

Now, in order to regress this model in Stata, we need to generate the squared term of experience. To do this we type gen exp_sq=experience*experience, where experience is our variable.

We now have our squared experience variable, which we include in the regression command as the following image presents.

We can see that the coefficients are quite close to the DGP in (1), because the specification is now closer to the real relationship between the variables in our simulated exercise. The negative sign of the squared term indicates decreasing returns of experience on income, and the marginal effect is given by:
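
In that notation, the marginal effect of experience is:

$$\frac{\partial y}{\partial \text{exper}} = \alpha_2 + 2\alpha_3\,\text{exper}$$

and since the estimated α3 is negative, the effect declines as experience grows.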

Let's predict the residuals of our new regression model with predict u2, res and check their distribution using histogram u2, norm

By graphical inspection, the residuals present a normal distribution; we confirm this with the formal normality test using the command sktest u2

According to this result, we cannot reject the null hypothesis that the predicted residuals of our second regression model are normally distributed, so we accept that the residuals of our last estimation follow a normal distribution at the 5% significance level.

The conclusion of this exercise is that even if we have the right variables for a regression model, as in equation (2), if the functional form of the specification is not correct then the residuals will not be normally distributed.

Correcting the specification of the regression model can therefore be considered a solution to non-normality problems, since the interactions between the variables are modeled better. In real estimations, however, finding the right functional form is often harder and is tied to problems of the data, nonlinear relationships, external shocks, and atypical observations; still, it is worth inspecting the data to find the proper functional form of the variables and establish a regression model as close as possible to the data generating process.

References

Mincer, J. (1974). Education, Experience and the Distribution of Earnings and of Employment. New York: National Bureau of Economic Research (for the Carnegie Commission).

Rodríguez Revilla, R. (2014). Econometria I y II. Bogotá, Colombia: Universidad Los Libertadores.

StataCorp (2017). Stata Statistical Software: Release 15. College Station, TX: StataCorp LLC. Available at: https://www.stata.com/products/