Handling structural breaks with logarithms

As we saw in other econometric blogs of M&S Research Hub, the use of logarithms is a common practice in econometrics. Beyond the problems that can arise from overusing them, logarithms have the advantage of reducing the heteroscedasticity -HT- present in the series of a dataset (Nau, 2019), along with other improvements that this monotonic transformation performs on the data.

In this article, we explore the usefulness of the logarithm transformation for reducing the presence of structural breaks in the time series context. First, we review what a structural break is and what the implications of regressing data with structural breaks are; finally, we perform a short empirical analysis of the Gross Domestic Product -GDP- of Colombia in Stata.

The structural break

We can define a structural break as a situation in which a sudden, unexpected change occurs in a time series variable, or in the relationship between two time series (Casini & Perron, 2018). In this order of ideas, a structural change might look like this:

Source: Shrestha & Bhatta (2018)

The basic idea is to identify abrupt changes in time series variables, but we are not restricting such identification to the time domain: a break can also be detected when we scatter an X and a Y variable where the independent variable is not necessarily time. We can distinguish different types of breaks in this context; according to Hansen (2012), we can encounter breaks in 1) the mean, 2) the variance, and 3) relationships, and we can also face single breaks, multiple breaks, and continuous breaks.

Basic problems of the structural breaks

Without going into complex mathematical definitions of structural breaks, we can establish some of the problems that arise when our data presents this situation. The first problem was identified by Andrews (1993) and concerns parameter stability under structural change: in simple terms, in the presence of a break, the least squares estimators tend to vary over time, which is of course undesirable. The ideal situation is that the estimators be time invariant, so that we preserve the Best Linear Unbiased Estimator -BLUE-.

The second problem of structural breaks (or changes) not taken into account during the regression analysis is that the estimator becomes inefficient: the estimated parameters show a significant increase in variance, so we lose efficiency, and our inferences or forecasts will not correspond to reality.

A third problem may appear if the structural break influences the unit root identification process. This is not a widely explored topic, but Tai-Leung Chong (2001) makes excellent points about it. Any time series analysis should always consider the existence of unit roots in the variables in order to provide further tools to handle the phenomenon, and that includes the cointegration field and forecasting techniques.

An empirical approximation

Suppose we want to model the trend of the GDP of the Colombian economy. Naturally, this kind of analysis explicitly takes the GDP as the dependent variable and time as the independent variable, following the form:

Y_t = f(t)

In this case, we know that the GDP, expressed in Y, is going to be a function of time t. We can assume for a start that the function f(t) follows a linear approximation:

Y_t = a + α·t   (1)

With the expression in (1), gross domestic product has an autonomous value, independent of time, defined by a, and we get the slope coefficient α, which has the usual interpretation: for an increase of one time unit, the GDP increases by α.

The linear approximation sounds ideal to model the GDP against changes over time, assuming that t has a periodicity of years, meaning that we have annual data (so we are excluding seasonal phenomena); however, we should always inspect the data with some graphs.

In Stata, once we have tsset the database, we can inspect the graphical behavior with the command “scatter y t”.

In sum, the linear approximation might not be a good idea given this behavior of the real GDP of the Colombian economy for the period of analysis (1950-2014), and there appear to be some structural changes, judging by the trend, which changes the slope of the curve drastically around the year 2000.
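The slope-change intuition can be sketched quickly outside Stata. The following pure Python snippet is an illustrative simulation, not the actual Colombian GDP data: it builds a series whose trend slope changes at a hypothetical break year (2000, with arbitrary slopes of 2 and 5) and fits a simple OLS slope on each side of the break.

```python
def ols_slope(xs, ys):
    """Closed-form slope of a simple linear regression of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

years = list(range(1950, 2015))
break_year = 2000                     # hypothetical break date
# Trend slope of 2.0 before the break and 5.0 after it (arbitrary values)
gdp = [100 + 2.0 * (t - 1950) + 3.0 * max(0, t - break_year) for t in years]

pre = [(t, y) for t, y in zip(years, gdp) if t <= break_year]
post = [(t, y) for t, y in zip(years, gdp) if t >= break_year]

slope_pre = ols_slope([t for t, _ in pre], [y for _, y in pre])
slope_post = ols_slope([t for t, _ in post], [y for _, y in post])
print(slope_pre, slope_post)  # 2.0 before the break, 5.0 after it
```

Fitting a single line to the whole sample would average these two regimes, which is exactly why one linear trend fits poorly around a break.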

If we regress the expression in (1), we get the following results.

The linear fit of time (in years) to the GDP is quite good: time explains around 93% of the variation of the GDP of the Colombian economy, and the parameter is significant at the 5% level.

Now I want you to focus on two basic things: the variance of the model, which is 1.7446e+09, and the confidence interval, which places the estimator between 7613.081 and 8743.697. Without other values to compare these two things against, we should just keep them in mind.

Now we can proceed with a test to identify structural breaks in the regression we have just performed. We simply type “estat sbsingle” to test for a structural break with an unknown date.

The interesting thing here is that the structural break test identifies one important change over the full sample period of 1950 to 2014. The full-sample test is called the “supremum Wald test”, and it is said to have less power than average or exponential tests. However, the test is useful for simply identifying structural breaks, which also tend to match the graphical analysis. According to the test, we have a structural break in the year 2002, so it would be useful to graph the behavior before and after this year in order to assess the possible changes. We can do this with the command “scatter y t” and include some if conditions, as follows.

twoway (scatter Y t if t<=2002) (lfit Y t if t<=2002) (scatter Y t if t>=2002) (lfit Y t if t>=2002)

We can observe that the trend actually changes when we fit lines for the partial periods given by t<=2002 and t>=2002, meaning that the slope change is a sign of the structural break detected by the program. You can address this issue by including a dummy variable equal to 0 before 2002 and equal to 1 from 2002 onward. However, let us now graph the logarithm transformation of the GDP. The mathematical model would be:

Y_t = a·e^(α·t)

Applying natural logarithms, we get:

ln(Y_t) = ln(a) + α·t

α now becomes the average growth rate per year of the GDP of the Colombian economy. To implement this transformation, use the command “gen ln_Y=ln(Y)”; the graphical behavior would look like this:

 gen ln_Y=ln(Y)
 scatter ln_Y t

The power of the monotonic transformation is now visible: the variable follows an almost straight line, which can be fitted using a linear regression. In fact, let's regress the expression in Stata.

Remember that I told you to keep in mind the variance and the confidence intervals of the first regression? Well, now we can compare them, since we have two models: the variance of the last regression is 0.0067, and the interval is indeed close to the coefficient (around 0.002 of difference between the upper and lower bounds for the parameter). So this model fits even better than the first.
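A minimal sketch of why the log model behaves so well (pure Python, with synthetic data; the 4% annual growth rate is an assumption for illustration): if a series grows exponentially at rate α, its logarithm is exactly linear in time, and the fitted slope of the log series recovers α.

```python
import math

def ols_slope(xs, ys):
    """Closed-form slope of a simple linear regression of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

a, alpha = 100.0, 0.04               # level and assumed 4% annual growth
t = list(range(65))                  # 65 observations, like the 1950-2014 sample
y = [a * math.exp(alpha * ti) for ti in t]

slope = ols_slope(t, [math.log(yi) for yi in y])
print(slope)  # 0.04: the slope of the log series is the growth rate alpha
```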

If we perform the “estat sbsingle” test again, it is highly likely that another structural break will appear. But we should not worry too much if this happens, because we rely on the graphical analysis to proceed with the inferences; in other words, we should be parsimonious with our models: explain the most with the least.

The main conclusion of this article is that the logarithm, with its property of being a monotonic transformation, constitutes a quick, powerful tool that can help us reduce (or even remove) the influence of structural breaks in our regression analysis. Structural changes are also signs of exogenous transformations of the economy. Applying this idea to the Colombian economy, we see its growth speed changing from 2002 until recent years; we should consider that in 2002 Colombia went through a change of government focused on public policies aimed at eliminating terrorist groups, which probably had an impact on investment in the economy and might explain the growth since then.


Andrews, D. W. (1993). Tests for Parameter Instability and Structural Change With Unknown Change Point. Econometrica, Vol. 61, No. 4, 821-856.

Casini, A., & Perron, P. (2018). Structural Breaks in Time Series. Retrieved from Economics Department, Boston University: https://arxiv.org/pdf/1805.03807.pdf

Hansen, B. E. (2012). Advanced Time Series and Forecasting. Lecture 5: Structural Breaks. University of Wisconsin Madison. Retrieved from: https://www.ssc.wisc.edu/~bhansen/crete/crete5.pdf

Nau, R. (2019). The logarithm transformation. Retrieved from Data concepts The logarithm transformation: https://people.duke.edu/~rnau/411log.htm

Shrestha, M., & Bhatta, G. (2018). Selecting appropriate methodological framework for time series data analysis. The Journal of Finance and Data Science. Retrieved from: https://www.sciencedirect.com/science/article/pii/S2405918817300405

Tai-Leung Chong, T. (2001). Structural Change in AR(1) Models. Econometric Theory, 17, 87-155.

Taking Logarithms of Growth Rates and Log-based Data

A usual practice when handling economic data is the use of logarithms; the main idea behind using them is to reduce the heteroscedasticity -HT- of the data (Nau, 2019). Reducing HT thus implies reducing the variance of the data. Several times, different authors implement some kind of double logarithm transformation, which is defined as taking logarithms of data that is already in logarithms or in growth rates (via differencing of logarithms).

The objective of this article is to present the implications of these procedures, first by analyzing what the logarithm does to a variable, and then by observing what inferences can be made when logarithms are applied to growth rates.

There are a series of properties of logarithms that should be considered first. We will not review them here, but the reader can check them in the following citation (Monterey Institute, n.d.). Now let's consider a bivariate equation:

y = a + B·x   (1)

The coefficient B represents the marginal effect of a change of one unit in x on y. So, interpreting the estimation with the ordinary least squares estimator gives the following analysis: when x increases by one unit, y increases by B. It is a linear equation where the marginal effect is given by the derivative:

dy/dx = B

When we introduce logarithms into equation (1) by modifying the functional form, the estimation becomes non-linear. However, let's first review what logarithms might do to the x variable. Suppose x is a time variable that follows an upward trend and is highly heteroscedastic, as the next graph shows.

We can graphically appreciate that the variable x has a positive trend and also deviates from its mean over time. One way to reduce the HT present in the series is to apply a logarithm transformation. Using natural logarithms, the behavior is shown in the next graph.

The units have changed drastically: the logarithm of x now lies roughly between 2 and 5, whereas before the variable ranged from 10 to 120 (the range has been compressed). The natural logarithm reduces HT because it is a monotonic transformation (Sikstar, n.d.). When we use this kind of transformation in econometrics, as in the following regression equation:

y = a + B·ln(x)   (2)

the coefficient B is no longer the marginal effect; to interpret it, we need to divide it by 100 (Rodríguez Revilla, 2014). Therefore, the result should be read as: an increase of 1% in x produces a change of B/100 in y.
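A quick numerical check of that reading (pure Python; the coefficient values a = 2 and B = 50 are arbitrary illustrative choices): in y = a + B·ln(x), a 1% increase in x changes y by roughly B/100.

```python
import math

a, B = 2.0, 50.0                     # arbitrary coefficients for y = a + B*ln(x)

x0 = 10.0
x1 = x0 * 1.01                       # a 1% increase in x
dy = (a + B * math.log(x1)) - (a + B * math.log(x0))
print(dy, B / 100)  # dy is approximately B/100 = 0.5
```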

If we use a double-log model, the equation can be written as:

ln(y) = a + B·ln(x)   (3)

In this case, the elasticity is simply B, which is interpreted in percentage terms. For example, if B=0.8, an increase of 1% in x results in an increase of 0.8% in y.
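This elasticity reading can be verified numerically (pure Python; the function y = x^0.8 is a hypothetical example whose double-log form has B = 0.8):

```python
x0 = 10.0
x1 = x0 * 1.01                       # a 1% increase in x
y0, y1 = x0 ** 0.8, x1 ** 0.8        # y = x^0.8, so ln(y) = 0.8*ln(x)
pct_change_y = (y1 - y0) / y0 * 100
print(pct_change_y)  # approximately 0.8: a 0.8% increase in y
```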

On the other hand, if we use a log-linear model, the equation can be written as:

ln(y) = a + B·x   (4)

In this case, B must be multiplied by 100, and it can be interpreted as the average growth rate of y per unit increase in x. If x = t, measured in years, then B is the average growth of y per year.

Logarithms are also used to calculate growth rates, since we can say that:

(x_t − x_{t−1}) / x_{t−1} ≈ ln(x_t) − ln(x_{t−1})   (5)

The meaning of equation (5) is that the growth rate of a variable (the left-hand side of the equation) is approximately equal to the difference of logarithms. Returning to our x variable in the last graph, we can see that both calculations of the growth rate are similar.
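The approximation in (5) is easy to verify with a short made-up series (pure Python; the numbers are purely illustrative):

```python
import math

x = [100, 103, 101, 106, 104]        # illustrative series

growth = [(x[t] - x[t - 1]) / x[t - 1] for t in range(1, len(x))]
log_diff = [math.log(x[t]) - math.log(x[t - 1]) for t in range(1, len(x))]

for g, d in zip(growth, log_diff):
    print(round(g, 4), round(d, 4))  # the two columns are close for small changes
```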

The influence of the monotonic transformation is appreciable: the growth rate formula has higher (positive) spikes than the difference of logarithms, and conversely, the lower spikes come from the difference of logarithms. Yet both are approximately growth rates, indicating the change over time of our x variable.

For example, consider the 10th year in the above graph. The difference in logarithms indicates a growth rate of -0.38%, while the growth rate formula indicates -0.41% between year 9 and year 10. Approximately, there is 0.4% negative growth between these years.

When we apply logarithms to these kinds of transformations, we get, mathematically speaking, something like this:

ln((x_t − x_{t−1}) / x_{t−1}) ≈ ln(ln(x_t) − ln(x_{t−1}))

Some authors do this freely to normalize the data (in other words, to reduce the HT), but would the interpretation remain the same? What are the consequences of doing this? Is it something good or bad?

As usual, the answer is: it depends. Consider, for example, years 9 and 10 of our original x variable again: the change is negative, so the growth rate is negative, and we cannot take the logarithm of a negative value.

With this exercise, we can see that the first consequence of overusing logarithms (on differenced logarithms and growth rates in general) is that if we have negative values, the calculation becomes undefined, so missing data will appear. If we graph the results of such a procedure, we get something like this:

At this point, the graph takes the undefined values (the result of taking the logarithm of negative values) as 0 in the case of Excel; other software might not even place a point. We had negative values of a growth rate (as expected), but what we have now is a meaningless set of data, which is bad because we are deleting valuable information from other time points.
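The missing-data effect is easy to reproduce (pure Python; the growth rates below are made up): math.log raises an error for non-positive values, so every negative growth rate is simply lost.

```python
import math

growth_rates = [0.05, -0.04, 0.02, -0.01]    # made-up rates, some negative

log_of_growth = []
for g in growth_rates:
    try:
        log_of_growth.append(math.log(g))
    except ValueError:                       # log of a non-positive number
        log_of_growth.append(None)           # the observation becomes missing

print(log_of_growth)  # negative rates turn into None: missing observations
```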

Let's forget for now the x variable we have been working with, and assume we have a square function:

y = z^2

Taking the logarithm of this variable and using the power property, we get:

ln(y) = 2·ln(z)

and if we apply another log transformation, then we have:

ln(ln(y)) = ln(2·ln(z))

However, consider that if z = 0, the first logarithm is undefined, and thus we cannot calculate the second. We can appreciate this in some calculations, as the following table shows.

The logarithm of 0 is undefined, so its double logarithm is undefined too. When z = 1, the natural logarithm is 0, and the second transformation is again undefined. Here we can detect another problem when some authors, in order to normalize the data, apply logarithms indiscriminately: the result is a potential missing data problem, due to the monotonic transformation, whenever values of the data are zero.

Finally, if we have a range of data between 0 and 1, the logarithm transformation produces negative values, so the second logarithm transformation is pointless, since all the data in this range becomes undefined.
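These undefined cases can be tabulated with a short sketch (pure Python; the z values are illustrative): z = 0 fails at the first logarithm, while z = 1 and any z in (0, 1) fail at the second.

```python
import math

def safe_log(v):
    """Natural log, or None when the value is non-positive (undefined)."""
    try:
        return math.log(v)
    except ValueError:
        return None

for z in [0.0, 0.5, 1.0, 2.0, 10.0]:
    first = safe_log(z)
    second = safe_log(first) if first is not None else None
    print(z, first, second)  # 0.0, 0.5 and 1.0 all lose the double log
```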

The conclusions of this article are the following. When we use logarithms on growth rates, if there are negative values in the original growth rate, applying logarithms makes those values undefined: missing data will occur and the interpretation becomes harder. If we apply a double log transformation, the zeros and negative values in the data will become undefined, so the missing data problem appears again. Econometricians should take this into consideration, since it is a question that often arises during research; in order to make correct inferences, analyzing the original data before applying logarithms should be a step prior to any econometric procedure.


Monterey Institute. (n.d.). Properties of Logarithmic Functions. Retrieved from: http://www.montereyinstitute.org/courses/DevelopmentalMath/TEXTGROUP-1-19_RESOURCE/U18_L2_T2_text_final.html

Nau, R. (2019). The logarithm transformation. Retrieved from Data concepts: The logarithm transformation: https://people.duke.edu/~rnau/411log.htm

Rodríguez Revilla, R. (2014). Econometría I y II. Bogotá: Universidad Los Libertadores.

Sikstar, J. (n.d.). Monotonically Increasing and Decreasing Functions: an Algebraic Approach. Retrieved from: https://opencurriculum.org/5512/monotonically-increasing-and-decreasing-functions-an-algebraic-approach/