ARDL model and the General to Specific methodology

On hearing the word ARDL, the first thing that comes to mind is the bounds testing approach introduced by Pesaran and Shin (1999). Pesaran and Shin’s approach is an incredible use of the ARDL, however, the term ARDL is much older, and the ARDL model has many other uses as well. In fact, the equation used by Pesaran and Shin is a restricted version of the ARDL, and the unrestricted version of the ARDL was introduced by Sargan (1964) and popularized by David F. Hendry and his coauthors in several papers. The most important paper is the one usually known as DHSY, but we will come to the details of DHSY later. Let me first introduce what the ARDL model is and what the advantages of this model are.


What is the ARDL model?

The ARDL model is an a-theoretic model for modeling the relationship between two time series. Suppose we want to see the effect of a time series variable Xt on another variable Yt. The ARDL model for this purpose will be of the form

Yt=a+b1Yt-1+b2Yt-2+…+bjYt-j+d0Xt+d1Xt-1+…+dkXt-k+et

The same model can be written more compactly as

Yt=a+Σ(i=1 to j) biYt-i+Σ(i=0 to k) diXt-i+et

In layman’s language, this means the dependent variable is regressed on its own lags, the independent variable and the lags of the independent variable. The above ARDL model can be termed an ARDL(j, k) model, referring to the numbers of lags j and k in the model.

The model itself is written without any theoretical considerations. However, a large number of theoretical models are embedded inside this model, and one can derive an appropriate theoretical model by testing and imposing restrictions on the model.

To have a more concrete idea, let’s consider the case of the relationship between consumption and income. To simplify further, let’s consider j=k=1, so that the ARDL(1,1) model for the relationship between consumption and income can be written as

Model 1:          Ct=a+b1Ct-1+d0Yt+d1Yt-1+et

Here C denotes consumption and Y denotes income; a, b1, d0 and d1 denote the regression coefficients and et denotes the error term. So far, no theory has been used to develop this model and the regression coefficients don’t have any theoretical interpretation. However, this model can be used to select an appropriate theoretical model for consumption.

Suppose we have estimated the above-mentioned model and found the regression coefficients. We can test any one coefficient, and/or any number of coefficients, for various kinds of restrictions. Suppose we test the restriction that

R1: H0: b1=d1=0

Suppose that testing the restriction on actual data implies that the restriction is valid; this means we can exclude the corresponding variables from the model. Excluding these variables, the model becomes

Model 2:          Ct=a+ d0Yt+et

Model 2 is actually the Keynesian consumption function (also called the absolute income hypothesis), which says that current consumption depends on current income only. The coefficient of income in this equation is the marginal propensity to consume, and Keynes predicted that this coefficient would be between 0 and 1, implying that individuals consume a part of their income and save a part of their income for the future.

Suppose instead that the data did not support the restriction R1, however, the following restriction is valid

R2: H0: d1=0

This means model 1 would become

Model 3:          Ct=a+b1Ct-1+d0Yt+et

This means that current consumption depends on current income and past consumption. This is called the Habit Persistence model, where past consumption serves as a proxy for habit. The model says that what was consumed in the past affects current consumption, which is evident from human behavior.

Suppose instead that the data did not support the restrictions considered so far, however, the following restriction is valid

R3: H0: b1=0

This means model 1 would become

Model 4:          Ct=a+ d0Yt+d1Yt-1+et

This means that current consumption depends on current income and past income. This is called the Partial Adjustment model. According to the Keynesian consumption function, consumption should depend only on current income, but the partial adjustment model says that it takes some time to adjust to a new level of income. Therefore, consumption depends partially on current income and partially on past income.

In a similar way, one can derive many other models from Model 1 which represent different theories. The details of the models that can be drawn from Model 1 can be found in Charemza and Deadman (1997), ‘New Directions in Econometric Practice…’.

It can also be shown that difference form models are derivable from Model 1. Consider the following restriction

R4: H0: b1=1 and d1=-d0

If this restriction is valid, Model 1 will become

Ct=a+Ct-1+d0Yt-d0Yt-1+et

This model can be re-written as

Ct-Ct-1=a+d0(Yt-Yt-1)+et

This means

Model 5: ΔCt=a+d0ΔYt+et

This indicates that the difference form models can also be derived from the model 1 with certain restrictions

Further elaboration shows that the error correction models can also be derived from model 1.

Consider Model 1 again and subtract Ct-1 from both sides; we get

Ct-Ct-1=a+b1Ct-1-Ct-1+d0Yt+d1Yt-1+et

Adding and subtracting d0Yt-1 on the right hand side we get

ΔCt=a+(b1-1)Ct-1+d0Yt+d1Yt-1+d0Yt-1-d0Yt-1+et

ΔCt=a+(b1-1)Ct-1+d0ΔYt+d1Yt-1+d0Yt-1+et

ΔCt=a+(b1-1)Ct-1+d0ΔYt+(d1+d0)Yt-1+et

This equation contains an error correction mechanism if

R6: (b1-1)= -(d1+d0)

Assume

(b1-1)= -(d1+d0)=φ

The equation will then reduce to

Model 6:          ΔCt=a+φ(Ct-1-Yt-1)+d0ΔYt+et

This is our well-known error correction model, which can be derived if R6 is valid.

Therefore, the existence of an error correction mechanism can also be tested from Model 1; it is supported by the data if R6 is found valid.

As we have discussed, a number of theoretical models can be derived from Model 1 by testing certain restrictions. We can start from Model 1 and test different restrictions, impose the restrictions which are found valid, and discard the restrictions which are found invalid. This provides us a natural way of selecting among the various theoretical models.
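
As an illustration, here is a minimal Python sketch of this restriction-testing procedure on simulated data. The data-generating process, the variable names and the use of statsmodels are my own assumptions for the example, not part of the original text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Simulate income and a consumption series that follows Model 1:
# C_t = a + b1*C_{t-1} + d0*Y_t + d1*Y_{t-1} + e_t
Y = 100.0 + np.cumsum(rng.normal(0.5, 1.0, n))
C = np.zeros(n)
C[0] = 80.0
for t in range(1, n):
    C[t] = 5.0 + 0.5 * C[t - 1] + 0.4 * Y[t] + 0.1 * Y[t - 1] + rng.normal(0.0, 1.0)

df = pd.DataFrame({"C": C, "Y": Y})
df["C_lag"] = df["C"].shift(1)
df["Y_lag"] = df["Y"].shift(1)
df = df.dropna()

# Step 1: estimate the unrestricted ARDL(1,1) model (Model 1) by OLS
model1 = smf.ols("C ~ C_lag + Y + Y_lag", data=df).fit()

# Step 2: test the restrictions discussed above with Wald/F tests
print(model1.f_test("C_lag = 0, Y_lag = 0"))   # R1: reduces Model 1 to the Keynesian function (Model 2)
print(model1.f_test("Y_lag = 0"))              # R2: reduces Model 1 to the habit persistence model (Model 3)
print(model1.f_test("C_lag + Y + Y_lag = 1"))  # R6: (b1-1) = -(d0+d1), the error correction form (Model 6)

# Step 3: keep the restriction the data do not reject and estimate the restricted model
model3 = smf.ols("C ~ C_lag + Y", data=df).fit()
print(model3.summary())
```

In this sketch the general-to-specific step is simply: estimate the general ARDL(1,1), test each restriction, and re-estimate only the restricted model that the data do not reject.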

When we say a theoretical model, we mean that the model makes some economic sense. For example, Models 2 to 6 all make economic sense. So how do we decide between these models? This problem can be solved if we start with an ARDL model and choose to impose only those restrictions which are permitted by the data.

The famous DHSY paper recommends a methodology like this. DHSY recommend that we should start with a large model which encompasses various theoretical models. The model can then be simplified by testing certain restrictions.

In another blog post I have argued that if there are different theories for a certain variable, the research must be comparative. This short post gives a brief outline of how we can do this. In practice, one needs to take larger ARDL structures, and the number of models that can be derived from the parent model will also be large.


Research in presence of multiple theories

Consider a hypothetical situation: a researcher was given a research question, to compare the mathematical ability of male and female students of grade 5. The researcher collected data on 300 female students and 300 male students of grade 5 and administered a test of mathematical questions. The average score of the female students was 80% and the average score of the male students was 50%; the difference was statistically significant, and the researcher therefore concluded that female students have better mathematical aptitude.

The findings seem strong and impressive, but let me add the information that the male students were chosen from a far-off village with untrained educational staff and a lack of educational facilities, whereas the female students were chosen from an elite school in a metropolitan city where the best teachers of the city serve. What should the conclusion be now? It can be argued that the difference does not actually come from gender; it comes from the school type.

The researcher carrying out the project says: ‘Look, my research assignment was only to investigate the difference due to gender; the school type is not the question I am interested in, therefore I have nothing to do with the school type.’

Do you think the researcher’s argument is valid and the findings should be considered reliable? The answer is obvious: the findings are not reliable, and the school type creates a serious bias. The researcher must compare students from the same school type. This implies that you have to take care of variables that are not mentioned in your research question if they are determinants of your dependent variable.

Now let’s apply the same logic to econometric modeling. Suppose we have the task of analyzing the impact of financial development on economic growth. We run a regression of GDP growth on a proxy of financial development, obtain a regression output, and present the output as the impact of financial development on economic growth. Is this reliable research?

This research is deficient in just the same way as our example of gender and mathematical ability. The research is not reliable if ceteris paribus does not hold: the other variables which may affect the outcome variable should remain the same.

But in real life it is often very difficult to keep all other variables the same. The economy continuously evolves, and so do the economic variables. The other way to overcome the problem is to take the other variables into account while running the regression. This means the other variables that determine your dependent variable should be included as control variables. Suppose you want to check the effect of X1 on Y using the model Y=a+bX1+e, while other research studies indicate that another model exists for Y, namely Y=c+dX2+e. Then you cannot run the first model ignoring the second model. If you run only the first model and ignore the others, the results will be biased in the same way as in our example of mathematical ability. We have to use the variables of the second model as control variables, even if we are not interested in their coefficients. Therefore, the estimated model would be Y=a+bX1+cX2+e.
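
A small simulation makes the point; it is purely illustrative and not from the original text, and the coefficient values are arbitrary. If X2 determines Y and is correlated with X1, leaving X2 out biases the estimated coefficient of X1.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

# X1 and X2 are correlated (like gender and school type in the example above)
x2 = rng.normal(size=n)
x1 = 0.8 * x2 + rng.normal(size=n)

# True model: Y depends on both X1 and X2
y = 1.0 + 0.5 * x1 + 2.0 * x2 + rng.normal(size=n)

# Regression ignoring X2: the coefficient on X1 is badly biased (far above 0.5)
short = sm.OLS(y, sm.add_constant(x1)).fit()
print(short.params)

# Regression with X2 as a control variable: the coefficient on X1 is close to 0.5
both = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(both.params)
```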

Taking the control variables into account is possible when there are only a few models. The seminal study by Davidson, Hendry, Srba and Yeo, ‘Econometric modelling of the aggregate time-series relationship between ….’ (often referred to as DHSY), summarizes the way to build a model in such a situation. But it often happens that a very large number of models exists for one variable. For example, there is a very large number of models for growth: a study titled ‘Growth Econometrics’ by Durlauf and co-authors lists hundreds of models for growth used by researchers in their studies. Life becomes very complicated when you have so many models. Estimating a model with all determinants of growth would be literally impossible for most countries using the classical methodology. This is because growth data are usually available at annual or quarterly frequency, and the number of predictors taken from all models collectively would exceed the number of observations. Time series data also have a dynamic structure, and taking lags of variables makes things more complicated. Therefore, classical econometric techniques often fail to work for such high-dimensional data.

Some experts have developed sophisticated techniques for modeling in scenarios where the number of predictors becomes very large. These techniques include Extreme Bounds Analysis, Weighted-Average Least Squares, and Autometrics, among others. High-dimensional econometric techniques are a very interesting field of econometric investigation in their own right. However, DHSY is extremely useful for situations where there is more than one model for a variable based on different theories. The DHSY methodology is also called the LSE methodology, the General to Specific methodology, or simply the G2S methodology.


The Rise of Behavioral Econometrics

The lessons from behavioral economics have ameliorated social wellbeing and economic success in recent years. Academics and policymakers now recognize that integrating how individuals behave and make decisions in real-life dramatically improves the effectiveness of public policies and the validity of simple theoretical models. Thus, this area of research has enhanced our understanding of the barriers to decision-making and led to the emergence of a wider and richer theoretical and empirical framework to inform human decision making.

This framework builds on fields such as sociology, anthropology, psychology, economics, and political science. Two of the last four Nobel Prizes in Economics (2017 and 2019) have been awarded to behavioral and experimental economists also working on development-related problems. The wider results from this body of work have been used by academics, governments, and international organizations to design evidence-based policies in a wide range of areas such as finance, tax collection, healthcare, education, energy consumption and human cooperation.


Based on this relevance, the present workshop aims to teach the foundations of behavioral economics and show how its instruments can help improve social and economic outcomes in problems found in modern public policy. Similarly, the workshop will cover the statistical and econometric techniques (and commands) needed to ensure the correct implementation of interventions and the assessment of their results.

Learn more and register at the upcoming workshop in March 2021 at https://ms-researchhub.com/home/training/expert-metrics-behavioral-and-experimental-econometrics.html


Don't Waste Your Time…

I just can't understand it… you waste your money and your time to get a certificate that isn't recognized outside your governorate, you receive an education that has no relation to real knowledge, and then you graduate, search for a job with your qualifications and find nothing… in the end you have only yourself to blame.

Education in Germany is free at every level, up to and including the PhD, and there are very many scholarships so that you don't spend a penny from your own pocket, plus job opportunities and experience whether you decide to settle there or return to your home country…

The whole thing looks easy, but it is not easy; still, God willing, we will make it easy…

We are a research and academic company in Germany, and a month ago we announced a student support service for all university levels, bachelor's, master's and PhD, in all fields, to help students apply to universities and scholarships and get a real chance at change and a better education. The service is not free, but its fees are heavily reduced, especially for students from developing countries.

If you are in your final year of secondary school, this is the right time to prepare your papers for a bachelor's degree… and if you are in your final year of university, this is the right time to apply for graduate studies.

Thinking of going to the USA or the UK? Why pay thousands for your studies when you can study for free in Germany, at some of the best universities in the world?

The process does take time and effort.

Fill in the form on our website and we will contact you to explain the next steps:

https://ms-researchhub.com/home/study_support.html

And if you really cannot afford the service fees but are willing to put in the time and effort to reach your goal, fill in the form anyway; we may reduce the fees further for you or waive them entirely.

For your information, the chance to study in Germany for free is becoming limited: universities have started imposing high tuition fees on students from outside the European Union. This system has begun in 8 of the largest universities in Germany and will continue until it applies to all universities.

Here is an article about this topic:

https://www.studying-in-germany.org/germany-will-reintroduce-tuition-fees-non-eu-students/

Don't miss the opportunity; start today, not tomorrow.


Log-linearisation in Short


There exist many types of models and equations for which no closed-form solution exists. In these cases, we use a method known as log-linearisation. One example of this kind of model is the class of non-linear models such as Dynamic Stochastic General Equilibrium (DSGE) models, which are non-linear both in parameters and in variables. Because of this, solving and estimating these models is challenging.

Hence, we have to use approximations to the non-linear models. We make concessions in doing this, as some features of the models are lost, but the models become more manageable.

In the simplest terms, we first take the natural logs of the non-linear equations and then we linearise the logged difference equations about the steady state. Finally, we simplify the equations until we have linear equations where the variables are percentage deviations from the steady state. We use the steady state as that is the point where the economy ends up in the absence of future shocks.

In the literature, the main body of estimation has usually consisted of linearised models, but after the global financial crisis more and more non-linear models are being used. Many discrete-time dynamic economic problems require the use of log-linearisation.

There are several ways to do log-linearisation; some examples are provided in the bibliography below.

One of the main methods is the application of a Taylor series expansion. Taylor’s theorem tells us that the first-order approximation of an arbitrary function f around a point x* is f(x) ≈ f(x*) + f′(x*)(x − x*).

We can use this to log-linearise equations around the steady state. Since we would be log-linearising around the steady state, x* would be the steady state.

For example, let us consider a Cobb-Douglas production function and then take a log of the function.

The next step would be to apply Taylor Series Expansion and take the first order approximation.

Since we know that

Those parts of the function will cancel out. We are left with –

For notational ease, we define these terms as percentage deviation of x about x* where x* signifies the steady state.
Thus, we get

At last, we have log-linearised the Cobb-Douglas production function around the steady state.
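
Since the equation images of the original post are not reproduced here, the following is a reconstruction of the derivation sketched above, assuming the standard Cobb–Douglas form with technology A, capital K, labour N and capital share α (the symbols are my labelling, not necessarily the original one):

```latex
% First-order Taylor approximation of an arbitrary function f around x*
f(x) \approx f(x^{*}) + f'(x^{*})(x - x^{*})

% Cobb-Douglas production function and its log
Y_t = A_t K_t^{\alpha} N_t^{1-\alpha}
\qquad\Longrightarrow\qquad
\ln Y_t = \ln A_t + \alpha \ln K_t + (1-\alpha)\ln N_t

% Expanding each log to first order around the steady state, using ln x_t ~ ln x* + (x_t - x*)/x*
\ln Y^{*} + \frac{Y_t - Y^{*}}{Y^{*}}
= \ln A^{*} + \frac{A_t - A^{*}}{A^{*}}
+ \alpha\left(\ln K^{*} + \frac{K_t - K^{*}}{K^{*}}\right)
+ (1-\alpha)\left(\ln N^{*} + \frac{N_t - N^{*}}{N^{*}}\right)

% The steady-state relation ln Y* = ln A* + alpha ln K* + (1-alpha) ln N* cancels out.
% Defining the percentage deviation \hat{x}_t = (x_t - x^{*})/x^{*} for each variable:
\hat{y}_t = \hat{a}_t + \alpha \hat{k}_t + (1-\alpha)\hat{n}_t
```

The last line is the log-linearised Cobb–Douglas production function around the steady state, matching the narrative above.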

Bibliography:
Sims, Eric (2011). Graduate Macro Theory II: Notes on Log-Linearization – 2011. Retrieved from https://www3.nd.edu/~esims1/log_linearization_sp12.pdf


Zietz, Joachim (2006). Log-Linearizing Around the Steady State: A Guide with Examples. SSRN Electronic Journal. 10.2139/ssrn.951753.


McCandless, George (2008). The ABCs of RBCs: An Introduction to Dynamic Macroeconomic Models, Harvard University Press


Uhlig, Harald (1999). A Toolkit for Analyzing Nonlinear Dynamic Stochastic Models Easily. In Computational Methods for the Study of Dynamic Economies, Oxford University Press.


Learning Central Limit Theorem with Microsoft Excel

Many statistical and econometric procedures depend on the assumption of normality. The importance of the normal distribution lies in the fact that sums and averages of random variables tend to be approximately normally distributed regardless of the distribution of the individual draws. The central limit theorem explains this fact. The central limit theorem is very important since it provides the justification for most of statistical inference. The goal of this paper is to provide a pedagogical introduction to the CLT in the form of a self-study computer exercise: a student-friendly illustration of how the central limit theorem works. The mathematics of the theorem is introduced in the last section of the paper.

CENTRAL LIMIT THEOREM

We start with an example where we observe a phenomenon, and then we discuss the theoretical background of the phenomenon.

Consider 10 players playing with identical dice simultaneously. Each player rolls the dice a large number of times. The six numbers on the dice have an equal probability of occurrence on any roll and for any player. Let us ask the computer to generate data that resembles the outcomes of these rolls.

We need Microsoft Excel (2007 or later is preferable) for this exercise. Point to the ‘Data’ tab in the menu bar; it should show ‘Data Analysis’ in the tools bar. If Data Analysis is not there, then you need to install the Analysis ToolPak. For this, click on the Office button, which is the round button at the top-left corner of the Microsoft Excel window. Choose ‘Add-Ins’ from the left pane that appears, then check the box against ‘Analysis ToolPak’ and click OK.

Select Office Button → Excel Options → Add-Ins → Analysis ToolPak → Go from the screen that appears.

The computer will take a few moments to install the Analysis ToolPak. After the installation is done, you will see ‘Data Analysis’ on pointing again to the Data tab in the menu bar. The Analysis ToolPak provides a variety of tools for statistical procedures.

We will generate data that matches the situation described above using this tool pack.

Open an Excel spreadsheet and write 1, 2, 3, …, 6 in cells A1:A6.

Write ‘=1/6’ in cell B1 and copy it down to B6.

This shows the possible outcomes of a roll of the dice and their probabilities.

This will show you the following table:

Outcome   Probability
1         0.167
2         0.167
3         0.167
4         0.167
5         0.167
6         0.167

Here the first column contains the outcomes of a roll of the dice and the second column contains the probabilities of the outcomes. Now we want the computer to make some draws from this distribution; that is, we want the computer to roll the dice and record the outcomes.

For this, go to Data → Data Analysis → Random Number Generation and select the discrete distribution. Set Number of Variables = 10 and Number of Random Numbers = 1000, enter the Value and Probability Input Range A1:B6, set the Output Range to A8, and click OK.

This will generate a 1000×10 matrix of outcomes of rolls of the dice in cells A8:J1007. Each column represents the outcomes for a certain player in 1000 draws, whereas each row represents the outcomes for the 10 players in one particular draw. In the next column, K, we want the sum of each row. Write ‘=SUM(A8:J8)’ in cell K8 and copy it down. This will generate the column of sums for each draw.

Now we are interested in knowing what the distribution of outcomes for each player is:

Let us ask Excel to count the frequency of each outcome for player 1. Choose Data → Data Analysis → Histogram and fill the dialogue box as follows:

The dialogue box is filled to count the frequency of the outcomes observed by player 1. The input range is the column for which we want to count the frequency of outcomes, and the bin range is the range of possible outcomes. This process will generate the frequency of the six possible outcomes for the single player. When we did this, we got the following output:

Bin    Frequency
1      155
2      154
3      160
4      169
5      179
6      183
More   0

The table above gives the frequencies of the outcomes, and the same frequencies are plotted in the bar chart. You can observe that the frequencies of occurrence are approximately equal: the heights of the vertical bars are approximately the same. This implies that the distribution of draws is almost uniform, and we know this should happen because we made the draws from a uniform distribution. If we calculate the percentage of each outcome, it comes to 15.5%, 15.4%, 16%, 16.9%, 17.9% and 18.3% respectively. These percentages are close to the probability of each outcome, i.e. 16.67%.

Now we want to check the distribution of the column which contains the sums of the draws for the 10 players, i.e. column K. The range of possible values of this column of sums goes from 10 to 60 (if all columns have 1 the sum would be 10, and if all columns have 6 the sum would be 60; in all other cases it lies between these two numbers). It would be inappropriate to count the frequency of every number in this range, so let us make a few bins and count the frequencies of these bins. We choose the bins (10, 20], (20, 30], …, (50, 60]. Again we ask Excel to count the frequencies of these bins. To do this, write 10, 20, …, 60 in column M of the spreadsheet (these numbers are the boundaries of the bins we made). Now select Data → Data Analysis → Histogram and fill the dialogue box that appears.

The input range would be the range that contains the sums of the draws, i.e. K8:K1007, and the bin range would be the address of the cells where we have written the boundary points of our desired bins. Completing this procedure produces the frequency of each bin. Here is the output we got from this exercise.

Bin    Frequency
10     0
20     5
30     211
40     638
50     144
60     2
More   0

The first row of this output tells us that there was no number smaller than the starting point of the first bin, i.e. smaller than 10, and the 2nd, 3rd, … rows give the frequencies of the bins (10, 20], (20, 30], … respectively. The last row gives the frequency of numbers larger than the end point of the last bin, i.e. 60.

A plot of this frequency table obviously has no resemblance to the uniform distribution. Rather, if you remember the famous bell shape of the normal distribution, this plot is much closer to that shape.

Let us summarize our observations from this experiment. We have several columns of random numbers that resemble rolls of a die, i.e. the possible outcomes are 1, …, 6, each with probability 1/6 (a uniform distribution). If we count the frequencies of these outcomes in any single column, the outcomes reveal the shape of the parent distribution and the histogram is almost uniform. The last column contains the sum of 10 draws from this uniform distribution, and we saw that the distribution of this column is no longer uniform; rather, it closely matches the shape of the normal distribution.
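
For readers who prefer to reproduce the experiment outside Excel, here is a minimal Python sketch of the same exercise; the use of NumPy and the variable names are my own choices, not part of the original Excel workflow.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1000 "rolls" for each of the 10 players: draws from the discrete uniform distribution on 1..6
rolls = rng.integers(1, 7, size=(1000, 10))

# Frequency of outcomes for player 1: roughly equal, about 1/6 each (uniform)
values, counts = np.unique(rolls[:, 0], return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))

# Column of sums across the 10 players: its histogram is bell-shaped, not uniform
sums = rolls.sum(axis=1)
print(np.histogram(sums, bins=[10, 20, 30, 40, 50, 60])[0])

# Check of the CLT parameters: E(sum) = 10 * 3.5 = 35, V(sum) = 10 * 35/12 = 29.17 approximately
print(sums.mean(), sums.var(ddof=1))
```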

Explanation of the observation:

The phenomenon that we observed can be explained by the central limit theorem. According to the central limit theorem, if X1, X2, …, Xn are independent draws from any distribution (not necessarily uniform) with finite variance, then the distribution of the sum of the draws X1 + X2 + … + Xn, and of the average of the draws, would be approximately normal if the sample size n is large.

Mean and SE for the sum of draws:

From our primary knowledge about random variables we know that, if each draw Xi has mean E(Xi) = μ and variance V(Xi) = σ², then for the sum S = X1 + X2 + … + Xn of n independent draws

E(S) = nμ

and

V(S) = nσ², so that SE(S) = σ√n.

These two statements give the parameters of the normal distribution that emerges from the sum of random draws, and we have observed this phenomenon in the exercise described above.

Verification

Consider the exercise discussed above: columns A:J are draws from the dice roll with expectation 3.5 and variance 2.91667. Column K is the sum of the 10 previous columns. The expected value of column K is thus 10*3.5=35 and its variance is 10*2.91667=29.1667. This also implies that the SE of column K is 5.40 (the square root of the variance).

The mean and variance in the above exercise can be calculated as follows:

Write ‘=AVERAGE(K8:K1007)’ in any blank cell of the spreadsheet. This will calculate the sample mean of the numbers in column K. The answer will be close to 35; when I did this, I found 34.95.

Write ‘=VAR(K8:K1007)’ in any blank cell of the spreadsheet. This will calculate the sample variance of the numbers in column K. The answer will be close to 29.17; when I did this, I found 30.02.

Summary:

In this exercise, we observed that if we take draws from a certain distribution, the frequencies of the draws reflect the probability structure of the parent distribution. But when we take the sum of the draws, the distribution of the sum reveals the shape of the normal distribution. This phenomenon has its roots in the central limit theorem, which is stated in Section …..


A brief mathematical revision of the Ramsey Model

We mentioned the Solow-Swan model in the last post in order to explain the importance of the specification relating theories to regression analysis. In this post, I am going to explain a little bit more about the neoclassical optimization related to consumption; in this case, the theory of Ramsey (1928) on the behavior of savings and consumption is going to be fundamental.

We first declare some usual assumptions: a closed economy, so net exports are NX=0; net investment equal to I−δK, where δ is a common depreciation rate of the economy for all kinds of capital; and no government spending in the model, so G=0. Finally, we set a function which is going to capture the individual utility u(c), given by:
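
The utility function image is missing from this copy; a standard reconstruction, assuming the usual constant-elasticity (CIES) form with elasticity parameter θ (my notation), is:

```latex
u(c) = \frac{c^{1-\theta} - 1}{1-\theta}, \qquad \theta > 0,\ \theta \neq 1
```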

This is referred to as the constant intertemporal elasticity of substitution (CIES) utility function of consumption c over time t. The behavior of this function can be described as follows:

This is a utility function with concave behavior: as consumption in per capita terms increases, utility also increases; however, the variation of utility relative to consumption decreases until it reaches a semi-constant state, where the slope between two consumption points c1 and c2 is decreasing.

We can establish some properties of the function here, namely

u′(c) > 0

and that

u″(c) < 0

That implies that the utility at a higher consumption point is bigger than at a lower consumption point, but the variation between the points is decreasing every time.

The overall utility function for the whole economy evaluated at a certain time can be written as:
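
The integral itself is not reproduced in this copy; given the definitions in the next paragraph, it presumably takes the form:

```latex
U(0) = \int_{0}^{\infty} u\bigl(c(t)\bigr)\, e^{nt}\, e^{-\rho t}\, dt
     = \int_{0}^{\infty} u\bigl(c(t)\bigr)\, e^{-(\rho - n)t}\, dt
```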

Where U is the aggregated utility of the economy at a certain time (t=0), e is the exponential function, ρ is the intergenerational discount rate of consumption (this refers to how much individuals discount the consumption of the next generations relative to their present consumption), n is the growth rate of the population, t is time, u(c) is our individual utility function, and dt is just the differential which indicates what we are integrating over.

Think of this integral as a sum. You aggregate the individual utilities at each point in time, weighted by the population size, but you also need to bring back to the present the utility of generations which are far away from our time period; this is where ρ enters, and its role is very similar to that of the interest rate in present-value accounting.

This function is basically considering all time periods for the sum of individuals’ utility functions, in order to aggregate the utility of the economy U (generally evaluated at t=0).

This is our objective function, because we are maximizing utility, but we need to restrict utility to the income of the families. In order to do this, the Ramsey model considers the ownership of financial assets by Ricardian families. This means that neoclassical families can take part in the financial market, holding assets, obtaining returns or incurring debts.

The aggregate equation for the evolution of financial assets and bonds B is given by:
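
The missing equation, given the description that follows, is presumably:

```latex
\dot{B} = wL + rB - C
```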

Where the left-side term is the evolution of all of the financial assets of the economy over time, w is the real wage rate, L is the aggregate amount of labor, r is the rate of return on the whole stock of assets B in the economy, and finally C is aggregate consumption.

The equation tells us that the overall evolution of the total financial assets of the economy is given by total income (the wage rate multiplied by the amount of labor, plus the returns on the total stock of financial assets) minus the total consumption of the economy.

We need to find this in per capita terms, so we divide everything by L

And get to this result.

Where b=B/L and c is consumption in per capita terms. Now we need to find the time derivative of B/L, and to do this we use the definition of financial assets in per capita terms given by:

And now we differentiate with respect to time, so we get:

We solve the derivative in general terms as:

And changing to the dot notation (where a dot indicates the variation over time):

We have

We separate the fractions and get:

Finally, we have:

where we are going to solve for the remaining term to complete our equation derived from the restriction of the families.

And we replace equation (2) into equation (1)

to obtain:

This is the equation for the evolution of financial assets in per capita terms, where we can see that it depends positively on the wage rate of the economy and on the rate of return on financial assets, while it depends negatively on per capita consumption and on the growth rate of the population.
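
Since the intermediate equations are missing from this copy, here is a compact reconstruction of the steps described above, using the notation of the text:

```latex
b = \frac{B}{L}
\qquad\Longrightarrow\qquad
\dot{b} = \frac{\dot{B}}{L} - \frac{B}{L}\,\frac{\dot{L}}{L} = \frac{\dot{B}}{L} - n b

\text{and, using } \dot{B} = wL + rB - C:\qquad
\dot{b} = w + r b - c - n b
```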

The maximization problem of the families is then given as

Where we assume that b(0)>0, which indicates that at the beginning of time there is a positive initial stock of financial assets.

We also need to impose that the utility function is bounded, so we state:

Where, in the long run, the limit of utility is going to equal 0.

Now here’s the tricky part: the use of dynamic optimization techniques. Without going into the theory behind optimal control, we can use the Hamiltonian approach to find a solution to this maximization problem. The basic structure of the Hamiltonian is the following:

H(.) = Objective function + v (Restriction)

We first need to identify two types of variables before implementing it in our exercise: the control variable and the state variable. The control variable is the one chosen directly by the decision-making agent (in this case, consumption is decided by the individual), and the state variable is the one that appears in the restriction; here the state variable is the stock of financial assets, b. The term v is the dynamic Lagrange multiplier; think of it as the shadow price of financial assets in per capita terms, representing the change in optimal utility produced by one extra unit of the assets.

We set what is inside our integral as our objective, our restriction remains the same, and the Hamiltonian is finally written as:
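
A sketch of the Hamiltonian, given the objective and the constraint reconstructed above (the notation is mine, not the original image):

```latex
H = u\bigl(c(t)\bigr)\, e^{-(\rho - n)t} + v\,\bigl[\, w + (r - n)\,b - c \,\bigr]
```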

The first-order conditions are given by:

One could ask why we set the partial derivatives in this way. It is part of optimal control theory, but in short, the condition for the control variable is a maximization condition (that is why it is set equal to 0), while the condition for the state variable, the financial assets b, is set equal to the negative of the change in the shadow price, so that an extra financial asset over time is traded off against a decrease in its shadow value.
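
Since the first-order-condition images are not reproduced here, the two conditions just described would read, under the reconstructed Hamiltonian above:

```latex
\frac{\partial H}{\partial c} = 0
\;\Longrightarrow\;
u'(c)\, e^{-(\rho - n)t} = v
\qquad\qquad
\frac{\partial H}{\partial b} = -\dot{v}
\;\Longrightarrow\;
v\,(r - n) = -\dot{v}
```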

The solution of the first-order condition dH/dc is given by:

To make the derivative easier, we can re-express it as:

to have now:

Solving the first part we get:

to finally get:

 

Now, solving the remaining term of the first-order condition, we obtain:

Thus the first-order condition is:

Now let’s handle the second equation of the first-order conditions, dH/db.

This one is a little bit easier, since:

So it remains:

Thus we get:

And that’s it, folks: the results of the optimization problem, i.e. the first-order conditions, are given by

Let’s examine these conditions. The first one tells us that the shadow price of the financial assets in per capita terms equals the marginal utility of consumption multiplied by the intergenerational discount factor (which involves ρ and the population growth rate n). A better interpretation can be obtained by using logarithms, so let’s apply them.

Let’s differentiate with respect to time, and we get:

Remember that the time derivative of a logarithm is approximately a growth rate, so we can rewrite this in growth-rate notation.

Where the growth-rate terms are defined as in the reconstruction below.
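
The logged equations are missing from this copy; a reconstruction of this step (equation (4) in the text's numbering), using the CIES utility assumed above so that u′(c) = c^(−θ), is:

```latex
\ln v = -\theta \ln c - (\rho - n)\,t
\qquad\Longrightarrow\qquad
\frac{\dot{v}}{v} = -\theta\,\frac{\dot{c}}{c} - (\rho - n)
\quad\text{i.e.}\quad
g_{v} = n - \rho - \theta\, g_{c}
```

Here g_v and g_c denote the growth rates of the shadow price and of per capita consumption respectively.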

In equation (4) we can see that the growth rate of the shadow price of the financial assets is negatively related to the discount rate ρ and to the growth rate of consumption (in the same way, if you solve this equation for consumption growth, you will find that it is negatively associated with the growth rate of the shadow price). Something interesting is that the growth rate of the population is positively associated with the growth of the shadow price, meaning that if the population is increasing, a kind of demand-pull effect raises the shadow price of assets for the economy.

If we multiply (4) by -1, like this,

and replace it in the second equation of the first-order conditions, which is

multiplied on both sides by v, we get

Replacing the above equations gives:

Cancelling n from the equation results in:

Which is the Euler equation of consumption!
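
For completeness, with the CIES utility assumed above, the final result would read:

```latex
\frac{\dot{c}}{c} = \frac{r - \rho}{\theta}
```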

References

Mankiw, N. G., Romer, D., & Weil, D. N. (1992). A contribution to the empirics of economic growth. Quarterly Journal of Economics, 407–440.

Ramsey, F. P. (1928). A mathematical theory of saving. Economic Journal, vol. 38, no. 152, 543–559.

Solow, R. (1956). A contribution to the theory of economic growth. The Quarterly Journal of Economics, vol. 70, no. 1, 65–94.
