Hence, we have to use approximations to the non-linear models. This involves a trade-off: some features of the models are lost, but the models become far more manageable.

In the simplest terms, we first take natural logs of the non-linear equations and then linearise the logged difference equations about the steady state. Finally, we simplify the equations until we have linear equations in which the variables are percentage deviations from the steady state. We use the steady state because it is the point where the economy ends up in the absence of future shocks.

In the literature, estimation has usually centred on linearised models, although since the global financial crisis non-linear models have been used more and more. Many discrete-time dynamic economic problems require the use of log-linearisation.

There are several ways to do log-linearisation; some examples are provided in the bibliography below.

One of the main methods is the application of a Taylor series expansion. Taylor's theorem tells us that the first-order approximation of an arbitrary function is as below.
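In symbols, the first-order approximation of a function f around a point x* is:

```latex
f(x) \approx f(x^{*}) + f'(x^{*})\,(x - x^{*})
```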

We can use this to log-linearise equations around the steady state. Since we would be log-linearising around the steady state, x* would be the steady state.

For example, let us consider a Cobb-Douglas production function and then take a log of the function.
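Assuming the standard Cobb-Douglas form (the exponents α and 1 − α are an illustrative assumption), the function and its log are:

```latex
Y_t = A_t K_t^{\alpha} L_t^{1-\alpha}
\qquad\Longrightarrow\qquad
\ln Y_t = \ln A_t + \alpha \ln K_t + (1-\alpha)\ln L_t
```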

The next step would be to apply Taylor Series Expansion and take the first order approximation.

Since we know that

Those parts of the function will cancel out. We are left with –

For notational ease, we define these terms as the percentage deviation of x from x*, where x* signifies the steady state.

Thus, we get
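With hat variables denoting log deviations from the steady state, and assuming the standard Cobb-Douglas form Y_t = A_t K_t^α L_t^(1−α), a sketch of the result is:

```latex
\hat{x}_t \equiv \ln x_t - \ln x^{*} \approx \frac{x_t - x^{*}}{x^{*}},
\qquad
\hat{y}_t = \hat{a}_t + \alpha\,\hat{k}_t + (1-\alpha)\,\hat{l}_t
```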

At last, we have log-linearised the Cobb-Douglas production function around the steady state.

**Bibliography:**

Sims, Eric (2011). Graduate Macro Theory II: Notes on Log-Linearization. Retrieved from https://www3.nd.edu/~esims1/log_linearization_sp12.pdf

Zietz, Joachim (2006). Log-Linearizing Around the Steady State: A Guide with Examples. SSRN Electronic Journal. doi:10.2139/ssrn.951753.

McCandless, George (2008). The ABCs of RBCs: An Introduction to Dynamic Macroeconomic Models, Harvard University Press

Uhlig, Harald (1999). A Toolkit for Analyzing Nonlinear Dynamic Stochastic Models Easily, in Computational Methods for the Study of Dynamic Economies, Oxford University Press

This approach is used to test for first-order serial correlation; the general form of the test statistic is:
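In symbols, with ρ̂ₕ the sample autocorrelation at lag h, n the number of observations and H the number of lags tested, the statistic is:

```latex
Q = n \sum_{h=1}^{H} \hat{\rho}_h^{2}
```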

The Box-Pierce Q statistic is defined as the number of observations times the sum of the squared sample autocorrelations ρ̂ in the sample at lags h = 1, …, H. The test is closely related to the Ljung & Box (1978) autocorrelation test and is used to determine the existence of serial correlation in time series analysis. Under the null hypothesis, the statistic follows a chi-square distribution.

The null hypothesis of this test is H0: the data are independently distributed, against the alternative H1: the data are not independently distributed. In other words, the null hypothesis is that the data exhibit no autocorrelation structure, against the alternative that they do.
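The mechanics of the statistic are easy to reproduce. Here is a minimal Python sketch on simulated (hypothetical) data, not the Stata module discussed below:

```python
import random

def box_pierce_q(x, H):
    """Box-Pierce Q: n times the sum of squared sample autocorrelations up to lag H."""
    n = len(x)
    xbar = sum(x) / n
    denom = sum((v - xbar) ** 2 for v in x)
    q = 0.0
    for h in range(1, H + 1):
        num = sum((x[t] - xbar) * (x[t - h] - xbar) for t in range(h, n))
        q += (num / denom) ** 2  # squared autocorrelation at lag h
    return n * q

random.seed(42)
# White noise: no serial correlation, Q is approximately chi-square with H d.f.
wn = [random.gauss(0, 1) for _ in range(500)]
# AR(1) with coefficient 0.8: strong serial correlation, Q should be large
ar = [0.0]
for _ in range(499):
    ar.append(0.8 * ar[-1] + random.gauss(0, 1))

print(box_pierce_q(wn, 4), box_pierce_q(ar, 4))
```

For the white-noise series Q stays near its chi-square mean (H), while for the AR(1) series it is orders of magnitude larger, which is exactly what the test exploits.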

The test was implemented in Stata for the panel data structure by Emad Abd Elmessih Shehata & Sahra Khaleel A. Mickaiel (2014); it works in the context of ordinary least squares panel data regression (the pooled OLS model). We will develop an example here.

First we install the package using the command ssc install as follows:

ssc install lmabpxt, replace

Then we type help to see the options:

help lmabpxt

This displays the following result.

We can notice that the syntax of the general form is:

lmabpxt depvar indepvars [if] [in] [weight] , id(var) it(var) [noconstant coll ]

Here id(var) and it(var) are the identifiers of the individuals (id) and of the time structure (it), so we need to specify them in the model.

Consider the next example

clear all

use http://www.stata-press.com/data/r9/airacc.dta

xtset airline time,y

reg pmiles inprog

lmabpxt pmiles inprog, id(airline) it(time)

Notice that the Box-Pierce test implemented by Emad Abd Elmessih Shehata & Sahra Khaleel A. Mickaiel (2014) will re-estimate the pooled regression. The general output would display this:

In this case, we can see the p-value associated with the Lagrange multiplier version of the Box-Pierce test; this p-value is around 0.96. Therefore, at a 5% level of significance, we cannot reject the null hypothesis of no AR(1) autocorrelation in the panel residuals.

Consider now that you might be using a fixed effects approach. A numerical approach would be to include dummy variables for the individuals (airlines in this case), in the context of least squares dummy variables, and then compare the results.

To do that we can use:

tab airline, gen(a)

and then include from a2 to a20 in the regression structure, with the following code:

lmabpxt pmiles inprog a2 a3 a4 a5 a6 a7 a8 a9 a10 a11 a12 a13 a14 a15 a16 a17 a18 a19 a20 , id(airline) it(time)

This would be different from the error component structure, and it would be just a fixed effects approach using least squares dummy variable regression. Notice the output.

Using the fixed effects approach with dummy variables, the p-value decreases sharply; in this case we reject the null hypothesis at the 5% level of significance, meaning that we might have a problem of first-order serial correlation in the panel data.

With this example, we have performed the Box-Pierce test for panel data (and, additionally, we established that it is sensitive to fixed effects in the regression structure).

**Notes:**

*lmabpxt appears to be somewhat sensitive when the number of observations is very large (more than 5,000 units).*

*There is an impressive compilation of contributions by Shehata, Emad Abd Elmessih & Sahra Khaleel A. Mickaiel, which can be found at the following link:*

http://www.haghish.com/statistics/stata-blog/stata-programming/ssc_stata_package_list.php

*I suggest checking it out if you need anything related to Stata.*

**Bibliography**

Box, G. E. P. and Pierce, D. A. (1970) “Distribution of Residual Autocorrelations in Autoregressive-Integrated Moving Average Time Series Models”, Journal of the American Statistical Association, 65: 1509–1526. JSTOR 2284333

G. M. Ljung; G. E. P. Box (1978). “On a Measure of a Lack of Fit in Time Series Models”. Biometrika 65 (2): 297-303. doi:10.1093/biomet/65.2.297.

Shehata, Emad Abd Elmessih & Sahra Khaleel A. Mickaiel (2014). LMABPXT: Stata Module to Compute Panel Data Autocorrelation Box-Pierce Test.

Assume a basic fitted model given by:
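In matrix form the fitted model can be written as:

```latex
\mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{e}
```

where **e** is the vector of residuals.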

Where **y** is the *n×1* vector containing the dependent variable, **X** is the *n×k* matrix containing the explanatory variables (n is the number of observations and k is the number of independent variables), and **b** is the estimated coefficient vector.

The Ramsey test fits a regression model of the type

Where **z** represents the powers of the fitted values of **y**. The Ramsey test performs a standard F test of **t** = 0, and the default setting considers the powers:
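As a sketch, the auxiliary regression and the default powers can be written as:

```latex
\mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{Z}\mathbf{t} + \mathbf{u},
\qquad
\mathbf{Z} = \begin{bmatrix} \hat{\mathbf{y}}^{2} & \hat{\mathbf{y}}^{3} & \hat{\mathbf{y}}^{4} \end{bmatrix}
```

where the powers of the fitted values are taken element-wise.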

In Stata this is easily done with the command

estat ovtest

after the regression command reg.

To illustrate this, consider the following code:

use https://www.stata-press.com/data/r16/auto

regress mpg weight foreign

estat ovtest

The null hypothesis is that **t = 0**, meaning that the powers of the fitted values have no explanatory power for the dependent variable **y**: the model has no omitted variables. The alternative hypothesis is that the model suffers from an omitted variable problem.

In the panel data structure we have multiple time periods and multiple observations for each time point; in this case we fit a model like:

With i = 1, 2, …, n observations and, for each i, t = 1, 2, …, T time periods. Here **v** represents the heterogeneous effect, which can be estimated as a parameter (fixed effects, which may be correlated with the explanatory variables) or treated as a random variable (random effects, which is assumed uncorrelated with the explanatory variables).
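In symbols, a sketch of this panel specification is:

```latex
y_{it} = \mathbf{x}_{it}\,\mathbf{b} + v_i + e_{it},
\qquad i = 1,\dots,n,\; t = 1,\dots,T
```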

To implement the Ramsey test manually in this regression structure in Stata, we follow Santos Silva's (2016) recommendation: we start by predicting the fitted values of the regression (including the heterogeneous effects!). Then we generate the powers of the fitted values and include them in the regression in (4) with clustered standard errors. Finally, we perform a joint significance test for the coefficients of the powers.

use https://www.stata-press.com/data/r16/nlswork

xtreg ln_w grade age c.age#c.age ttl_exp c.ttl_exp#c.ttl_exp tenure c.tenure#c.tenure 2.race not_smsa south, fe cluster(idcode)

predict y_hat, xbu

gen y_h_2 = y_hat*y_hat

gen y_h_3 = y_h_2*y_hat

gen y_h_4 = y_h_3*y_hat

xtreg ln_w grade age c.age#c.age ttl_exp c.ttl_exp#c.ttl_exp tenure c.tenure#c.tenure 2.race not_smsa south y_h_2 y_h_3 y_h_4, fe cluster(idcode)

test y_h_2 y_h_3 y_h_4

Alternatively, you can skip generating the powers and apply them directly using the c. and # operators, as in this code:

use https://www.stata-press.com/data/r16/nlswork

xtreg ln_w grade age c.age#c.age ttl_exp c.ttl_exp#c.ttl_exp tenure c.tenure#c.tenure 2.race not_smsa south, fe cluster(idcode)

predict y_hat, xbu

xtreg ln_w grade age c.age#c.age ttl_exp c.ttl_exp#c.ttl_exp tenure c.tenure#c.tenure 2.race not_smsa south c.y_hat#c.y_hat c.y_hat#c.y_hat#c.y_hat c.y_hat#c.y_hat#c.y_hat#c.y_hat, fe cluster(idcode)

test c.y_hat#c.y_hat c.y_hat#c.y_hat#c.y_hat c.y_hat#c.y_hat#c.y_hat#c.y_hat

At the end of the procedure you will have this result.

The null hypothesis is that the model is correctly specified and has no omitted variables; in this case, however, we reject the null hypothesis at a 5% level of significance, meaning that our model has omitted variables.
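For intuition about what the test does, here is a minimal pure-Python sketch of the same idea on hypothetical cross-sectional data (not the Stata implementation, and without the panel and clustering features):

```python
import random

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols_ssr(X, y):
    """OLS via the normal equations; returns fitted values and sum of squared residuals."""
    k, n = len(X[0]), len(X)
    XtX = [[sum(X[i][a] * X[i][c] for i in range(n)) for c in range(k)] for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = solve(XtX, Xty)
    fitted = [sum(X[i][a] * beta[a] for a in range(k)) for i in range(n)]
    ssr = sum((y[i] - fitted[i]) ** 2 for i in range(n))
    return fitted, ssr

def reset_f(x, y, powers=(2, 3, 4)):
    """RESET F statistic: add powers of the fitted values to a bivariate regression."""
    n = len(x)
    fitted, ssr_r = ols_ssr([[1.0, xi] for xi in x], y)          # restricted model
    Xu = [[1.0, x[i]] + [fitted[i] ** p for p in powers] for i in range(n)]
    _, ssr_u = ols_ssr(Xu, y)                                    # augmented model
    q, k = len(powers), 2 + len(powers)
    return ((ssr_r - ssr_u) / q) / (ssr_u / (n - k))

random.seed(1)
x = [random.uniform(0, 5) for _ in range(200)]
# Linear DGP: the linear model is correctly specified, F should be small
y_lin = [1 + 2 * xi + random.gauss(0, 1) for xi in x]
# Quadratic DGP fitted linearly: omitted nonlinearity, F should be large
y_quad = [1 + xi ** 2 + random.gauss(0, 1) for xi in x]

print(reset_f(x, y_lin), reset_f(x, y_quad))
```

For the correctly specified model the F statistic hovers near its F(3, n − 5) expectation, while the misspecified model produces a very large F, mirroring the rejection seen in the Stata output above.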

As an alternative, somewhat more restricted but with more features, you can use the user-written package “resetxt” developed by Emad Abd & Sahra Khaleel (2015), which can be used after installing it with:

ssc install resetxt, replace

This package, however, doesn't work with factor variables or time-series operators, so we cannot include, for example, the c., i., d. or L. operators.

clear all

use https://www.stata-press.com/data/r16/nlswork

gen age_sq = age*age

gen ttl_sq = ttl_exp*ttl_exp

gen tenure_sq = tenure*tenure

xtreg ln_w grade age age_sq ttl_exp ttl_sq tenure tenure_sq race not_smsa south, fe cluster(idcode)

resetxt ln_w grade age age_sq ttl_exp ttl_sq tenure tenure_sq race not_smsa south, model(xtfe) id(idcode) it(year)

However, the above code might be demanding to compute in Stata, depending on how much memory you have available. That is why this post implemented the manual procedure of the Ramsey test in the panel data structure.

**Bibliography**

Emad Abd, S. E., & Sahra Khaleel, A. M. (2015). *RESETXT: Stata Module to Compute Panel Data Regression Specification Error Tests (RESET).* Obtained from: Statistical Software Components S458101: https://ideas.repec.org/c/boc/bocode/s458101.html

Ramsey, J. B. (1969). Tests for specification errors in classical linear least-squares regression analysis. *Journal of the Royal Statistical Society Series B 31*, 350–371.

Santos Silva, J. (2016). *Reset test after xtreg & xi:reg.* Obtained from: The Stata Forum: https://www.statalist.org/forums/forum/general-stata-discussion/general/1327362-reset-test-after-xtreg-xi-reg

**CENTRAL LIMIT THEOREM**

We start with an example where we observe a phenomenon, and then we will discuss the theoretical background of the phenomenon.

Consider 10 players playing with identical dice simultaneously. Each player rolls the dice a large number of times. The six numbers on a die have equal probability of occurrence on any roll and for any player. Let us ask the computer to generate data that resembles the outcomes of these rolls.

We need Microsoft Excel (2007 or above preferable) for this exercise. Point to the ‘Data’ tab in the menu bar; it should show ‘Data Analysis’ in the tools bar. If Data Analysis is not there, then you need to install the Analysis ToolPak. For this, click the Office button at the top left corner of the Microsoft Excel window, choose ‘Add-Ins’ from the left pane that appears, then check the box against ‘Analysis ToolPak’ and click OK.

Select Office Button → Excel Options → Add-Ins → Analysis ToolPak → Go from the screen that appears.

The computer will take a few moments to install the Analysis ToolPak. After installation is done, you will see ‘Data Analysis’ on pointing again to the Data tab in the menu bar. The Analysis ToolPak provides a variety of tools for statistical procedures.

We will generate data that matches the situation described above using this ToolPak.

Open an Excel spreadsheet and write 1, 2, 3, …, 6 in cells A1:A6.

Write ‘=1/6’ in cell B1 and copy it down to B6.

This shows you the possible outcomes of a roll of the dice and their probabilities.

This will show you the following table:

1 | 0.167 |

2 | 0.167 |

3 | 0.167 |

4 | 0.167 |

5 | 0.167 |

6 | 0.167 |

Here the first column contains the outcomes of the roll of the dice and the second column contains the probability of each outcome. Now we want the computer to make some draws from this distribution. That is, we want the computer to roll the dice and record the outcomes.

For this, go to Data → Data Analysis → Random Number Generation and select the discrete distribution. Set number of variables = 10 and number of random numbers = 1000, enter the value input and probability range A1:B6, set the output range to A8 and click OK.

This will generate a 1000×10 matrix of dice-roll outcomes in cells A8:J1007. Each column represents the outcomes for a certain player in 1000 draws, whereas each row represents the outcomes for the 10 players in a particular draw. In the next column, ‘K’, we want the sum of each row. Write ‘=SUM(A8:J8)’ and copy it down. This will generate the column of sums for each draw.

Now, we are interested in knowing what the distribution of outcomes for each player is:

Let us ask Excel to count the frequency of each outcome for player 1. Choose Data → Data Analysis → Histogram and fill the dialogue box as follows:

The screenshot shows the dialogue box filled in to count the frequency of the outcomes observed by player 1. The input range is the column whose outcome frequencies we want to count, and the bin range is the range of possible outcomes. This process will generate the frequencies of the six possible outcomes for the single player. When we did this, we got the following output:

Bin | Frequency |

1 | 155 |

2 | 154 |

3 | 160 |

4 | 169 |

5 | 179 |

6 | 183 |

More | 0 |

The table above gives the frequency of the outcomes, and the same frequencies are plotted in the bar chart. You observe that the frequencies of occurrence are not exactly equal, but the heights of the vertical bars are approximately the same. This implies that the distribution of draws is almost uniform, and we know this should happen because we made draws from a uniform distribution. If we calculate the percentage of each outcome, we get 15.5%, 15.4%, 16%, 16.9%, 17.9% and 18.3% respectively. These percentages are close to the probability of these outcomes, i.e. 16.67%.

Now we want to check the distribution of the column containing the sum of the draws for the 10 players, i.e. column K. The range of possible values for the column of sums is 10 to 60 (if all columns have 1 the sum is 10, and if all columns have 6 then the sum is 60; in all other cases it lies between these two numbers). It would be inappropriate to count the frequencies of all numbers in this range. Let us instead make a few bins and count their frequencies. We choose the bins (10,20), (20,30), …, (50,60). Again we ask Excel to count the frequencies of these bins. To do this, write 10, 20, …, 60 in column M of the spreadsheet (these numbers are the boundaries of the bins we made). Now select Data → Data Analysis → Histogram and fill in the dialogue box that appears.

The input range is the range containing the sums of the draws, i.e. K8 to K1007, and the bin range is the address of the cells where we have written the boundary points of our desired bins. Completing this procedure produces the frequency of each bin. Here is the output that we got from this exercise.

Bin | Frequency |

10 | 0 |

20 | 5 |

30 | 211 |

40 | 638 |

50 | 144 |

60 | 2 |

More | 0 |

The first row of this output tells us that no number was smaller than the starting point of the first bin, i.e. smaller than 10; the 2nd, 3rd, … rows give the frequencies of the bins (10,20), (20,30), … respectively. The last row reports the frequency of numbers larger than the end point of the last bin, i.e. 60.

Below is the plot of this frequency table.

Obviously this plot bears no resemblance to the uniform distribution. Rather, if you remember the famous bell shape of the normal distribution, this plot is closer to that shape.

Let us summarize our observations from this experiment. We have several columns of random numbers that resemble rolls of dice, i.e. possible outcomes 1, …, 6, each with probability 1/6 (uniform distribution). If we count the frequency of the outcomes in any single column, the histogram is almost uniform, revealing the distributional shape. The last column contained the sum of 10 draws from the uniform distribution, and we saw that its distribution is no longer uniform; rather, it closely matches the shape of the normal distribution.

Explanation of the observation:

The phenomenon that we observed can be explained by the central limit theorem. According to the central limit theorem, if x1, x2, …, xn are independent draws from any distribution (not necessarily uniform) with finite variance, then the distribution of the sum of the draws and of the average of the draws is approximately normal if the sample size ‘n’ is large.

**Mean and SE for sum of draws:**

From our primary knowledge about random variables we know that E(X1 + … + Xn) = E(X1) + … + E(Xn), and, for independent draws, Var(X1 + … + Xn) = Var(X1) + … + Var(Xn).

Suppose each draw Xi has mean μ and variance σ².

Let S = X1 + … + Xn; then E(S) = nμ and Var(S) = nσ², so SE(S) = σ√n.

These two statements give the parameters of the normal distribution that emerges from the sum of random numbers, and we have observed this phenomenon as described above.

Verification

Consider the exercise discussed above; columns A:J are draws from dice rolls with expectation 3.5 and variance 2.91667. Column K is the sum of the 10 previous columns. The expected value of K is thus 10 × 3.5 = 35 and its variance is 2.91667 × 10 = 29.1667. This implies that the SE of column K is 5.401 (the square root of the variance).

The SD and variance in the above exercise can be calculated as follows:

Write ‘=AVERAGE(K8:K1007)’ in any blank cell in the spreadsheet. This will calculate the sample mean of the numbers in column K. The answer will be close to 35; when I did this, I found 34.95.

Write ‘=VAR(K8:K1007)’ in any blank cell in the spreadsheet. This will calculate the sample variance of the numbers in column K. The answer will be close to 29.17; when I did this, I found 30.02.
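The same experiment can be replicated outside Excel. Here is a short Python sketch of the dice simulation (plain standard library, no spreadsheet needed):

```python
import random

random.seed(7)
# 1000 draws of the sum of 10 fair dice, mirroring column K of the spreadsheet
sums = [sum(random.randint(1, 6) for _ in range(10)) for _ in range(1000)]

mean_k = sum(sums) / len(sums)
var_k = sum((s - mean_k) ** 2 for s in sums) / (len(sums) - 1)
print(mean_k, var_k)
```

With a different seed the exact values change, but the sample mean stays near 35 and the sample variance near 29.17, as the theory predicts.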

Summary:

In this exercise, we observed that if we take draws from a certain distribution, the frequency of the draws reflects the probability structure of the parent distribution. But when we take the sum of the draws, the distribution of the sum reveals the shape of the normal distribution. This phenomenon has its roots in the central limit theorem, which is stated in Section …..

*“While the internet is an important resource in efforts to stay informed and proceed with daily life during the COVID-19 pandemic, these online approaches to reducing risk are not available to everyone in the same way. Pakistan is confronting several challenges on the domestic front; a less obvious, yet nonetheless important, issue is that the digital divide is complicating the efforts that government and society must make collectively to respond to the pandemic. Indeed, the private sector, and especially Telecom, should come forward during this pandemic and find ways to bridge the digital divide as quickly as possible through reliable and cost-effective (subsidized, affordable) internet and broadband services, which have become a matter of life and death in Pakistan.”*

Like many other developing countries, the majority of Pakistani households do not have physical access to the internet, primarily due to low income and poverty. As per recent research by the Digital Rights Foundation (2020)[i], internet access in Pakistan stands at around 35 percent, with 78 million broadband and 76 million mobile internet (3/4G) connections. According to the Inclusive Internet Index 2019[ii], Pakistan fell into the last quartile of index countries, ranking 76 out of 100, and scoring particularly low on indicators pertaining to affordability[iii]. Broadband internet is not affordable for large segments of the population, and many can only afford limited mobile internet packages. As more services move from offline to digital, it is evident that a new inequality trend, the digital gap, is emerging, which can exacerbate the pandemic and the human health situation, since significant disadvantages arise when it comes to accessing the real-time information people need to respond to COVID-19. This is a problem not only for people without broadband access but also for society as a whole as we struggle to flatten the COVID-19 curve.

In Pakistan, besides structural inequalities such as class, gender, location, ability, and ethnicity, internet access is undercut by its affordability and by severe economic pressure during the pandemic, as more and more people lose their jobs; this calls for an immediate response from donors, the government and, particularly, the service providers: the Telecom sector.

While several other organizations are trying to support marginalized communities, teachers, and students, I am curious whether the telecom sector has any plan to give relief to people in this time of pandemic. The average cost of calls and internet is not subsidized, nor have any free data packages been announced via Corporate Social Responsibility. Here are some benefits of providing free access to the internet during the pandemic:

1. Reduce informational asymmetries between service providers and people in need (this includes NGOs, public sector organizations, and academic institutions).

2. Speed up and facilitate the registration and data acquisition process for social safety nets and other relief operations.

3. Help overseas relatives and students connect with their families, providing social cushioning and mental relief in a time of panic, when social and mainstream media are creating sensationalism.

4. Reduce internet poverty, which may reduce informational blockages and/or digital inequality and consequently improve families' social and economic conditions: freelancers and others with access to such internet facilities can work from home, which also reduces pressure on employers.

5. It also helps the telecom companies by increasing demand and may enhance their profits, as a proportion of society may opt for companies that offer better, low-cost internet and call services, bringing them more customers. Thus, we urge the Telcos to step up, take the lead in reducing digital inequality, and help communities fight the coronavirus pandemic.

However, the Government, besides other measures and policy responses, must also enable the Telcos to perform their duties unrestricted, and ensure that the Pakistan Telecommunication Authority (PTA) assists and facilitates the Telcos in increasing bandwidth capacity in Pakistan as a whole, but especially in the marginalized areas of Baluchistan, the Federally Administered Tribal Areas and Gilgit-Baltistan. It is also imperative for the government and the Telcos to protect citizens' privacy, guard against cyberthreats, and reduce vulnerability to unreliable internet connections at a time when the internet is tied to essential services. Last but not least, the government and service providers should also ensure a rapid response in establishing and revisiting the existing infrastructure, since it may slow down or malfunction during this period of increasing demand and flow of data and information.

[1] Program Director, M&S Research Hub, Germany

[i] Joint Statement by Digital Rights Foundation and Bolo Bhi: The Digital Gap During the COVID-19 Pandemic is Exacerbating Inequalities, March 31, 2020. https://digitalrightsfoundation.pk/joint-statement-by-digital-rights-foundation-and-bolobhi-the-digital-gap-during-the-covid-19-pandemic-is-exasperating-inequalities/

[ii] Inclusive Internet Index (2019). Retrieved from https://theinclusiveinternet.eiu.com/

A replication process generally consists of two parts. The first part is concerned with reproducing key findings from the original study. If this step is successful, the next part is performing robustness checks. Meta-analysis reveals another side of replicating published research: meta-based studies survey the empirical results of a group of published papers, attempting to test three key dimensions: statistical power, selective reporting bias, and between-study heterogeneity.

From the perspective of contributing to scientific research, replication studies are important for the continued progress of science. Given the relative scarcity of replication studies, and in recognition of the importance of these methods, editors of A-class journals (American Economic Review, Journal of Political Economy, Review of Economic Studies, Journal of Applied Econometrics) have been paying increasing attention to publishing replication studies.

The one-day intensive online workshop on 29 June 2020, “Econometric Replication: Methods & Guidelines for Designing a Replicated Study”, will teach you theoretically and practically how to design a novel replication study.

Learn about the workshop and moderator at https://www.ms-researchhub.com/home/events/workshops/econometric-replication.html

References:

http://www.economics-ejournal.org/special-areas/replications-1

https://www.deakin.edu.au/__data/assets/pdf_file/0007/1198456/WhatMeta-AnalysesReveal_WP.pdf

When it comes to educational facilities and policies for children with disabilities in Pakistan, they are at a two-fold disadvantage[3]. Recent estimates by UNESCO suggest that as many as 1.4 million children with disabilities are left without access to either inclusive or special schools[4]. While the government strategy has primarily been to offer education for children and adults with disabilities in separate “special schools”, there are several drawbacks, including that these facilities are **inadequate** and **accessible** to only a small proportion of children with disabilities. According to the British Council, “there are 330 special education schools in Islamabad, Punjab, Sindh and Khyber Pakhtunkhwa provinces. Most of these schools are in urban areas, which makes education for persons with disabilities in rural areas a challenge, and 50% of children with disabilities have access to such schools. They are costly from a public budget standpoint and keep children excluded from the rest of society.” Even if there is “access”, these special schools vary in the “quality of education” and often have inadequate support in terms of “pedagogy and a lack of proper instructors/teachers”, making “quality a question mark”[5].

This exclusion has an economic cost too: estimates from the World Bank and the Economist Intelligence Unit suggest that by 2018 the cost to Pakistan of excluding disabled persons from employment could reach US$20bn a year. The current base is US$12bn, and these costs will continue to rise each year, approximately US$5.5m per day, every day[6]. Other socio-economic reasons also come into play. Families may simply be too poor to send their children to either public or private school, or there are limited transport options where schools are far away. Beyond needing the means for schooling, social stigma also leads many parents to withhold their children from available schooling, and there is also a huge drop-out rate for children with disabilities. In the case of girls, families are overly protective, as well as worried about safety and sexual harassment or violence (rape), when it comes to handicapped girls. There are also gender biases, especially in families belonging to ultra-conservative or marginalized communities in Khyber Pakhtunkhwa and Baluchistan, where even girls without disabilities are withheld from education.

Therefore, considering the high drop-out rate, scarcity of teachers, insufficient resources, poor infrastructure, social and cultural practices, discriminatory stigmas, attitudes of the parents and difficulties in transitioning to higher levels of education, we need the following key steps towards “inclusive education” strategies/programs for persons with disabilities at the **Government** and **community** level:

- Develop a comprehensive set of laws to protect the rights and dignity of persons with disabilities, especially children and girls, in all aspects of living. This includes laws that protect them regardless of age, gender or caste to discourage discrimination, provide inclusive education, improve educational infrastructure, build teachers' capacities and transportation, use ICT4 Special Education and upgrade the curricula and quota system.
- Strengthen existing ministerial and government departments on special education, and develop a rigorous monitoring system not only to implement PwD laws and policies but to use available resources wisely and evaluate performance through key performance indicators. The existing mechanism has to be revamped at the federal, provincial and local government levels, and more representation should be given to people with disabilities, so they can help in devising better policies and laws. It is also imperative that donor and other stakeholder resources and funds for PwD education are maintained through external/internal evaluation and proper, regular monitoring.
- In order to tackle access and limited governmental resources to build new schools, the Pakistani government can use available resources by upgrading existing “stand-alone special schools” and integrating them into the “regular schooling system” in remote areas. Available special education promotes segregation, but if an inclusive educational system is promoted, persons with disabilities can be taught in the same classroom as mainstream students, making educational services more accessible and affordable. However, the government must ensure that traditional schools are equipped with proper facilities and resources and provide effective training for instructors who teach PwD.
- The government should invest in school capacity development and in utilizing youth, who can offer summer schools for PwD. Similarly, changes in school infrastructure and investment in better teacher training are equally significant in reducing disparities towards PwD.
- Similarly, innovative community-based mechanisms can also reduce the burden on the government. There are successful examples in Pakistan, such as using digital technologies/ICT to educate people with disabilities[7], distance and e-learning programs[8], as well as the Lady Health Worker Programme and several similar community-based mechanisms, which can help in educating PwD, building awareness, changing attitudes and driving change, as well as reaching marginalized communities to serve/educate PwD in a cost-effective way.
- At the community level, government and community-based organizations can work with religious leaders, and Maddaris (religious schools) and non-formal educational facilities can be utilized to cater to the educational needs of PwD. Mosques can be used for “workshop along with worship”, as they are widely available and can be used for people's welfare. These available resources also need proper mechanisms and facilities through citizen-government initiatives.
- The existing Disabled People’s Organizations (DPOs)[9] need to work together with the government, private organizations, schools such as the Allied/Bacon House School System, and local communities to ensure a united front that communicates change from a rights-based approach. These DPOs have largely focused on charity or medical aid, which are both important services, especially where the government falls short; however, there also needs to be better community awareness that reduces parental biases, stigmas and fears, and makes the case for broader change.
- Last but not least, to improve the economic condition of families of people with disabilities, the government, non-profit organizations, corporations (as part of their Corporate Social Responsibility, CSR) and DPOs should focus on enhancing such families’ economic resources and providing them with social safety nets and employment, so they can invest in their children and PwDs more effectively.

In conclusion, inclusive, adaptive and innovative mechanisms, supported at the government and community levels, will further efforts to achieve better educational outcomes for people with disabilities and enable them to contribute to Pakistan’s economy as productive members.

[1] The Economist; Intelligence Unit, Moving from the Margins; August 2014 and Information from the National Forum Supporting Women with Disabilities Emerging Concept of Women with Disabilities

[2] Moving from the margins: Mainstreaming persons with disabilities in Pakistan – produced in 2014 by the Economist Intelligence Unit for the British Council

[3] Ibid.

[4] Daily Times (2014). 1.4m disabled Pakistani children have no access to schools, says Dr. Kozue Kay Nagata, Director UNESCO Islamabad. Available at http://unesco.org.pk/education/documents/2014/efa_week/national_forum/pc_DailyTimes.pdf

[5] Ibid.

[6] Ibid.

[7] See for instance “ICT and AT4DPwDs” at http://ict4dpwd.ning.com and www.telecentre.org/group/telecentrefordisabilities, providing some successful apps and technologies mainstreaming education for PwDs.

[8] Helping the disabled: AIOU opens e-learning centre for visually impaired. Published in Express Tribune, Oct. 26, 2015. Retrieved from https://tribune.com.pk/story/979070/helping-the-disabled-aiou-opens-e-learning-centre-for-visually-impaired/?amp=1

[9] Disability organizations in Pakistan. Available at https://disabilityict4d.wordpress.com/pakistan/


Actors, football players, and models are millionaires, and their monthly paychecks can sometimes exceed some developing countries’ annual budgets, while doctors, researchers, and teachers in the majority of countries earn only enough to live on.

In such times, when all of humankind is under threat and faces a global crisis, everyone stands at the researchers’ and doctors’ doorsteps waiting for them to develop a cure or a vaccine to save the world. In such times, the true value of science and research emerges, and the significance of investing in knowledge and education becomes evident and non-negligible.

Our vision at MSR HUB, “Bridging Knowledge between those who have it and those who need it”, drives our team and defines our mission, as part of our contribution to alleviating and supporting the world through the forthcoming global recession.

The MS Research Hub institute will administer and fund the first research project, moderated by selected team members, to empirically investigate and predict how economies behave – and should behave – in times of the Coronavirus. Using historical data on similar epidemics that have hit humankind, starting from the Spanish flu at the beginning of the 20th century, passing by SARS and MERS, our objective will be to develop a prescription that the world can use to mitigate the recessionary spillovers.

This research project will be the official launch of our institute’s “Research Grant Program”, which aims to fund independent researchers from the least developed countries to carry out their planned human-related research projects in all scientific fields.

We believe first and always in mighty Allah, human-kind, and the power of knowledge and science in facing the current crises.

Dr. Sherif Hassan, CEO & Academic Division Director at MSR HUB, Germany

We first state some usual assumptions: a closed economy, so net exports *XN=0*; net investment given by *I-δK*, so the capital stock evolves as *K̇=I-δK*, where *δ* is a common depreciation rate for all kinds of capital in the economy; and no government spending, so *G=0*. Finally, we set up a function that captures individual utility, *u(c)*, given by:
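A standard form for this function, consistent with the description below (the parameter θ is my notation for the elasticity parameter, not necessarily the post's), is:

```latex
u(c) = \frac{c^{1-\theta} - 1}{1-\theta}, \qquad \theta > 0,\ \theta \neq 1
```

A higher θ means the household is less willing to substitute consumption across time; as θ → 1 this form converges to u(c) = ln c.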

This is referred to as the constant intertemporal elasticity of substitution utility function of consumption *c* over time *t*. The behavior of this function can be established as:

This is a utility function with concave behavior: as consumption in per capita terms increases, utility also increases; however, the variation of utility relative to consumption decreases until it reaches a semi-constant state, so the slope between points c1 and c2 keeps shrinking.

We can establish some results of the function here, like

And that

That implies that utility at a higher consumption point is greater than at a lower consumption point, but the marginal gain shrinks each time.
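As a quick numerical sanity check of these two results (a sketch assuming the standard CIES form u(c) = (c^(1−θ) − 1)/(1 − θ) with an illustrative θ = 2; the function names are mine):

```python
# Numerical check that CIES utility is increasing while marginal utility falls.
# The functional form and theta = 2 are illustrative assumptions.

def u(c, theta=2.0):
    """CIES utility: u(c) = (c^(1-theta) - 1) / (1 - theta)."""
    return (c ** (1.0 - theta) - 1.0) / (1.0 - theta)

def u_prime(c, theta=2.0):
    """Marginal utility: u'(c) = c^(-theta)."""
    return c ** (-theta)

c1, c2 = 1.0, 2.0
print(u(c2) > u(c1))              # True: utility rises with consumption
print(u_prime(c2) < u_prime(c1))  # True: marginal utility falls
```

Both checks print `True`, matching the concavity argument: u(c2) > u(c1) while u′(c2) < u′(c1).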

The overall utility function for the whole economy evaluated at a certain time can be written as:

Where *U* is the aggregate utility of the economy at a certain time (t=0), *e* is the exponential function, *ρ* is the intergenerational discount rate of consumption (this refers to how much individuals discount their present consumption relative to the next generations), *n* is the growth rate of the population, *t* is time, *u(c)* is our individual utility function, and *dt* is just the differential indicating what we are integrating.
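Written out, the aggregate utility just described takes the following form (a reconstruction assuming population L(t) = e^{nt} weights per-capita utility, with initial population normalized to one):

```latex
U = \int_{0}^{\infty} e^{-\rho t}\, e^{n t}\, u\big(c(t)\big)\, dt
  = \int_{0}^{\infty} e^{-(\rho - n) t}\, u\big(c(t)\big)\, dt
```

A positive ρ shrinks the weight on far-future utility, while population growth n raises it; convergence of the integral requires ρ > n.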

Think of this integral as a sum. You aggregate the individual utilities at each point in time, taking the population size into account, but you also need to bring the utility of generations far from our time period back to the present; this is where *ρ* enters, and its role is very similar to that of the interest rate in present-value accounting.

This function is basically considering all time periods for the sum of individuals’ utility functions, in order to aggregate the utility of the economy U (generally evaluated at t=0).

This is our target function, because we are maximizing utility, but we need to constrain the utility by the income of the families. To do this, the Ramsey model considers the financial-asset holdings of the Ricardian families. This means that neoclassical families can play a role in the financial market: holding assets, earning returns, or taking on debt.

The aggregate equation for the evolution of financial assets and bonds *B* is given by:
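Consistent with the term-by-term description that follows, this constraint can be written as:

```latex
\dot{B} = wL + rB - C \tag{1}
```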

Where the left-side term is the evolution of all the financial assets of the economy over time, *w* refers to the real wage rate, *L* is the aggregate amount of labor, *r* is the interest rate of return on the whole stock of assets in the economy *B*, and finally, *C* is aggregate consumption.

The equation is telling us that the overall evolution of the total financial assets of the economy is given by total income (wages multiplied by the amount of labor, plus the revenues from the total stock of financial assets) minus the total consumption of the economy.

We need to find this in per capita terms, so we divide everything by L

And get to this result.

Where *b=B/L* and *c* is consumption in per capita terms. Now we need to find the term with a dot over *B/L*; to do this, we use the definition of financial assets in per capita terms, given by:

And now we differentiate with respect to time, so we get:

We solve the derivative in general terms as:

And changing to dot notation (dots indicate variation over time):

We have

We separate the fractions and get:

Finally, we have:

Where we solve for the remaining term to complete our equation derived from the families’ constraint.

We then substitute equation (2) into equation (1) to obtain:

This is the equation for the evolution of financial assets in per capita terms. We can see that it depends positively on the economy’s wage rate and on the interest rate of return on financial assets, while it depends negatively on per capita consumption and on the growth rate of the population.
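The chain of steps just described can be collected in one place (a reconstruction using dot notation for time derivatives):

```latex
\frac{\dot{B}}{L} = w + rb - c, \qquad
\dot{b} = \frac{d}{dt}\!\left(\frac{B}{L}\right)
        = \frac{\dot{B}}{L} - \frac{B}{L}\,\frac{\dot{L}}{L}
        = \frac{\dot{B}}{L} - nb
\;\Longrightarrow\;
\frac{\dot{B}}{L} = \dot{b} + nb \tag{2}
```

Substituting (2) into the per-capita version of (1) gives the law of motion \(\dot{b} = w + (r - n)b - c\).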

The maximization problem of the families is then given as
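In symbols (a reconstruction combining the objective and constraint derived above):

```latex
\max_{\{c(t)\}} \int_{0}^{\infty} e^{-(\rho - n)t}\, u\big(c(t)\big)\, dt
\quad \text{s.t.} \quad \dot{b} = w + (r - n)b - c, \qquad b(0) > 0
```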

Where we assume that b(0)>0, which indicates that at the beginning of time the families hold a positive initial stock of financial assets.

We need to impose that the utility function is bounded, so we state:

Where, in the long run, the limit of (discounted) utility equals 0.

Now here’s the tricky part: the use of dynamic optimization techniques. Without going into the theory behind optimal control, we can use the Hamiltonian approach to solve this maximization problem. The basic structure of the Hamiltonian is the following:

*H(.) = Target Function + v (Restriction)*

We first need to identify two types of variables before implementing this in our exercise: the control variable and the state variable. The control variable is the one chosen by the decision-making agent (in this case, consumption is decided by the individual), and the state variable is the one governed by the constraint: here, the financial assets or bonds b. The term v is the dynamic Lagrange multiplier; think of it as the shadow price of financial assets in per capita terms, representing the optimal change in individual utility from one extra unit of assets.

We set what is inside our integral as our objective, our constraint remains the same, and the Hamiltonian is finally written as:
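With the objective and constraint identified above, the (present-value) Hamiltonian reads (a reconstruction):

```latex
H = e^{-(\rho - n)t}\, u(c) + v\big[\, w + (r - n)b - c \,\big]
```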

The first-order conditions are given by:
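In symbols, these are (a reconstruction consistent with the discussion that follows):

```latex
\frac{\partial H}{\partial c} = 0, \qquad
\frac{\partial H}{\partial b} = -\dot{v}
```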

One could ask why we set the partial derivatives this way. Well, it’s part of optimal control theory, but in short: the control variable is set to maximize the Hamiltonian (that’s why its derivative is set equal to 0), while the derivative with respect to the bonds (the state variable) must equal the negative of the evolution of their shadow price, because we need a relationship where holding an extra financial asset over time lowers discounted utility.

The solution of the first-order condition *dH/dc* is given by:

To make the derivative easier, we can re-express:

To have now:

Solving the first part, we get:

To finally get:

Now, solving the remaining term of the first-order condition, we obtain:

Thus the first-order condition is:
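Assuming the CIES form u(c) = (c^(1−θ) − 1)/(1 − θ), so that u′(c) = c^(−θ), this condition reads (a reconstruction):

```latex
\frac{\partial H}{\partial c} = e^{-(\rho - n)t}\, c^{-\theta} - v = 0
\quad\Longrightarrow\quad
e^{-(\rho - n)t}\, c^{-\theta} = v \tag{3}
```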

Now let’s handle the second first-order condition, *dH/db*.

Which is a little bit easier since:

So, it remains:

Thus we get:
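Since only the term v(r − n)b in the Hamiltonian involves b, this second condition reads (a reconstruction):

```latex
\frac{\partial H}{\partial b} = v\,(r - n) = -\dot{v}
```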

And that’s it, folks: the results of the optimization problem, the first-order conditions, are given by

Let’s examine these conditions. The first one tells us that the shadow price of financial assets in per capita terms equals the marginal utility of consumption times the discount factor, which accounts for the generations within the population growth rate. A better interpretation can be obtained by using logarithms, so let’s apply them.

Let’s differentiate with respect to time, and we get:

Remember that the time derivative of a logarithm is approximately equivalent to a growth rate, so we can write this in another notation.

Where each dotted ratio (the dot over a variable divided by the variable itself) denotes a growth rate.
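Carrying out these steps on equation (3), with the assumed CIES form, gives:

```latex
\ln v = -(\rho - n)\,t - \theta \ln c
\quad\Longrightarrow\quad
\frac{\dot{v}}{v} = -(\rho - n) - \theta\,\frac{\dot{c}}{c} \tag{4}
```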

In equation (4) we can identify that the growth rate of the shadow price of financial assets is negatively related to the discount rate ρ and to the growth rate of consumption (likewise, if you solve for consumption growth in this equation, you find that it is negatively associated with the growth rate of the shadow price). Something interesting is that the growth rate of the population is positively associated with growth in the shadow price, meaning that if the population is increasing, a kind of demand-pull pressure raises the shadow price in the economy.

If we multiply (4) by -1, like this

and substitute it into the second first-order condition, which is

Multiplying both sides by v, we get

Combining the equations above leads to:

Cancelling n from the equation results in:

Which is the Euler equation of consumption!
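As a small executable sketch of what the Euler equation ċ/c = (r − ρ)/θ implies (parameter values are illustrative assumptions, not from the post), consumption follows an exponential path whose growth rate is pinned down by r, ρ and θ:

```python
import math

# Euler equation of consumption: c_dot / c = (r - rho) / theta.
# Illustrative parameter values (assumptions, not from the post):
r, rho, theta = 0.05, 0.02, 2.0   # interest rate, discount rate, CIES parameter

g = (r - rho) / theta             # implied constant growth rate of consumption
c0, T = 1.0, 10.0
cT = c0 * math.exp(g * T)         # consumption path: c(t) = c(0) * e^(g*t)

print(round(g, 4))                # 0.015: consumption grows because r > rho
print(round(cT, 4))               # 1.1618: consumption after 10 periods
```

When r > ρ, the return to saving outweighs impatience and consumption grows over time; a higher θ (less willingness to substitute across time) dampens that growth.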

Mankiw, N. G., Romer, D., & Weil, D. N. (1992). A Contribution to the Empirics of Economic Growth. *Quarterly Journal of Economics*, 107(2), 407-440.

Ramsey, F. P. (1928). A Mathematical Theory of Saving. *Economic Journal*, 38(152), 543-559.

Solow, R. (1956). A Contribution to the Theory of Economic Growth. *The Quarterly Journal of Economics*, 70(1), 65-94.