**What is CGE Modeling?**

Computable General Equilibrium (CGE) modeling is a type of economic modeling used to study the impacts of economic policies and shocks on the economy as a whole. The goal of CGE modeling is to provide a comprehensive and consistent representation of the interactions among different sectors and agents within an economy.

In a CGE model, the economy is divided into several sectors (such as agriculture, manufacturing, services, etc.), and agents (such as households, firms, government, etc.). The model represents the interdependence among these sectors and agents through their relationships in markets for goods, services, and factors of production.

CGE models are used to analyze a wide range of policy issues, such as trade liberalization, tax changes, environmental regulations, and natural disasters. The models are often used to estimate the impacts of these policies on macroeconomic variables such as GDP, consumption, investment, exports, and imports.

CGE modeling is a powerful tool for policy analysis, but it also has its limitations, including the simplifying assumptions made about the structure of the economy and the data requirements. Therefore, it is important to understand the limitations of CGE models and interpret the results with caution.
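The market-clearing logic at the heart of a CGE model can be illustrated with a deliberately tiny sketch. The single linear demand and supply curves below use invented numbers; a real CGE model solves many such zero-excess-demand conditions jointly for goods, services, and factors, typically in GEMPACK or GAMS:

```python
from scipy.optimize import brentq

# Hypothetical linear demand and supply for a single market.
# A full CGE model clears many such markets (goods, factors) simultaneously.
def demand(p):
    return 100.0 - 4.0 * p  # quantity demanded at price p

def supply(p):
    return 20.0 + 6.0 * p   # quantity supplied at price p

# Equilibrium: the price at which excess demand is zero
p_star = brentq(lambda p: demand(p) - supply(p), 0.0, 100.0)
q_star = demand(p_star)
```

Analytically, 100 − 4p = 20 + 6p gives p* = 8 and q* = 68, which the root-finder reproduces; in a full CGE model the same conditions form a large nonlinear system solved for all prices at once.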

**There are two main software environments for building and simulating CGE models**

**1- GTAP/GEMPACK**

CGE-GTAP is a computable general equilibrium (CGE) model used to study global trade policies and their impacts. The GTAP model is a widely used tool for evaluating the economic effects of trade policy and other global economic changes.

To learn CGE-GTAP, you can follow these steps:

- Familiarize yourself with basic concepts in economics, especially macroeconomics, trade theory, and computable general equilibrium models.
- Consider taking a GTAP course or training program, either online or in person, to deepen your understanding of the software and its applications, such as the introductory and advanced GTAP training offered as part of M&S Research Hub’s GEM program (https://www.ms-researchhub.com/home/training/gem-training.html).
- Study the GTAP model on the Global Trade Analysis Project website (https://www.gtap.agecon.purdue.edu/default.asp): its structure and assumptions, as well as its various modules and databases.
- Learn how to use the software tools associated with the GTAP model, such as GEMPACK/RunGTAP or GAMS-based implementations that can interface with the GTAP database.
- Get hands-on experience by conducting your own simulations using the GTAP model and interpreting the results.
- Read and stay updated with the latest research that utilizes the CGE-GTAP model, to keep up-to-date with new developments and best practices.

**2- GAMS**

GAMS (General Algebraic Modeling System) is a high-level modeling system for mathematical programming and optimization. It is widely used in the field of economics, especially in computable general equilibrium (CGE) modeling, to develop, implement and solve complex mathematical models.

GAMS is particularly useful for CGE modeling because it provides a flexible and efficient platform for representing and solving complex economic models. With GAMS, researchers can easily implement and solve their models and compare the results to alternative scenarios. The system also provides tools for data handling, result visualization, and scenario analysis.

To learn GAMS, you can follow these steps:

- Familiarize yourself with basic concepts in economics, mathematical programming and optimization. This will help you understand the underlying principles and applications of GAMS.
- Get a basic understanding of the GAMS language syntax and structure by going through the GAMS user manual and tutorials available on the GAMS website (www.gams.com).
- Consider taking a GAMS course or training program, either online or in person, to deepen your understanding of the software and its applications, such as the GAMS training offered as part of M&S Research Hub’s GEM program (https://www.ms-researchhub.com/home/training/gem-training.html).
- Start solving simple problems with GAMS and gradually increase the complexity of the models you work with. This will help you understand the capabilities and limitations of the software and how to use it effectively.
- Get hands-on experience by working on projects and case studies using GAMS. This will help you to understand how to use GAMS to solve real-world problems.
- Join online forums and discussion groups to connect with other GAMS users and share knowledge and experience. This will also provide you with opportunities to learn from others and stay updated with the latest developments in the field.


Let’s say we want to use the full power of the World Bank’s data, but downloading the files manually and then importing them can be tedious. We can avoid this by using the WDI package in R, so let’s proceed to gather some data!

install.packages('WDI')

library(WDI)

Now that we have the library, we can pick some countries and variables from the World Bank (https://data.worldbank.org/indicator) and download everything using R. I will choose the USA, Germany, China, and Brazil for our analysis, and I want to explore the relationship between CO2 emissions and GDP.

To proceed with this task, we need to identify the codes of the WDI indicators, which can be found in the URL of each variable, as the next image shows.

So, in this case, CO2 emissions (metric tons per capita) has the code “EN.ATM.CO2E.PC”; the same applies to GDP per capita, which is “NY.GDP.PCAP.PP.KD”. With these, we can now put everything into the WDI() function in R, as the next code shows.

df <- WDI(country = c("CHN","USA","DEU","BRA"), indicator = c("EN.ATM.CO2E.PC", "NY.GDP.PCAP.PP.KD") )

Notice that to select the countries we used ISO3 codes for China, the USA, Germany, and Brazil, together with the associated indicators. This will download all years from the 1960s to the latest available date, even where data are missing.

Now that we have the data in the object df, we can start inspecting what we got.

head(df)

summary(df)

We can see that there is a lot of missing data; we can delete the missing observations with:

df <- na.omit(df)

Let’s say we also want to include a classification indicating whether or not a country belongs to the OECD. We classify manually based on the ISO3 code:

df$OCDE <- 0

df$OCDE[which(df$iso3c=="USA")] <- 1

df$OCDE[which(df$iso3c=="DEU")] <- 1

The first line creates a variable called OCDE in the object df, where we have our data. The second line identifies the rows where the variable iso3c in df equals “USA” and puts 1’s in the corresponding rows of the OCDE column. The same reasoning applies to the third line: where the iso3c code equals “DEU” (Germany), it inserts 1’s in the related rows of the OCDE column. Since we created the variable with 0’s as the default, all other non-OECD countries are correctly classified.

You can also improve this to contain characters instead of numbers by using:

df$OCDE <- "Non-OCDE"

df$OCDE[which(df$iso3c=="USA")] <- "OCDE"

df$OCDE[which(df$iso3c=="DEU")] <- "OCDE"

Now that we have cleaned the data and walked through an example of how to use the WDI package, we can introduce some nice descriptive statistics. For this purpose, we load the ggplot2 and GGally libraries.

#install.packages("ggplot2")

library(ggplot2)

#install.packages("GGally")

library(GGally)

Now let’s create a descriptive scatter plot for which we won’t need the year dimension. To do so, we filter our data down to only the necessary columns, creating a new object called df_a.

names(df)

df_a <- df[,c("EN.ATM.CO2E.PC", "NY.GDP.PCAP.PP.KD", "OCDE")]

The first line tells us the names of the columns, and the second creates df_a with only the variables we are interested in.

Now we use the ggpairs function from the GGally package, additionally distinguishing the OECD and non-OECD groups. To do so, we introduce the aes() function inside ggpairs():

ggpairs(df_a, aes(color= factor(OCDE)))

To obtain:

For the continuous variables, CO2 emissions (metric tons per capita) and GDP per capita, we can see the scatter plots below the main diagonal. On the main diagonal we find the distribution of each variable (densities for the continuous variables and proportions for the categorical one), and in the upper triangle we find the overall correlation and the correlations by group, where non-OECD (China and Brazil) and OECD (USA and Germany) are classified by the OCDE variable.

Notice that the third column does not display correlation coefficients for each group; instead it shows boxplots: the dark line in the middle of each box is the median, the box spans the interquartile range, and the whiskers and points mark the spread and atypical values. In the last row, we find the histograms, which are the main input for the densities on the main diagonal; you can see by eye that they are roughly similar.

With this article, you are now able to tap the power of the World Bank databases and their indicators, and to produce some elegant descriptive statistics in the form of a correlation matrix for your empirical work.

**References**

https://cran.r-project.org/web/packages/GGally/index.html

Central bank digital currencies, or CBDCs, are currently one of the topics du jour within circles of monetary authorities and financial market regulators. CBDCs are essentially digital banknotes that can be used by individuals to pay businesses, shops, or each other (i.e., retail CBDC), or between financial institutions to settle financial transactions or trades (i.e., wholesale CBDC) (BIS n.d.).

CBDCs are not necessarily new, but interest has gained considerable traction in recent months. The Bank of Finland’s Avant smart card system, launched in the 1990s, is deemed the world’s first retail CBDC (Grym 2020), although its underlying structure differs from that of the projects currently being developed. Fast forward: Sweden’s Riksbank’s e-krona became the first publicly announced work on retail CBDCs in 2017, while the Sand Dollar, launched by the Central Bank of the Bahamas in 2020, is generally considered the first live retail CBDC (Auer et al. 2021).

As it stands, the 2021 Bank for International Settlements (BIS) survey results indicate that 86% of the respondent central banks are actively researching the use of CBDCs, 60% are experimenting with the technology, and 14% are deploying pilot projects (Boar and Wehrli 2021; BIS n.d.). As of December 2022, at least 11 countries have rolled out a functional CBDC, namely the Bahamas, Jamaica, Nigeria, and the 8 countries under the Eastern Caribbean Central Bank, according to the live database put together by the Atlantic Council.

Sifting through the information contained in the Atlantic Council’s CBDC Tracker further reveals that retail and wholesale CBDC development is happening side-by-side globally. Motivations indicated largely revolve around the intent to move away from cash and facilitate financial stability and inclusion. The dataset also captures the strong interest in cross-border bridge projects, documenting at least 12 initiatives participated in by different economies in different parts of the world (i.e., not necessarily concentrated in one region), largely under the guidance of the BIS. Among others, these include the Multiple CBDC Bridge Project (mBridge) which involves the central banks of China, Hong Kong, the United Arab Emirates, and Thailand; Project Dunbar, which involves the central banks of Australia, Malaysia, Singapore, and South Africa; and Project Icebreaker, which involves the central banks of Norway, Israel, and Sweden.

All these initiatives naturally raise questions about potential changes in the financial sector’s infrastructure and credit flows, which will depend on the architecture and coverage of each CBDC project. Clear understanding and communication with economic stakeholders regarding the impact on competition, monetary policy, financial stability, the future of cash, the status of legal tender, and many other issues are thus necessary to gain user patronage and facilitate adoption (see Grym 2020). Understandably, even the role of private commercial banks as intermediaries can change significantly with CBDC systems in place. CBDCs are noted to allow digital payments even without a bank account, as payment is facilitated by a central bank–issued digital wallet (Denecker et al. 2022). Commercial banks are nonetheless seen to play a key role in CBDC rollouts, particularly in client onboarding as well as in the execution and recording of transactions.

**References**

Atlantic Council. 2022. Central Bank Digital Currency (CBDC) Tracker. Webpage. https://www.atlanticcouncil.org/cbdctracker/.

Auer, R., J. Frost, L. Gambacorta, C. Monnet, T. Rice, and H. S. Shin. 2021. Central Bank Digital Currencies: Motives, Economic Implications and the Research Frontier. *BIS Working Papers No 976*. Bank for International Settlements. Basel.

Boar, C. and A. Wehrli. 2021. Ready, Steady, Go? – Results of the Third BIS Survey on Central Bank Digital Currency. *BIS Papers No 114*. Bank for International Settlements. Basel.

Bank for International Settlements (BIS). n. d. BIS Innovation Hub work on central bank digital currency (CBDC). Webpage. Basel. https://www.bis.org/about/bisih/topics/cbdc.htm.

Denecker, O., A. d’Estienne, P-M Gompertz, and E. Sasia. 2022. Central Bank Digital Currencies: An Active Role for Commercial Banks. McKinsey & Company. New York.

Grym, A. 2020. Lessons learned from the world’s first CBDC. BoF Economic Review. No. 8. Bank of Finland. Helsinki.

Neo-banks, digital banks, and virtual banks are terms that have been used interchangeably to refer to banks that operate only in virtual space, without physical branches. The origins of neo-banks are rather unclear at this point, albeit many accounts indicate that they started after the global financial crisis of 2007-08, taking advantage of the financial digitalization initiatives launched in many countries.

Neo-banks have interestingly evolved in different ways. Some like Fidor and N26 have been neo-banks from the start while others have started out as something else. The latter group includes digital payment and trading platforms, digital remittance outlets, digital wallets, and credit companies that have taken up digital bank licenses. With such interest, the number of neo-banks has ballooned in recent years. As of January 2023, *The Financial Brand’s Neobank Tracker* has listed over 350 active neo-banks globally that offer an array of banking services.

Digital innovations like neo-banks have been largely hailed for their potential to bridge the financial participation gap. Notably, according to the World Bank’s most recent estimate (World Bank 2022), approximately 1.4 billion people worldwide remain unbanked. The ease with which these neo-banks can be accessed–simply through mobile phones in many cases–aided by the substantial broadening of internet coverage in recent years, is a critical feature that countries hope to exploit to expand the reach of formal financial services. These digital solutions have also been noted to have helped avert a much deeper economic difficulty in many countries at the height of the COVID-19 quarantines and movement control measures, which have disproportionately impacted the more vulnerable segments.

Nevertheless, evidence of the linkage between the proliferation of neo-banks and access to full banking services for the previously unbanked population is still arguably not well established. This is so even though there are ample accounts of increased usage of digital wallets to store and transfer funds among individuals who previously had no bank accounts (see for example: Rizwana, Singh, and Raveendra 2021; Riandani et al. 2022). It could be that neo-banks are largely pulling in clients who are already banked. It could also be that different forms of digital financial solutions have differing impacts on financial inclusion. Clearly, this needs an in-depth assessment for policies to be in line with conditions on the ground.

Apart from the unclear financial inclusion linkage, the IMF (2022) also worryingly finds that neo-banks “are growing in systemic importance in their local markets” and are associated with “higher risk-taking in retail loan originations without appropriate provisioning and under-pricing of credit risk; higher risk-taking in the securities portfolio; and an inadequate liquidity management framework,” which suggest that macro-prudential frameworks have to catch up.

**References**

International Monetary Fund. 2022. *Global Financial Stability Report April 2022*. Washington D.C.

Riandani, O., D. Sari, N. Rubiyanti, N. K. Moeliono, and M. Fakhri. 2022. The Relationship between Digital Wallet Adoption and Usage to Financial Inclusion. Proceedings of the International Conference on Industrial Engineering and Operations Management. Nsukka, Nigeria, 5 – 7 April, 2022.

Rizwana, M., P. Singh, and P. V. Raveendra. 2021. Promoting Financial Inclusion Through Digital Wallets: An Empirical Study with Street Vendors. Financial Inclusion in Emerging Markets. Palgrave Macmillan, Singapore.

The Financial Brand. Neobank Tracker. https://thefinancialbrand.com/list-of-digital-banks/ (accessed January 2023).

World Bank. 2022. COVID-19 Boosted the Adoption of Digital Financial Services. Feature Story. Washington D. C.

Some new evidence points out that heterogeneous effects derive from both of these situations as regards the behavioral response of welfare across continental regions.

We can witness a general pattern: when the Gini coefficient increases significantly (in particular, above 40), a reduction in Sen’s Welfare Index occurs. The slope of this relationship, however, is not uniform; it is steeper for Sub-Saharan Africa than for Latin America and the Caribbean.

Preliminary results show that not all continents exhibit statistically significant effects of inequality on welfare, and Latin America tends to be a special case in this relationship.

Published results excluding Venezuela, from the article of Riveros-Gavilanes (2020), point out that for this part of the American continent, inequality has much larger long-run effects on welfare than economic growth does. However, important discrepancies may exist in the short-run dynamics: for this case study, the short-run behavior only provides evidence that economic growth alters the growth of welfare, in contrast to the long run, where improvements in equality tend to increase welfare. The empirical strategy used the Human Development Index as the measure of welfare across the Latin American countries, real GDP per capita as the measure of income, and the complement of the Gini coefficient, which serves as the measure of “higher equality” in the regressions.
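A compact way to see the welfare-inequality link discussed here is Sen's real-income welfare index, commonly written W = μ(1 − G): mean income scaled by the complement of the Gini coefficient, which is exactly the "higher equality" term used above. A minimal Python sketch with invented incomes (the article's exact construction may differ):

```python
import numpy as np

def gini(incomes):
    """Gini coefficient (population formula) from an income vector."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

def sen_welfare(incomes):
    """Sen's welfare index: mean income times the complement of the Gini."""
    return float(np.mean(incomes)) * (1.0 - gini(incomes))

equal = [5.0, 5.0, 5.0, 5.0]     # perfectly equal distribution: G = 0, W = mean
unequal = [1.0, 2.0, 7.0, 10.0]  # same mean income, more dispersion: lower W
```

Holding mean income fixed, a higher Gini coefficient mechanically lowers W, which is the channel behind the Gini-welfare pattern described above.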

The relationship between welfare, inequality, and economic growth is a topic that requires further research (see the working paper in the repository at https://ms-researchhub.com/home/research/msrworkingpapers.html, which provides a better overview of the regional heterogeneous effects).

Seeking equality is a common objective of states and governments, but in recent times the task has become more complex, as more evasion and bending of the rules may occur. Inequality then comes at a cost to the welfare of the largest share of the population, since the effect is weighted by the proportions of the income distribution: the more individuals affected, the larger the harm to welfare.

In fact, micro-data research could be particularly important, from both theoretical and empirical standpoints, for assessing the harm to welfare, if we are somehow able to present a “sufficient statistic” for welfare, or at least a way to measure individual welfare. In the meantime, the theoretical discussion provides solid foundations for what may occur as income inequality rises around the world, particularly after the COVID-19 pandemic and the Russia-Ukraine conflict.

References

Riveros-Gavilanes, J. M. (2020). Estimación de la función de bienestar social de Amartya Sen para América Latina. Ensayos de Economía, 31(59), 13-40. https://revistas.unal.edu.co/index.php/ede/article/view/88235/82113

Authors of the two winning posts will receive USD 100 and USD 50, respectively, besides public exposure of their blogs/profiles and publicizing of their work over wide academic and social networks.

We invite students, researchers, public officials, and junior and senior academics to submit their work to John Mandor at info@ms-researchhub.com using the subject line “MSR economic perspectives contest and author names”. The posts can be under any of the following categories:

1- Technical/methodology related

2- Summary of an innovative research

3- Commentary/analytical/opinion post on global events and local or international economic challenges

The deadline to submit posts is 30 April 2022.

Contest participants should mind the following terms:

1- There is no limit on each post’s author board. If the post wins the competition, the prize will be distributed evenly among all authors.

2- Posts should be between 1000-2000 words.

3- By submission, authors claim that they have no conflict of interest and that they hold the copyright to publish their work.

4- Non-English posts should be submitted along with a concise and sound English translation.

5- Shortlisted posts will be published under their author names at MSR economic perspectives at https://lnkd.in/e6thRiNb

6- Deadline to send work (30 April 2022), shortlisting decision (mid-May 2022), results of final selection and public voting (end June 2022).

The key to understanding the endogeneity of the co-regressor, away from the classic perspective, derives from a covariance between the regressors X in a regression model that may differ from 0. This produces the topic called “M-bias”: essentially, a situation in which, if one regressor moves, the other regressor moves too (slightly different from near-multicollinearity, which is a linear relationship rather than actual joint movement!).

Let’s start with a linear model as always:

y = X1 B1 + X2 B2 + u

In this setup, we have two regressors written in matrix notation (to simplify the least-squares algebra), and we may derive the least-squares estimator as:

B̂ = (X'X)^(-1) X'y

If Cov(X1,X2)= 0 holds, the least-squares estimator for B1 will be unbiased!

Thus, we are interested in the case where Cov(X1,X2) ≠ 0. A clear implication is that when X2 changes, X1 changes as well. This biases the estimates of B1 and creates an “amplification bias” effect, because the causal channel is no longer clean when we try to isolate the true effect of X1 on Y.

Notice this is not the same as the linear relationship we know from near-multicollinearity. We may define X1 = a + bX2 + u, but a linear association alone does not establish the problem: a linear relationship may exist between two phenomena that do not cause each other, and X1 must also be potentially endogenous for the estimates to be biased. Hence, the difference between correlation and causation is important here.

Graphically this means that:

As you can note, X2 affects X1, and X1 in turn affects Y; there may also be unobserved effects/variables U affecting both X1 and Y. This is the case of the endogenous co-regressor: X2 drives the change in X1, so including X2 in the regression distorts the causal channel and amplifies the bias. (This is Model 10 from Cinelli et al., 2021.)

At this point, you may think: “What is this guy talking about? Doesn’t adding covariates make results more robust?” That can be true, but you must be careful to include only sensible covariates, not potentially endogenous co-regressors!

To demonstrate the bias amplification caused by an endogenous co-regressor, I will follow Cinelli’s example in R (Cinelli, 2020).

Let’s start creating some observations on R:

`n <- 1e5`

Now let’s create some random disturbances.

`u <- rnorm(n)`

And now let’s create our regressor x2 with also a random behavior:

`x2 <- rnorm(n)`

Let’s create a data generating process (DGP) where we know that Cov(x,x2) differs from 0; thus, if x2 changes, it automatically changes x, making it endogenous:

`x <- 2*x2 + u + rnorm(n)`

And let’s create the DGP for the dependent variable.

`y <- x + u + rnorm(n)`

Notice that y is a linear process that depends on x and u plus a randomly distributed disturbance. Hence, the component u affects both x and y. If we regress y on x, we will get a biased estimator.

`lm(y ~ x)`

```
#> Call:
#> lm(formula = y ~ x)
#>
#> Coefficients:
#> (Intercept) x
#> 0.00338 1.16838
```

And one may think, well, we may improve the estimates if we include x2, but look again!

`lm(y ~ x + x2) # even more biased`

```
#> Call:
#> lm(formula = y ~ x + x2)
#>
#> Coefficients:
#> (Intercept)            x           x2
#>    0.002855     1.495812    -0.985012
```

The coefficient for x has been greatly amplified in the point estimates by the bias-amplification effect!
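The magnitudes above match what the algebra predicts: with this DGP, Var(x) = 4 + 1 + 1 = 6 and Cov(x, u) = 1, so the short regression converges to 1 + 1/6 ≈ 1.167, matching the first output. Controlling for x2 leaves only the u-plus-noise variation in x (variance 2), so the coefficient converges to 1 + 1/2 = 1.5, matching the second. A numpy replication of the R example (a sketch with a fixed seed; the helper name is mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.standard_normal(n)               # unobserved confounder
x2 = rng.standard_normal(n)              # endogenous co-regressor
x = 2 * x2 + u + rng.standard_normal(n)  # Cov(x, x2) != 0 by construction
y = x + u + rng.standard_normal(n)       # true coefficient on x is 1

def ols(y, *regressors):
    """OLS coefficients [intercept, slopes...] via least squares."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_short = ols(y, x)      # slope close to 1 + 1/6
b_long = ols(y, x, x2)   # slope on x close to 1 + 1/2: bias amplified
```

The same covariance algebra also predicts the coefficient on x2 of about −1, which both the R output and this replication display.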

The conclusion is that you should double-check before adding controls to your model, and be sure to add only sensible controls; otherwise, you will bias the estimates of your regression model!

References:

Cinelli, (2020) Bad Controls and Omitted Variables, Taken from: https://stats.stackexchange.com/questions/196686/bad-controls-and-omitted-variables

Cinelli, C. ; Forney, A. ; Pearl, J. (2021) A Crash Course in Good and Bad Controls, Taken from: https://www.researchgate.net/publication/340082755_A_Crash_Course_in_Good_and_Bad_Controls

Pearl, J. (2009a). Causality. Cambridge University Press.

Shrier, I. (2009). Propensity scores. Statistics in Medicine, 28(8):1317–1318.

Pearl, J. (2009c). Myth, confusion, and science in causal analysis. UCLA Cognitive Systems Laboratory, Technical Report (R-348). URL: https://ucla.in/2EihVyD.

Sjolander, A. (2009). Propensity scores and m-structures. Statistics in Medicine, 28(9):1416–1420.

Rubin, D. B. (2009). Should observational studies be designed to allow lack of balance in covariate distributions across treatment groups? Statistics in Medicine, 28(9):1420–1423.

Ding, P. and Miratrix, L. W. (2015). To adjust or not to adjust? Sensitivity analysis of M-bias and butterfly-bias. Journal of Causal Inference, 3(1):41–57.

Pearl, J. (2015). Comment on Ding and Miratrix: “To adjust or not to adjust?”. Journal of Causal Inference, 3(1):59–60. URL: https://ucla.in/2PgOWNd.

My most recent publication, co-authored with Jeisson Riveros, is the article entitled “Implicaciones e incidencias de las políticas de gobierno abierto: el caso colombiano 2014 y 2016” (Riveros Gavilanes & Riveros Gavilanes, 2021).

The article takes as its theoretical foundations the statements of Oszlak & Kaufman (2014) on the features of open government, including its advantages and the innovations arising from policy implementation in this area, along with the New Public Management characteristics of Osborne & Gaebler (1994). It empirically reviews how open government policies correlate with the phenomena of corruption and transparency in Colombia for two specific years (2014 and 2016). The methodology involves panel data regression models with fixed and random effects as a way to examine these correlations empirically.

The dependent variable measuring the level of corruption is an index formulated by the organization “Transparency for Colombia,” which belongs to the “Transparency International” initiative (well known for constructing the Global Corruption Barometer). The index, called the “Transparent Municipal Index,” captures the risk of corruption at the local level.

The set of independent variables for the first model was chosen from three components of the Open Government Index published by the National Attorney of Colombia. These components are available for the same years and at the same local level, and are segmented into the components referred to below as OI, EI, and DI.

Some of the scatterplots of these independent variables against the Transparent Municipal Index for Colombia in the two years of the study reflected a positive correlation with the open government policies, and some of the visual results can be seen in the following graphs.

The panel data econometric model involved a linear specification of the Transparent Municipal Index (as the measure of the risk of corruption) explained by the open government components OI, EI, and DI, which led to the following specification:

ITM_it = B0 + B1 OI_it + B2 EI_it + B3 DI_it + C_i + u_it

Here *i* represents the local (municipal) entity and *t* the year of the observation. A one-way error component, C_i, was included to capture some of the unobserved heterogeneity of the local entities. In the regression outputs, two specifications were estimated, the second adding a Visibility variable for the public entities. The results were the following:

Of the two models estimated, the one including the Visibility variable greatly improved the original specification. The results show a significant positive effect, at the 5% level, of the components Organization of the Information and Exposition of the Information on the Transparent Municipal Index as the measure of the risk of corruption. Increases in these components raise transparency at the local level and, by the construction of the index, imply a decrease in the risk of corrupt practices. The model’s linear fit (not so important in this context) is also decent across individuals.
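The article's data are not reproduced here, but the mechanics of the fixed-effects (within) estimator used in studies like this can be sketched with synthetic panel data in Python (all numbers invented): demeaning each entity's observations removes the unobserved component C_i, so the slope is recovered even when the regressor is correlated with C_i.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n_entities, n_years = 100, 2   # e.g., municipalities observed in two years
entity = np.repeat(np.arange(n_entities), n_years)
c_i = rng.standard_normal(n_entities)[entity]        # unobserved heterogeneity C_i
x = rng.standard_normal(n_entities * n_years) + c_i  # regressor correlated with C_i
y = 2.0 * x + c_i + rng.standard_normal(n_entities * n_years)  # true slope = 2

df = pd.DataFrame({"entity": entity, "x": x, "y": y})

# Within (fixed-effects) transformation: demean x and y by entity, then OLS
w = df.groupby("entity")[["x", "y"]].transform(lambda s: s - s.mean())
wx, wy = w["x"].to_numpy(), w["y"].to_numpy()
beta_fe = float(wx @ wy) / float(wx @ wx)  # close to the true slope of 2
```

Pooled OLS on the same data would be pulled away from 2 by the correlation between x and C_i; the within transformation is what protects the panel estimates against that unobserved heterogeneity.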

**Concluding remarks**: open government practices help reduce the risk of corruption. Under the theoretical constructions in which public management can be viewed, inspected, and controlled by society, they provide a better setup for governance in local entities. Further research is required, but this study supports the traditional idea that a good and efficient government is one that has no secrets; such transparency helps decrease the risk of corruption, with which, for the sample of this study, it is empirically correlated.

**Reference of the article**

Riveros Gavilanes, J. M. & Riveros Gavilanes, J. A. (2021). Implicaciones e Incidencias de las Políticas de Gobierno Abierto. In Retos 2020: Gobierno Abierto. Instituto de Altos Estudios Nacionales IAEN (Ecuador). Retrieved from: https://www.researchgate.net/publication/356507453_Implicaciones_e_incidencias_de_las_politicas_de_gobierno_abierto_el_caso_colombiano_2014_y_2016

**General References**

Corporación Transparencia por Colombia CTC. Resultados 2015-2016. Recuperado de https://indicedetransparencia.org.co/2015-2016/ITM/Alcaldias

Oszlak, O., & Kaufman, E. (2014). Teoría y práctica del gobierno abierto: Lecciones de la experiencia internacional. Buenos Aires: OEA. Recuperado de https://redinpae.org/recursos/kaufman-oszlak.pdf

Osborne, D., y Gaebler, T. (1994). Un nuevo modelo de gobierno: cómo transforma el espíritu empresarial al sector público. Ciudad de México: Gernika.

Procuraduría General de la Nación PGN. Índice de Gobierno Abierto. Recuperado de https://www.procuraduria.gov.co/portal/Indice-de-Gobierno-Abierto.page

We can take the natural logarithm of asset prices,

ln(P_t) = ln(P_{t−1}) + ε_t,

to show that the natural logarithm of asset prices follows a *random walk* – the best forecast for prices is simply the current price. As such, applying regression methods from basic ARIMA models to advanced neural networks will fail – the models will simply repeat the last observation in the training data.
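The random-walk point can be checked with a quick simulation (a minimal sketch with simulated data, not the article's Bitcoin series): regressing today's log price on yesterday's recovers a slope near one, so the fitted model essentially predicts "tomorrow = today".

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a log-price random walk: ln P_t = ln P_{t-1} + eps_t
log_prices = np.cumsum(rng.normal(0.0, 0.02, size=500))

# Regress ln P_t on ln P_{t-1}; the slope comes out close to 1,
# meaning the best one-step forecast is just the last observation
y, x = log_prices[1:], log_prices[:-1]
slope = np.polyfit(x, y, 1)[0]
print(round(slope, 2))
```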

Instead, we can successfully predict asset prices by assuming they follow *Geometric Brownian Motion* (GBM):

dS_t = μ S_t dt + σ S_t dB_t

Here, the change in prices is given by the expected drift plus a volatility shock, both multiplied by the last observed price. Taking logs and using Itô's Lemma, one can write the solution to this differential equation as

S_t = S_0 exp((μ − σ²/2) t + σ B_t),

where *B_t* represents a Brownian motion process. The above formula is how we will forecast liquid asset prices in this article. For models of other asset types (i.e., illiquid assets), one may simply substitute the new dynamics into Itô's Lemma and derive a new formula for forecasting.
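The closed-form solution makes GBM easy to sample directly. A minimal vectorized sketch, with hypothetical parameters chosen only to illustrate the formula (this is not the article's simulator, which appears later):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters, for illustration only
S0, mu, sigma, T, n_paths = 100.0, 0.05, 0.2, 1.0, 10_000

# Sample S_T from the closed-form GBM solution:
# S_T = S0 * exp((mu - sigma^2/2) * T + sigma * B_T), with B_T ~ N(0, T)
B_T = rng.normal(0.0, np.sqrt(T), size=n_paths)
S_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * B_T)

# Sanity check: under GBM, E[S_T] = S0 * exp(mu * T)
print(S_T.mean(), S0 * np.exp(mu * T))
```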

We first import our packages:

```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats
import pmdarima
from fitter import Fitter
```

For today, we forecast Bitcoin using data from August 01, 2020 to November 15, 2021. Our data comes from Yahoo Finance.

```
liquid = pd.read_csv("/path/to/BTC-USD.csv")
liquid_returns = np.log(liquid.Close) - np.log(liquid.Close.shift(1))
```

We split both our returns and prices data into training and testing sets:

```
train, test = pmdarima.model_selection.train_test_split(liquid.Close.dropna(), train_size = 0.8)
training, testing = pmdarima.model_selection.train_test_split(liquid_returns.dropna(), train_size = 0.8)
```

Now, we obtain the distribution of our returns. Note that it is a common and erroneous practice to assume that returns follow a normal distribution in forecasting. This practice yields disastrous results – one needs proper knowledge of the distribution to forecast properly.
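One way to see why the normality assumption fails for returns is to compare tail behavior: the Laplace distribution has excess kurtosis of 3 versus 0 for the normal, so normal sampling systematically understates large moves. A quick check with scipy:

```python
from scipy import stats

# Excess kurtosis: 0 for the normal, 3 for the Laplace - the Laplace
# puts far more probability on large shocks, which return data exhibit
norm_kurt = float(stats.norm.stats(moments="k"))
laplace_kurt = float(stats.laplace.stats(moments="k"))
print(norm_kurt, laplace_kurt)  # 0.0 3.0
```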

```
f = Fitter(training, timeout = 120)
f.fit()
f.summary()
```

Using BIC as our criterion, we get the Laplace distribution as our best distribution.

`f.get_best(method = "bic")`
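For reference, `get_best` returns a dict mapping the distribution name to its fitted parameters, which plug straight into the matching `scipy.stats` distribution. The `loc`/`scale` values below are placeholders, not the fitted Bitcoin values:

```python
import scipy.stats

# Placeholder parameters in the shape Fitter returns:
# {'laplace': {'loc': ..., 'scale': ...}}
best = {"laplace": {"loc": 0.003, "scale": 0.03}}
params = best["laplace"]

# Draw shocks from the fitted distribution via scipy.stats
sample = scipy.stats.laplace.rvs(**params, size=5, random_state=0)
print(sample.shape)
```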

We now write our main function for performing the Monte Carlo simulation. This method uses random numbers to repeatedly sample future paths – in our case, we draw random numbers from a Laplace distribution, then multiply them by our volatility to obtain the diffusion term.

```
def GBMsimulatorUniVar(So, mu, sigma, T, N):
    """Simulate N price paths of length T from the discretized GBM solution."""
    S = np.zeros([T + 1, int(N)])
    S[0, :] = So
    for t in range(1, int(T) + 1):
        for i in range(0, int(N)):
            drift = mu - 0.5 * sigma**2
            Z = scipy.stats.laplace.rvs()  # Laplace shocks instead of normal ones
            diffusion = sigma * Z
            S[t][i] = S[t - 1][i] * np.exp(drift + diffusion)
    return S[1:]
```

Here, we forecast our prices with 1000 simulations for the length of our testing data. We use the average of simulations as our optimal forecast.

```
prices = GBMsimulatorUniVar(So = liquid.Close.iloc[len(training)], mu = training.mean(), sigma = training.std(), T = len(test), N = 1000)
newpreds = pd.DataFrame(prices).mean(axis = 1)
```

Taking the mean absolute percentage error (MAPE), we find around 6.8% forecasting error.

```
from sklearn.metrics import mean_absolute_percentage_error as mape
mape(test, newpreds)  # true values first, then predictions
```
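For clarity, MAPE averages the absolute errors relative to the true values. A toy example with made-up numbers:

```python
import numpy as np

# Illustrative values only
y_true = np.array([100.0, 200.0, 400.0])
y_pred = np.array([110.0, 180.0, 400.0])

# MAPE = mean(|y_true - y_pred| / |y_true|)
mape_manual = np.mean(np.abs(y_true - y_pred) / np.abs(y_true))
print(mape_manual)  # (0.10 + 0.10 + 0.00) / 3
```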

We now plot our forecast against the real test values.

```
axis = np.arange(len(train) + len(test))
plt.plot(axis[:len(train)], train, c = "blue")   # training prices
plt.plot(axis[len(train):], test, c = "blue")    # actual test prices
plt.plot(axis[len(train):], np.array(newpreds), c = "green")  # forecast
plt.show()
```

As one can see, we have relatively good results.

One should note that other assets may have different distributions. For instance, here are distribution fit results for Ethereum:

```
> f.get_best(method = "bic")
{'gennorm': {'beta': 1.126689300086524,
             'loc': 0.007308884923027554,
             'scale': 0.047110827282059724}}
```

As a rule of thumb, the distribution parameters returned by the fit need to be multiplied by 2.5 when sampling random numbers to obtain good forecast results. One must also use common sense in choosing among the proposed distributions – those designed for other kinds of data, such as the Gumbel (an extreme-value distribution), are wholly unsuitable for return data.

```
def GBMsimulatorUniVar(So, mu, sigma, T, N):
    """GBM simulator with generalized-normal shocks for the Ethereum fit."""
    S = np.zeros([T + 1, int(N)])
    S[0, :] = So
    for t in range(1, int(T) + 1):
        for i in range(0, int(N)):
            drift = mu - 0.5 * sigma**2
            # fitted beta scaled by the 2.5 rule of thumb
            Z = scipy.stats.gennorm.rvs(beta = 1.126689300086524 * 2.5)
            diffusion = sigma * Z
            S[t][i] = S[t - 1][i] * np.exp(drift + diffusion)
    return S[1:]
```

This forecast obtains around 8.6% forecasting error with Ethereum.

While some asset prices may follow random walks, using the proper tools to model them gives great forecasting results and accuracy. However, even with the best tools and distributions, no forecast will ever be great if a structural break exists in the data. Both our Ethereum and Bitcoin data started and ended during the COVID-19 pandemic – mixing pre- and post-pandemic data is always an ill-advised move.

Why do people so easily spend money showing off a new mobile, tablet, shirt, or wristwatch, yet are reluctant or hesitant to pay for training, a course, or even a book that could pay off and change their lives?

Why do you waste money decorating what others see "on" you, yet invest poorly, or never, in what you see "in" yourself?

86 billion brain cells control what you do, say, think, feel, and learn. They control your job, investments, and savings. They control your ability to make the right decisions at the right time. They control how you live. Your brain, like any muscle, will shrink if it is not trained.

If you think that what you learned in high school or university is enough to rest safely on, you are mistaken. Even graduates of the world's top universities need to keep building their knowledge to stay up to date with recent scientific discoveries and market trends in their fields.

Do not be fooled by the consumerism era; they fool you so they keep getting richer and more successful. Stand up, think rationally, and invest in your real property: your brain and your knowledge.
