Arbitrage Pricing Theory

Dr. Albert Danso | Financial Econometrics | ACFI3308


P16211847, p14024789, p14020881, p13232959

2578 words.

Contents:

Executive Summary
Arbitrage Pricing Theory/Introduction
Macroeconomic and Financial Factors affecting the stock price
Constructing the basic regression model and the assessment of its quality
Appendix
Group Meeting Minutes
References

Executive Summary


In this report we examine the Arbitrage Pricing Theory (APT) as a model for pricing stocks by taking into account the multiple risks associated with them. The research carried out shows that several macroeconomic and financial factors influence stock returns: global, political, cyclical, systemic, synergistic and industry factors, as well as the investment characteristics of the issuer and its position in the region. However, some of these variables affect stock returns more than others.

The APT model is based on the assumption that the investor aims to increase the profitability of the portfolio without an increase in risk whenever such an opportunity arises.

APT is an alternative to the CAPM (Capital Asset Pricing Model) developed by Sharpe and based on Markowitz's portfolio model. APT rests on fewer assumptions and is therefore less restrictive than the CAPM, even though it relies on more complicated mathematics. The CAPM can be expressed as a simple regression describing the relationship between risk and expected return, whereas the APT model is expressed as a multiple linear regression.
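
For comparison, the standard CAPM relation can be written in the same notation as the APT example given later in this report:

E(r) = rf + B(E(rm) - rf)

where rf is the risk-free rate, E(rm) is the expected return on the market portfolio and B is the security's sensitivity to movements in the market. The APT generalises this single-factor relation by allowing several factors, each with its own beta.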

Arbitrage Pricing Theory/Introduction


Arbitrage Pricing Theory (APT) is a multifactor mathematical model used to describe the relationship between the risk and the expected return of securities in financial markets. APT is grounded on the idea that the return of an asset can be forecast using the relationship between the asset in question and a number of common risk factors associated with it, which allows the expected return on a security and its movements in relation to macroeconomic factors to be computed. Accordingly, the results can be used to adjust the price of the security under consideration. (Staff, 2017)

There are three assumptions made when using this theory:

1. A factor model can be used to describe the relation between the risk and return of a security

2. Idiosyncratic risk can be diversified away

3. Efficient markets do not allow for persistent arbitrage opportunities

The arbitrage pricing theory can be set up to consider several risk factors, such as the business cycle, interest rates, inflation rates, and energy prices. These will be discussed further in this report.

The formula includes a variable for each factor and a factor beta for each, representing the security's sensitivity to movements in that factor. Because it includes more variables, the Arbitrage Pricing Theory can be considered potentially more accurate than the Capital Asset Pricing Model.

Example:

r = E(r) + B1F1 + B2F2 + e 

where:

r = return on the security
E(r) = expected return on the security
F1 = the first factor
B1 = the security's sensitivity to movements in the first factor
F2 = the second factor
B2 = the security's sensitivity to movements in the second factor
e = the idiosyncratic component of the security's return (Wilkinson, 2017)
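
As an illustration of how the formula works, the short Python sketch below evaluates a two-factor version of it; all of the numbers are made up for the example and are not taken from our data set.

# Illustrative two-factor APT calculation (hypothetical numbers, not from our data)
expected_return = 0.05               # E(r): expected return on the security
betas = [1.2, 0.7]                   # B1, B2: sensitivities to each factor
factor_surprises = [0.01, -0.02]     # F1, F2: realised factor surprises
idiosyncratic = 0.003                # e: security-specific component

# r = E(r) + B1*F1 + B2*F2 + e
r = expected_return + sum(b * f for b, f in zip(betas, factor_surprises)) + idiosyncratic
print(round(r, 4))                   # 0.051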

 

Macroeconomic and Financial Factors affecting the stock price


The impact of general (fundamental) factors does not depend on the type or specific characteristics of the shares or on the status of the issuer. These factors determine the overall macro- and microeconomic environment of the corporate securities market. They can be global, political, cyclical, systemic, synergistic and industry factors.

Global factors are caused by changes in the political and general economic situation. The greatest impact comes from the overall state of the global financial market, GDP, inflationary processes (Maysami and Koh 2000), which generate interest-rate and currency risks, and gross national product (GNP) (Al-Qenae, Carmen, and Wearing 2002).

Political factors include the general condition of society, its stability or susceptibility to crises, and government ownership (Alchian, 1965).

Cyclical factors reflect the phases of stock market development. The business cycle describes periods of rise and decline in the economy and has a significant impact on the assessment of the value of issuers' shares. It is characterised by trends in consumer expectations and consumer spending; actions taken by the government to reduce or increase the money supply; and the trajectory of interest rates (Perotti 1995).

Systemic factors are system-level characteristics of the stock market. These factors include the system of regulatory support, the system of management and regulation, and general economic indicators, which states use in making operational decisions about redirecting capital flows from the markets of some countries to the markets of others.

Industry factors include the prospects of the industry, the degree of industry risk, industry profitability, industrial production (Zhao 1999).

Special functional factors link stock returns to the financial condition of the issuing company (Campbell 2009). Much depends on the individual characteristics of the company, for example its credit quality and the liquidity of its shares. Proper behaviour of the company with respect to minority shareholders, for instance, can lower the discount rate and thereby increase the share price.

Special technical factors reflect the specific technical characteristics of shares and stock market conditions, as well as individual qualitative and quantitative parameters. The most important special technical factor is the overall condition of the market.

Quantitative factors include the size of the issue, the average bid and ask prices of the shares, dividends (Moldovsky 1995, Docking and Koch 2005), and the volume of exchange and over-the-counter (OTC) turnover.

Qualitative factors reflect the qualitative parameters of the shares: dividend growth, the rate of change of prices, and the volume of transactions.

Constructing the basic regression model and the assessment of its quality


Descriptive statistics

Descriptive statistics were calculated from our raw data; refer to Table 1 in the appendix. Descriptive statistics are used to systematise and describe the observed data. Describing the data is usually the initial stage of the analysis of quantitative data and a frequent first step before other statistical procedures or tests are applied (Brooks, 2008). These statistics do not allow us to draw conclusions beyond the data analysed in our report, so no conclusions regarding any hypothesis can be drawn from them. They are simply a way to present our data set.

 

Examining the model for multicollinearity

Table 2 (in the appendix) shows our results for multicollinearity. Multicollinearity refers to correlation among the independent variables in a multiple regression model; the term is usually invoked when some correlations are 'large', although the exact magnitude is not well defined (Wooldridge, 2011).

To detect multicollinearity among the factors, the correlation matrix of these factors can be analysed. If the correlation coefficients between some factors are close to 1, this indicates a close interrelation between them and hence the existence of multicollinearity. In our sample it is clear that there is no multicollinearity: the correlations between the variables are no larger than about 0.22 in absolute value (Statistics Solutions, 2017).
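
A minimal Python sketch of this check is given below; the random numbers stand in for our factor series, so only the procedure (not the output) corresponds to Table 2.

import numpy as np
import pandas as pd

# Hypothetical stand-in for the factor data; in the report the columns are the
# macro/financial series DPROD, DCREDIT, DINFLATION, DMONEY, DSPREAD and RTERM.
rng = np.random.default_rng(0)
factors = pd.DataFrame(rng.normal(size=(310, 6)),
                       columns=["DPROD", "DCREDIT", "DINFLATION",
                                "DMONEY", "DSPREAD", "RTERM"])

corr_matrix = factors.corr()          # pairwise correlations, as reported in Table 2
# Flag any pair whose absolute correlation is close to 1 (ignoring the diagonal)
suspect = (corr_matrix.abs() > 0.8) & (corr_matrix.abs() < 1.0)
print(corr_matrix.round(3))
print("Possible multicollinearity:", bool(suspect.to_numpy().any()))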

Least Squares Method

An important task in researching the interrelation of different quantities is to answer the question of how a change in one variable (or several) may influence the value of another. That is why we used the least squares method (Miller 2009).

According to Table 3 (in the appendix), we obtained the following equation for the constructed model (see the code sketch after the variable list):

Stock returns = -0.420469 + 1.335311 ERSANDP - 1.612283 DPROD - 3.38E-05 DCREDIT + 3.425778 DINFLATION - 0.020161 DMONEY + 7.866621 DSPREAD + 5.305805 RTERM

where:

  • ERSANDP - excess market return (S&P 500)
  • DPROD - industrial production
  • DCREDIT - consumer credit
  • DINFLATION - unexpected inflation
  • DMONEY - money supply
  • DSPREAD - BAA-AAA credit spread
  • RTERM - term structure
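
A sketch of how a model of this form can be estimated with statsmodels is shown below; the file name and column layout are assumptions about how the data might be stored, not part of our original output.

import pandas as pd
import statsmodels.api as sm

# Assumed layout: a CSV containing the dependent variable (ERAGL, the stock's excess
# return) and the seven factors used above; "macro_factors.csv" is a hypothetical name.
data = pd.read_csv("macro_factors.csv")
y = data["ERAGL"]
X = data[["ERSANDP", "DPROD", "DCREDIT", "DINFLATION", "DMONEY", "DSPREAD", "RTERM"]]
X = sm.add_constant(X)                 # adds the intercept term C

model = sm.OLS(y, X, missing="drop").fit()
print(model.summary())                 # coefficients, std. errors, t-statistics, R-squared, DW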

 

According to these results, we can draw the following conclusions:

R-squared

Also known as the coefficient of determination, it ranges from 0 to 1 (usually interpreted as a percentage, 0%-100%). It describes the strength of the relationship between the dependent variable (stock returns) and the independent variables (excess market return, industrial production, consumer credit, unexpected inflation, money supply, the BAA-AAA spread and the term structure). In our case the coefficient of determination is about 0.2 (20%), which means the model explains only about 20% of the variation in stock returns.
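
The coefficient of determination can be computed directly from the fitted values and residuals; a minimal sketch with made-up numbers:

import numpy as np

# Illustrative values: y is the observed series, y_hat the fitted values from a regression
y = np.array([1.0, -0.5, 2.0, 0.3, -1.2])
y_hat = np.array([0.8, -0.2, 1.5, 0.1, -0.9])

ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
r_squared = 1 - ss_res / ss_tot          # share of variation explained by the model
print(round(r_squared, 3))               # about 0.92 for these made-up numbers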

The t-values and p-values help to identify which variables are most significant in our sample: a variable is more significant when its p-value is close to 0 and its t-value is large in absolute terms. Thus, the most significant variables in our sample are:

• Excess market return (ERSANDP): p-value 0.00, t-value 8.27

• Unexpected inflation (DINFLATION): p-value 0.12, t-value 1.54

• Term structure (RTERM): p-value 0.06, t-value 1.86.

We can also check our results for significance using stepwise regression, which is used to determine which variables are "the most important". Unexpected inflation, the excess market return and the term structure were the variables retained. This confirms the results obtained with least squares using the t-values and p-values; refer to Table 3 in the appendix.
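
Stepwise procedures can be implemented in several ways; the sketch below shows one simple backward-elimination variant (drop the least significant variable and refit until all p-values fall below a threshold), run here on hypothetical data rather than our own series.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data standing in for the factor set used in the report
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(310, 3)), columns=["ERSANDP", "DINFLATION", "RTERM"])
y = 1.3 * X["ERSANDP"] + rng.normal(scale=5.0, size=310)

def backward_eliminate(y, X, threshold=0.10):
    """Drop the least significant regressor and refit until all p-values < threshold."""
    X = sm.add_constant(X)
    while True:
        fit = sm.OLS(y, X).fit()
        pvals = fit.pvalues.drop("const")
        if pvals.empty or pvals.max() < threshold:
            return fit
        X = X.drop(columns=pvals.idxmax())    # remove the weakest variable

result = backward_eliminate(y, X)
print(result.params)                          # variables retained by the procedure

The 0.10 threshold here is only a convention for the sketch; the choice of cut-off changes which variables survive.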

The benefit of least squares is the relative simplicity and universality of its computing procedures. However, when describing an economic phenomenon by means of a mathematical equation, the forecast will be more accurate only over a short period of time, and the regression equation should be re-estimated as new information arrives.

 

Examining the model for the existence of autocorrelation

The Durbin-Watson test is a measure of autocorrelation (also called serial correlation) in the residuals from a regression analysis. Autocorrelation is the similarity of a time series over successive time intervals. It can lead to underestimated standard errors and can cause predictors to appear significant when they are not. The Durbin-Watson test looks for a specific type of serial correlation, the autoregressive process (Statistics How To, 2017). However, it tests only for first-order serial correlation, it is inconclusive if the computed value lies between the lower and upper critical limits, and it cannot be applied in models with lagged dependent variables.

The DW statistic can be seen in Table 3 in the appendix. DW values lie in the interval from 0 to 4. When there is no autocorrelation, DW is close to 2; values near 0 indicate positive autocorrelation and values near 4 indicate negative autocorrelation. In our sample the DW value is 2.21, which means there is no evidence of autocorrelation.
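
The statistic itself is straightforward to compute from the regression residuals; a minimal sketch (the residuals below are placeholders for those behind Table 3):

import numpy as np
from statsmodels.stats.stattools import durbin_watson

# Placeholder residuals; in the report these come from the least squares fit in Table 3
resid = np.array([0.5, -0.3, 0.2, -0.1, 0.4, -0.6, 0.1])

# DW = sum of squared first differences of the residuals / sum of squared residuals
dw = durbin_watson(resid)
print(round(dw, 2))     # values near 2 suggest no first-order autocorrelation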

Autocorrelation is a characteristic of data in which the correlation between values of the same variable is based on related observations (Statistics Solutions, 2017). The autocorrelation function (Box and Jenkins, 1976) can be used for two purposes: to detect non-randomness in the data, and to identify an appropriate time series model if the data are not random.

Normality test

Figure 1: Normality test

Figure 1 illustrates a non-normal distribution. Our skewness is negative (-2.7), which means the distribution is skewed to the left: most of the mass of the histogram lies to the right, with the tallest columns around 0 and 2.5, while the smallest columns, at about -50 and -65, form a long left tail. The standard deviation measures how concentrated the data are around the mean; in our sample it is 12.46, meaning that, on average, the values in the data set lie far from the mean. Kurtosis reflects the sharpness of the peak; in our case it is high (14.03), which indicates a sharp peak (a leptokurtic distribution). The p-value of the normality test is 0, so the null hypothesis of normality is rejected.
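
Assuming the reported statistics come from a Jarque-Bera-type normality test, they can be reproduced along the following lines (the return series below is a random placeholder, not our data):

import numpy as np
from scipy import stats

# Placeholder series; in the report the test is applied to the monthly stock returns
rng = np.random.default_rng(2)
returns = rng.normal(size=310)

skew = stats.skew(returns)
kurt = stats.kurtosis(returns, fisher=False)      # raw kurtosis; equals 3 for a normal
jb_stat, jb_pvalue = stats.jarque_bera(returns)

print(round(skew, 2), round(kurt, 2), round(jb_pvalue, 3))
# A p-value near 0 rejects the null hypothesis that the returns are normally distributed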


Examination for heteroscedasticity using the White test

Please refer to Table 4 in the appendix.

Heteroscedasticity – The variance of the error term, given the explanatory variables, is not constant.

White Test

The advantage of this test is that, unlike others, it does not rely on the normality assumption and is therefore easy to implement. White (1980) proposed a test for heteroscedasticity in which the squares and cross products of all the independent variables are added to an auxiliary regression. It is intended to detect forms of heteroscedasticity that invalidate the usual OLS standard errors and test statistics (Wooldridge, 2011).

The test statistic tells us whether the assumption of homogeneous variance is valid. The chi-square statistic is usually built from a sum of squared errors or from the sample variance. Under the assumption of independent, normally distributed data, chi-square tests work well in many cases because of the central limit theorem. The chi-square statistic is then used to try to reject the null hypothesis.

The p-value of our chi-square statistic is greater than 0.05 (it is above 0.09), so we do not reject the null hypothesis, which means there is no evidence of heteroscedasticity.
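
A sketch of the White test using statsmodels is given below; the data are simulated stand-ins, so only the procedure mirrors Table 4.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

# Simulated data in place of our factor regression (Table 3)
rng = np.random.default_rng(3)
X = sm.add_constant(rng.normal(size=(310, 2)))
y = X @ np.array([0.5, 1.3, -0.8]) + rng.normal(size=310)

fit = sm.OLS(y, X).fit()
# The White test regresses squared residuals on the regressors, their squares and cross products
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(fit.resid, fit.model.exog)

print(round(lm_pvalue, 3))   # a p-value above 0.05 means we do not reject homoscedasticity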

 

Regression models for the period before the crisis (2003-2006) and the period after it (2009-2012)

Please refer to Tables 6 and 7 in the appendix.

The t-statistic is computed by dividing the estimated value of the parameter by its standard error. The t-statistic of ERSANDP is 6.97, so we can conclude that the true value of this parameter is very unlikely to be zero.

In statistics, the t-statistic is the ratio of the departure of an estimated parameter from its hypothesised value to its standard error: t-statistic = estimated coefficient / standard error. For a given coefficient, the larger the standard error, the smaller the t-statistic, so the two move in opposite directions. The t-statistic is used to test whether a parameter is significantly different from zero. According to the results in the two tables, the standard error of DSPREAD is more than 40.51 in 2003-2006 but only 6.36 in 2009-2012, so the coefficient is estimated far less precisely in the earlier period.
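
As a worked example of this calculation using the full-sample output in Table 3:

# t-statistic = estimated coefficient / standard error (ERSANDP row of Table 3)
coefficient = 1.335311
std_error = 0.161412
t_stat = coefficient / std_error
print(round(t_stat, 2))      # about 8.27, matching the reported t-statistic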

The Prob(t) value is the probability of obtaining the estimated value of the parameter if the true parameter value were zero. The smaller the Prob(t) value, the more significant the parameter. In 2009-2012 the Prob(t) of ERSANDP is zero, which means this parameter is very important.

With regard to R-squared, the coefficient of determination is 0.14 in 2003-2006, which means that the explanatory power of the model is low: it explains little of the variability of the response data around its mean. However, R-squared is 0.58 in 2009-2012, which indicates that the relationship between the independent variables and the dependent variable is above average.

The adjusted R-squared is 0.51 in 2009-2012, meaning that 51% of the variation in the dependent variable is explained by the set of independent variables once the number of regressors and observations is taken into account. However, the figure is 0 in 2003-2006, which means that, after this adjustment, the model explains none of the variation. Although R-squared provides an estimate of how strong the relationship between the model and the response variable is, it does not provide a formal hypothesis test for this relationship.

The F-test of overall significance determines whether this relationship is statistically significant. The Prob(F) statistic tests the overall significance of the regression model; specifically, it tests the null hypothesis that all of the regression coefficients are equal to zero. The value in the 2009-2012 table is 0, which means the regression does have some validity in fitting the data. However, the value is 0.47 in 2003-2006, which suggests that the independent variables behave as if they were purely random with respect to the dependent variable.
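
The overall F-statistic is a function of R-squared, the number of regressors and the number of observations; applied to the full-sample figures in Table 3 (k = 7 regressors, n = 310 observations) it reproduces the reported value:

# F = (R^2 / k) / ((1 - R^2) / (n - k - 1))
r_squared = 0.198732     # from Table 3
k = 7                    # regressors, excluding the constant
n = 310                  # included observations

f_stat = (r_squared / k) / ((1 - r_squared) / (n - k - 1))
print(round(f_stat, 2))  # about 10.70, matching the F-statistic in Table 3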

The Durbin-Watson test for autocorrelation is a statistic that indicates the likelihood that the deviations (errors) of the regression contain a first-order autoregressive component. Regression models assume that the error deviations are uncorrelated. Values of the Durbin-Watson statistic well below 2 (for example, less than 0.8) usually indicate the presence of positive autocorrelation. In both of our result tables the statistic is above this level, at 2.16 in 2003-2006 and 1.99 in 2009-2012, so the conclusion is that there is no evident autocorrelation (Hilmer and Hilmer).


Appendix


Descriptive statistics: Table 1


Multicollinearity: Table 2

             DPROD      DCREDIT    DINFLATION  DMONEY     DSPREAD    RTERM
DPROD         1.000000   0.142500  -0.124890   -0.123089  -0.060911   0.004730
DCREDIT       0.142500   1.000000   0.042556   -0.006035   0.015226   0.001192
DINFLATION   -0.124890   0.042556   1.000000   -0.070794  -0.220462  -0.084468
DMONEY       -0.123089  -0.006035  -0.070794    1.000000   0.215841  -0.072985
DSPREAD      -0.060911   0.015226  -0.220462    0.215841   1.000000   0.013277
RTERM         0.004730   0.001192  -0.084468   -0.072985   0.013277   1.000000

Least Squares Method: Table 3

Dependent Variable: ERAGL
Method: Least Squares
Date: 02/03/17   Time: 17:38
Sample (adjusted): 1987M03 2012M12
Included observations: 310 after adjustments

Variable       Coefficient   Std. Error   t-Statistic   Prob.
C              -0.420469     0.915971     -0.459042     0.6465
ERSANDP         1.335311     0.161412      8.272707     0.0000
DPROD          -1.612283     1.478829     -1.090243     0.2765
DCREDIT        -3.38E-05     8.52E-05     -0.396829     0.6918
DINFLATION      3.425778     2.225999      1.538985     0.1249
DMONEY         -0.020161     0.039800     -0.506572     0.6128
DSPREAD         7.866621     7.823062      1.005568     0.3154
RTERM           5.305805     2.846990      1.863654     0.0633

R-squared             0.198732    Mean dependent var     -0.626839
Adjusted R-squared    0.180160    S.D. dependent var      13.92396
S.E. of regression    12.60745    Akaike info criterion   7.931921
Sum squared resid     48002.26    Schwarz criterion       8.028349
Log likelihood       -1221.448    Hannan-Quinn criter.    7.970469
F-statistic           10.70038    Durbin-Watson stat      2.206502
Prob(F-statistic)     0.000000
