After completing this reading, you should be able to:
- Distinguish between the assumptions of single and multiple regression.
- Interpret regression coefficients in a multiple regression.
- Interpret goodness-of-fit measures for single and multiple regressions, including \(R^2\) and adjusted \(R^2\).
- Construct, apply, and interpret joint hypothesis tests and confidence intervals for multiple coefficients in a regression.
Unlike simple linear regression, multiple regression simultaneously considers the influence of several explanatory variables on a response variable Y. In other words, it permits us to evaluate the effect of more than one independent variable on a given dependent variable.
The form of the multiple regression model (equation) is given by:
$$Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \dots + \beta_k X_{ki} + \epsilon_i, \quad i = 1, 2, \dots, n$$
Intuitively, the multiple regression model has k slope coefficients and k+1 regression coefficients in total (the k slopes plus the intercept). In practice, statistical software (such as Excel or R) is used to estimate the multiple regression model.
Interpreting the Multiple Regression Coefficients
The slope coefficient \(\beta_j\) measures the change in the dependent variable Y when the independent variable \(X_j\) changes by one unit while the other independent variables are held constant. This interpretation is quite different from that of a linear regression with one independent variable: the effect of each variable is explored while keeping the other independent variables constant.
For instance, a linear regression model with one independent variable could be estimated as \(\hat{Y}=0.6+0.85X_1\). The slope coefficient, in this case, is 0.85, which implies that a one-unit increase in \(X_1\) results in a 0.85-unit increase in the dependent variable Y.
Now, assume that we add a second independent variable to the regression so that the estimated equation is \(\hat{Y}=0.6+0.85X_1+0.65X_2\). A unit increase in \(X_1\) will no longer be associated with a 0.85-unit increase in Y unless \(X_1\) and \(X_2\) are uncorrelated. Therefore, we interpret 0.85 as follows: a one-unit increase in \(X_1\) leads to a 0.85-unit increase in the dependent variable Y, holding \(X_2\) constant.
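To see the holding-other-variables-constant interpretation numerically, here is a minimal sketch in Python using numpy. The data are simulated; only the coefficient values (0.6, 0.85, 0.65) are taken from the illustration above, and everything else is an assumption for the demonstration.

```python
import numpy as np

# Simulated data for Y = 0.6 + 0.85*X1 + 0.65*X2 + noise; the coefficient
# values come from the illustration above, everything else is made up.
rng = np.random.default_rng(42)
n = 500
X1 = rng.normal(size=n)
X2 = 0.5 * X1 + rng.normal(size=n)            # X1 and X2 are correlated
Y = 0.6 + 0.85 * X1 + 0.65 * X2 + 0.1 * rng.normal(size=n)

# OLS on the design matrix [1, X1, X2]
X = np.column_stack([np.ones(n), X1, X2])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# beta[1] estimates the effect on Y of a one-unit change in X1
# *holding X2 constant*, not the total effect of moving X1 (which
# would drag the correlated X2 along with it).
print(np.round(beta, 2))
```

The estimated vector lands close to (0.6, 0.85, 0.65) because each slope is a partial effect, even though \(X_1\) and \(X_2\) are correlated here.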
OLS Estimators for the Multiple Regression Parameters
Although the multiple regression parameters can be estimated by hand, doing so is challenging since it involves a large amount of algebra and the use of matrices. We can, however, build a foundation of understanding using the multiple regression model with two explanatory variables.
Consider the following multiple regression equation.
$$Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i}+\epsilon_i$$
The OLS estimator of β_1 is estimated as follows:
The first step is to regress \(X_1\) on \(X_2\) to get the residual of \(X_{1i}\), given by:
$$\epsilon_{X_{1i}}=X_{1i}-\hat{\alpha}_{0}-\hat{\alpha}_{1}X_{2i}$$
Where \(\hat{\alpha}_0\) and \(\hat{\alpha}_1\) are the OLS estimators from the regression of \(X_{1i}\) on \(X_{2i}\).
The next step is to regress Y on \(X_2\) to get the residuals of \(Y_i\), given by:
$$\epsilon_{Y_i}=Y_i-\hat{\gamma}_{0}-\hat{\gamma}_{1}X_{2i}$$
Where \(\hat{\gamma}_0\) and \(\hat{\gamma}_1\) are the OLS estimators from the regression of \(Y_i\) on \(X_{2i}\). The final step is to regress the residuals of Y on the residuals of \(X_1\) \((\epsilon_{Y_i}\) on \(\epsilon_{X_{1i}})\) to get:
$$ϵ_{Y_i }=\hat{β} _1 ϵ_{X_{1i} }+ϵ_i $$
Note that we do not include a constant because the expected values of \(\epsilon_{Y_i}\) and \(\epsilon_{X_{1i}}\) are both 0. The purpose of the first and second regressions is to remove the effect of \(X_2\) from both Y and \(X_1\) by splitting each variable into a fitted value, which is correlated with \(X_2\), and a residual error, which is uncorrelated with \(X_2\); the two residuals obtained are therefore uncorrelated with \(X_2\). The final regression thus relates the components of Y and \(X_1\) that are uncorrelated with \(X_2\).
The OLS estimator of \(\beta_2\) can be obtained analogously by exchanging the roles of \(X_2\) and \(X_1\) in the process above. By repeating this process, we can estimate a k-variable model such as:
$$Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \dots + \beta_k X_{ki} + \epsilon_i, \quad i = 1, 2, \dots, n$$
Most of the time, this is done using a statistical package such as Excel or R.
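The three-step partialling-out procedure above (the Frisch-Waugh-Lovell result) can be verified numerically. Below is a sketch with simulated data and illustrative coefficients; it checks that the residual-on-residual slope matches the slope on \(X_1\) from the full multiple regression:

```python
import numpy as np

# Numerical check of the partialling-out (Frisch-Waugh-Lovell) steps above.
# The data and coefficients are simulated purely for illustration.
rng = np.random.default_rng(0)
n = 200
X2 = rng.normal(size=n)
X1 = 0.6 * X2 + rng.normal(size=n)            # X1 correlated with X2
Y = 1.0 + 2.0 * X1 - 1.5 * X2 + rng.normal(size=n)

def resid(y, x):
    """Residuals from an OLS regression of y on a constant and x."""
    A = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coef

e_x1 = resid(X1, X2)   # step 1: residual of X1 given X2
e_y = resid(Y, X2)     # step 2: residual of Y given X2

# Step 3: regress e_y on e_x1 without a constant (both residuals have mean 0)
b1_fwl = (e_x1 @ e_y) / (e_x1 @ e_x1)

# Slope on X1 from the full multiple regression of Y on X1 and X2
A_full = np.column_stack([np.ones(n), X1, X2])
b_full, *_ = np.linalg.lstsq(A_full, Y, rcond=None)

print(abs(b1_fwl - b_full[1]) < 1e-8)  # True: the two estimates coincide
```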
Assumptions of the Multiple Regression Model
Suppose that we have n observations of the dependent variable (Y) and the independent variables \(X_1, X_2, \dots, X_k\), and we wish to estimate the equation:
$$Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \dots + \beta_k X_{ki} + \epsilon_i, \quad i = 1, 2, \dots, n$$
For us to make valid inferences from the above equation, we need the classical normal multiple linear regression model assumptions:
- The relationship between the dependent variable, Y, and the independent variables, \(X_1, X_2, \dots, X_k\), is linear.
- The observations are iid. Moreover, no exact linear relationship exists between two or more of the independent variables \(X_1, X_2, \dots, X_k\) (no perfect multicollinearity).
- The expected value of the error term, conditional on the independent variables, is 0: \(E(\epsilon \mid X_1, X_2, \dots, X_k) = 0\).
- The variance of the error term is the same for all observations. That is, \(E(\epsilon_i^2)=\sigma_\epsilon^2,\; i=1,2,\dots,n\) (the homoskedasticity assumption). This assumption enables us to estimate the distribution of the regression coefficients.
- The error term \(\epsilon\) is uncorrelated across observations. That is, \(E(\epsilon_i \epsilon_j)=0\) for all \(i\neq j\).
- The error term \(\epsilon\) is normally distributed. This allows us to test hypotheses about the regression.
- There are no large outliers, so that \(E(X_{ji}^{4})<\infty\) for all \(j=1,2,\dots,k\).
These assumptions are almost the same as those of linear regression with one independent variable, except that the second assumption is extended to rule out exact linear relationships between the independent variables (perfect multicollinearity).
Measures of Goodness of Fit
The goodness of fit of a regression is measured using the coefficient of determination (\(R^2\)) and the adjusted coefficient of determination (\(\bar{R}^2\)).
The Coefficient of Determination (\(R^2\))
Recall that the standard error of estimate measures the accuracy of forecasts made using a regression model. However, it does not tell us how well the independent variable explains the dependent variable. The coefficient of determination corrects this shortcoming.

The coefficient of determination measures the proportion of the total variation in the dependent variable that is explained by the independent variable. We can calculate the coefficient of determination in two ways:
1. Squaring the Correlation Coefficient between the Dependent and Independent Variables.
The coefficient of determination can be computed by squaring the correlation coefficient (r) between the dependent and independent variables. That is:
$$R^{2}=r^{2}$$
Recall that:
$$r=\frac{Cov(X,Y)}{{\sigma}_{X}{\sigma}_{Y}}$$
Where:
\(Cov(X,Y)\) = covariance between the two variables, X and Y
\(\sigma_X\) = standard deviation of X
\(\sigma_Y\) = standard deviation of Y
However, this method only accommodates regression with one independent variable.
Example: Calculating the Coefficient of Determination using Correlation Coefficient
The covariance between the money supply growth rate (dependent variable, Y) and the inflation rate (independent variable, X) is 0.0007565. The standard deviation of the dependent (explained) variable is 0.05, and that of the independent variable is 0.02. A regression analysis over ten years was conducted on these variables. We need to calculate the coefficient of determination.
Solution
We know that:
$$r=\frac{Cov(X,Y)}{{\sigma}_{X}{\sigma}_{Y}}=\frac{0.0007565}{0.05\times 0.02}=0.7565$$
So, the coefficient of determination is given by:
$$r^{2}=0.7565^2 =0.5723=57.23\%$$
So, in this regression, the inflation rate explains roughly 57.23% of the variation in the money supply growth rate over the ten years.
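The arithmetic of this example can be reproduced in a few lines of Python:

```python
# Reproducing the worked example: r = Cov(X, Y) / (sigma_X * sigma_Y)
cov_xy = 0.0007565
sigma_x = 0.05
sigma_y = 0.02

r = cov_xy / (sigma_x * sigma_y)
r_squared = r ** 2
print(round(r, 4), round(r_squared, 4))  # 0.7565 0.5723
```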
2. Method for a Regression Model with One or More Independent Variables
If no regression model were available, our best estimate for any observation of the dependent variable would be the mean, \(\bar{Y}\). Alternatively, instead of using \(\bar{Y}\) as an estimate of \(Y_i\), we can predict an estimate using the regression equation. The resulting estimate is denoted as:
$$Y_i = \hat{\beta}_0 + \hat{\beta}_1 X_{1i} + \hat{\beta}_2 X_{2i} + \dots + \hat{\beta}_k X_{ki}+\hat{\epsilon}_i=\hat{Y}_i+\hat{\epsilon}_i$$
So that:
$$Y_i=\hat{Y} _i+\hat{ϵ} _i$$
Now, subtract the mean of the dependent variable, \(\bar{Y}\), in the above equation, then square and sum both sides so that:
$$\sum_{i=1}^{n}{\left(Y_i-\bar{Y}\right)^2}=\sum_{i=1}^{n}{\left(\hat{Y}_i-\bar{Y}+\hat{\epsilon}_i\right)^2}$$
$$\sum_{i=1}^{n}{\left(Y_i-\bar{Y}\right)^2}=\sum_{i=1}^{n}{\left(\hat{Y}_i-\bar{Y}\right)^2}+2\sum_{i=1}^{n}{\hat{\epsilon}_{i} \left(\hat{Y}_i-\bar{Y}\right)}+\sum_{i=1}^{n}{\hat{\epsilon}_{i}^2}$$
Note that:
$$2\sum_{i=1}^{n}{\hat{\epsilon}_{i} \left(\hat{Y}_i-\bar{Y}\right)}=0$$
since the sample correlation between \(\hat{Y}_i\) and \(\hat{\epsilon}_i\) is 0. The expression, therefore, reduces to:
$$\sum_{i=1}^{n}{\left(Y_i-\bar{Y}\right)^2}=\sum_{i=1}^{n}{\left(\hat{Y}_i-\bar{Y}\right)^2}+\sum_{i=1}^{n}{\hat{\epsilon}_{i}^2}$$
But
$$\hat{\epsilon}^{2}_{i}=\left(Y_i-\hat{Y}_i\right)^2$$
So that:
$$\sum_{i=1}^{n}{\hat{\epsilon}_{i}^{2}}=\sum_{i=1}^{n}{\left(Y_i-\hat{Y}_i\right)^2}$$
Therefore,
$$\sum_{i=1}^{n}{\left(Y_i-\bar{Y}\right)^2}=\sum_{i=1}^{n}{\left(\hat{Y}_i-\bar{Y}\right)^2}+\sum_{i=1}^{n}{\left(Y_i-\hat{Y}_i\right)^2}$$
If the regression is useful for predicting \(Y_i\), then the error from predicting \(Y_i\) using \(\hat{Y}_i\) should be smaller than the error from predicting \(Y_i\) using \(\bar{Y}\).
Now let:
Explained Sum of Squares (ESS) \(=\sum_{i=1}^{n} {\left(\hat{Y}_i-\bar{Y}\right)^2}\)
Residual Sum of Squares (RSS) \(=\sum_{i=1}^{n} {\left(Y_i-\hat{Y}_i\right)^2}\)
Total Sum of Squares (TSS) \(=\sum_{i=1}^{n} {\left(Y_i-\bar{Y}\right)^2}\)
Then:
$$TSS=ESS +RSS$$
If we divide both sides by TSS, we get:
$$1=\frac{ESS}{TSS} +\frac{RSS}{TSS} $$
$$\Rightarrow\frac{ESS}{TSS}=1-\frac{RSS}{TSS}$$
Now, recall that the coefficient of determination is the fraction of the total variation in the dependent variable that is explained by the regression. Denoted by \(R^2\), the coefficient of determination is given by:
$$R^2 =\frac{\text{Explained Variation}}{\text{Total Variation}}=\frac{ESS}{TSS}=1-\frac{RSS}{TSS}$$
If a model explains none of the observed variation in the data, then \(R^2=0\). On the other hand, if the model perfectly describes the data, then \(R^2=1\). Otherwise, \(R^2\) lies between 0 and 1 and is always nonnegative. For instance, in the above example, \(R^2\) is approximately 0.57, and thus the regression explains a little over half of the variation in the dependent variable.
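The decomposition TSS = ESS + RSS, and the equivalence of the two \(R^2\) formulas, can be checked numerically. The sketch below uses simulated single-regressor data purely for illustration:

```python
import numpy as np

# Numerical check of TSS = ESS + RSS and of the two equivalent R^2 formulas.
# Simulated single-regressor data, purely for illustration.
rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)

# Fit the regression and form the fitted values
A = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

tss = np.sum((y - y.mean()) ** 2)
ess = np.sum((y_hat - y.mean()) ** 2)
rss = np.sum((y - y_hat) ** 2)

print(bool(np.isclose(tss, ess + rss)))            # True: the decomposition holds
print(bool(np.isclose(ess / tss, 1 - rss / tss)))  # True: ESS/TSS = 1 - RSS/TSS
```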
Limitations of \(R^2\)
- When the number of explanatory variables is increased, the value of \(R^2\) always increases, even if the new variable has an insignificant effect on the dependent variable. For instance, if a regression model with one explanatory variable is modified to have two explanatory variables, the new \(R^2\) is greater than or equal to that of the single-explanatory-variable model.
- \(R^2\) cannot be compared across models with different dependent variables. For instance, we cannot compare the \(R^2\) of a model of \(Y_i\) with that of a model of \(\ln Y_i\).
- There is no standard value of \(R^2\) that is considered good because its value depends on the nature of the data involved.
Considering the first limitation, we now discuss the adjusted \(R^2\).
The Adjusted \(R^2\)
Denoted by \(\bar{R}^2\), the adjusted \(R^2\) is a measure of goodness of fit that does not automatically increase when an independent variable is added to the model; that is, it is adjusted for degrees of freedom. Note that \(\bar{R}^2\) is produced by most statistical software. The relationship between \(R^2\) and \(\bar{R}^2\) is given by:
$$\bar{R}^2 =1-\frac{\left(\frac{RSS}{n-k-1}\right)}{\left(\frac{TSS}{n-1}\right)}$$
$$=1-\left(\frac{n-1}{n-k-1}\right)\left(1-R^2\right)$$
Where
n=number of observations
k = number of independent variables (slope coefficients)
The adjusted R-squared can increase, but only if the new variable improves the model by more than would be expected by chance. If the added variable improves the model by less than expected by chance, the adjusted R-squared decreases.
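A small Python helper makes the adjustment concrete; the \(R^2\), n, and k values below are purely illustrative:

```python
# Adjusted R^2 from R^2, n observations, and k slope coefficients,
# using the formula above. The numbers below are purely illustrative.
def adjusted_r2(r2, n, k):
    return 1 - (n - 1) / (n - k - 1) * (1 - r2)

# Adding regressors raises R^2 (0.65 -> 0.66) but can lower adjusted R^2
print(round(adjusted_r2(0.65, 50, 3), 3))  # 0.627
print(round(adjusted_r2(0.66, 50, 6), 3))  # 0.613
```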
When \(k\geq 1\), \(R^2>\bar{R}^2\), since adding an extra independent variable decreases \(\bar{R}^2\) if that addition causes only a small increase in \(R^2\). This explains why \(\bar{R}^2\) can be negative even though \(R^2\) is always nonnegative.
A point to note is that when we use \(\bar{R}^2\) to compare regression models, the dependent variable must be defined the same way in each model, and the sample sizes must be the same.
The following are points to keep in mind when applying \(R^2\) or \(\bar{R}^2\):

- An added variable is not necessarily statistically significant just because \(R^2\) or \(\bar{R}^2\) has increased.
- It is not always true that the regressors are a true cause of the dependent variable just because \(R^2\) or \(\bar{R}^2\) is high.
- It is not necessarily true that there is no omitted variable bias just because \(R^2\) or \(\bar{R}^2\) is high.
- It is not necessarily true that we have the most appropriate set of regressors just because \(R^2\) or \(\bar{R}^2\) is high.
- It is not necessarily true that we have an inappropriate set of regressors just because \(R^2\) or \(\bar{R}^2\) is low.
A high \(\bar{R}^2\) does not automatically indicate that a regression is well specified, in the sense of including the right set of variables, since a high \(\bar{R}^2\) could reflect other peculiarities of the data. Moreover, \(\bar{R}^2\) can be negative if the regression model fits extremely poorly.
Joint Hypothesis Test on Multiple Regression Parameters
Previously, we conducted hypothesis tests on individual regression coefficients using the t-test. We now perform a joint hypothesis test on multiple regression coefficients using the F-test, which is based on the F-statistic.
In multiple regression, we cannot test the null hypothesis that all the slope coefficients are equal to 0 using individual t-tests. This is because individual tests on the coefficients do not account for interactions among the independent variables (multicollinearity).
The F-test (a test of the regression's overall significance) determines whether all the slope coefficients in a multiple linear regression are equal to 0. That is, the null hypothesis is stated as \(H_0:\beta_1 =\beta_2 = \dots =\beta_k = 0\), against the alternative hypothesis that at least one slope coefficient is not equal to 0.
To compute the test statistic for the null hypothesis that all the slopes are equal to 0, we need to identify the following:
I. The sum of squared residuals (SSR), given by:
$$\sum_{i=1}^{n} {\left(Y_i-\hat{Y}_i\right)^2}$$
This is also called the residual sum of squares.
II. The explained sum of squares (ESS), given by:
$$\sum_{i=1}^{n} {\left(\hat{Y}_i-\bar{Y}\right)^2}$$
III. The total number of observations (n).
IV. The number of parameters to be estimated. For example, in a regression analysis with one independent variable, there are two parameters: the slope and the intercept coefficients.
Using the above four requirements, we can determine the F-statistic, which measures how effectively the regression equation explains the variation in the dependent variable. The F-statistic is denoted by \(F_{(\text{number of slope parameters},\; n\,-\,\text{number of parameters})}\). For instance, the F-statistic for a multiple regression with two slope coefficients (and one intercept coefficient) is denoted \(F_{2,\,n-3}\), where n-3 is the denominator degrees of freedom of the F-statistic.
The F-statistic is the ratio of the average regression sum of squares to the average sum of squared errors. The average regression sum of squares is the regression sum of squares divided by the number of slope parameters estimated (k). The average sum of squared errors is the sum of squared errors divided by the number of observations (n) less the total number of parameters estimated, n - (k + 1). Mathematically:
$$F=\frac{\text{Average regression sum of squares}}{\text{Average sum of squared errors}}=\frac{\left(\frac{\text{Explained sum of squares (ESS)}}{\text{Number of slope parameters estimated}}\right)}{\left(\frac{\text{Sum of squared residuals (SSR)}}{n\,-\,\text{number of parameters estimated}}\right)}$$
In this case, we are dealing with a multiple linear regression model with k independent variables, whose F-statistic is given by:
$$F=\frac{\left(\frac{ESS}{k}\right)}{\left(\frac{SSR}{n-(k+1)}\right)}$$
In regression output (the ANOVA section), MSR and MSE are displayed as the first and second quantities under the MSS (mean sum of squares) column, respectively. If the regression's overall significance is high, the ratio will be large.
If the independent variables explain none of the variation in the dependent variable, each predicted value \(\hat{Y}_i\) equals the mean of the dependent variable, \(\bar{Y}\). Consequently, the regression sum of squares is 0, implying that the F-statistic is 0.
So, how do we decide the F-test? We reject the null hypothesis at the α significance level if the computed F-statistic is greater than the upper α critical value of the F-distribution with the given numerator and denominator degrees of freedom (the F-test is always a one-tailed test).
Example: Conducting the F-test
An analyst runs a regression of monthly value-stock returns on four independent variables over 48 months.
The total sum of squares for the regression is 360, and the sum of squared errors is 120.
Test the null hypothesis at the 5% significance level (95% confidence) that all four slope coefficients are equal to zero.
Solution
$$H_0:\beta_1=\beta_2=\beta_3=\beta_4=0$$
Versus
$$H_1:\beta_j\neq 0 \text{ (at least one } j \text{ is not equal to zero, } j=1,2,\dots,4)$$
ESS = TSS – SSR = 360 – 120 = 240
The calculated test statistic is:
$$F=\frac{\left(\frac{ESS}{k}\right)}{\left(\frac{SSR}{n-(k+1)}\right)}=\frac{\frac{240}{4}}{\frac{120}{43}}=21.5$$
The critical value \(F_{4,43}\) is approximately 2.59 at the 5% significance level.
Decision: Reject \(H_0\).

Conclusion: At least one of the four slope coefficients is significantly different from zero.
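The calculation in this example can be checked in a few lines of Python:

```python
# Checking the worked F-test example: ESS = 240, SSR = 120, n = 48, k = 4
ess, ssr, n, k = 240, 120, 48, 4
f = (ess / k) / (ssr / (n - (k + 1)))
print(round(f, 1))  # 21.5
```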
Example: Calculating the F-statistic and Conducting the F-test
An investment analyst wants to determine whether the natural log of the ratio of the bid-offer spread to the stock price can be explained by the natural log of the number of market participants and the amount of market capitalization. He assumes a 5% significance level. The following is the result of the regression analysis.

| | Coefficient | Standard Error | t-Statistic |
|---|---|---|---|
| Intercept | 1.6959 | 0.2375 | 7.0206 |
| Number of Market Participants | 1.6168 | 0.0708 | 22.8361 |
| Amount of Market Capitalization | 0.4709 | 0.0205 | 22.9707 |

| ANOVA | df | SS | MSS | F | Significance F |
|---|---|---|---|---|---|
| Regression | 2 | 3,730.1534 | 1,865.0767 | 2,217.95 | 0.00 |
| Residual | 2,797 | 2,351.9973 | 0.8409 | | |
| Total | 2,799 | 5,801.2051 | | | |

| Residual standard error | 0.9180 |
|---|---|
| Multiple R-squared | 0.6418 |
| Observations | 2,800 |
We are concerned with the ANOVA (analysis of variance) results. We need to conduct an F-test to determine the overall significance of the regression.
Solution
So, the hypothesis is stated as:
$$H_0 :\beta_1 =\beta_2=0$$
vs
$$H_1: \text{At least one } \beta_j\neq 0, \; j=1,2$$
There are two slope coefficients, k = 2 (the coefficients on the natural log of the number of market participants and on the amount of market capitalization), which is the numerator degrees of freedom of the F-statistic. For the denominator, the degrees of freedom are n - (k + 1) = 2,800 - 3 = 2,797.
The sum of the squared errors is 2,351.9973, while the regression sum of squares is 3,730.1534. Therefore, the Fstatistic is:
$$F_{2,\,2797}=\frac{\left(\frac{ESS}{k}\right)}{\left(\frac{SSR}{n-(k+1)}\right)}=\frac{\frac{3730.1534}{2}}{\frac{2351.9973}{2797}}=2217.9530$$
Since we are working at the 5% (0.05) significance level, we consult the F-distribution table in the column for 2 numerator degrees of freedom. From the table, the critical value for rejecting the null hypothesis lies between 3.00 and 3.07. The actual F-statistic is 2,217.95, which is far greater than the critical value, and thus we reject the null hypothesis that all the slope coefficients are equal to 0.
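The reported F-statistic can be recomputed directly from the ANOVA output:

```python
# Recomputing the F-statistic from the ANOVA output above
ess, ssr = 3730.1534, 2351.9973
n, k = 2800, 2
f = (ess / k) / (ssr / (n - (k + 1)))
print(round(f, 2))
```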
Calculating the Confidence Interval for the Regression Coefficient
A confidence interval (CI) is an interval in which the actual parameter value is believed to lie with some degree of confidence. Confidence intervals can also be used to perform hypothesis tests. For instance, we may want to assess a stock valuation based on the capital asset pricing model (CAPM). In this case, we may wish to test the hypothesis that the stock's beta equals the market's average beta.
The same approach used in regression analysis with one explanatory variable, based on the t-distribution, also applies in a multiple regression model.
Example: Calculating the Confidence Interval (CI)
An economist tests the hypothesis that interest rates and inflation can explain GDP growth in a country. Using some 73 observations, the analyst formulates the following regression equation:
$$\text{GDP growth}=\hat{b} _0+\hat{b}_1 (\text{Interest})+\hat{b}_2 (\text{Inflation})$$
Regression estimates are as follows:

| | Coefficient | Standard error |
|---|---|---|
| Intercept | 0.04 | 0.6% |
| Interest rates | 0.25 | 6% |
| Inflation | 0.20 | 4% |
What is the 95% confidence interval for the coefficient on the inflation rate?
A. 0.12024 to 0.27976
B. 0.13024 to 0.37976
C. 0.12324 to 0.23976
D. 0.11324 to 0.13976
Solution
The correct answer is A.

From the regression analysis, \(\hat{\beta}_2=0.20\), and the estimated standard error is \(s_{\hat{\beta}_2}=0.04\). The number of degrees of freedom is 73-2-1=70, so the t-critical value at the 0.05 significance level is \(t_{\frac{0.05}{2},\,70}=t_{0.025,70}=1.994\). Therefore, the 95% confidence interval for the inflation coefficient is:
$$\hat{\beta}_2\pm t_c s_{\hat{\beta}_2}=0.2\pm 1.994\times 0.04$$
$$=0.12024 \text{ to } 0.27976$$
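The interval can be reproduced in Python (the critical value 1.994 is read from a t-table, as above):

```python
# 95% CI for the inflation coefficient: beta_hat +/- t_c * s(beta_hat)
beta_hat, se = 0.20, 0.04
t_crit = 1.994   # t(0.025, 70), from the t-table

lower = beta_hat - t_crit * se
upper = beta_hat + t_crit * se
print(round(lower, 5), round(upper, 5))  # 0.12024 0.27976
```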
Question 1
An analyst runs a regression of monthly value-stock returns on four independent variables over 48 months. The total sum of squares for the regression is 360, and the sum of squared errors is 120. Calculate the \(R^2\).
A. 42.1%
B. 50%
C. 33.3%
D. 66.7%
The correct answer is D.
$${ R }^{ 2 }=\frac { ESS }{ TSS } =\frac { 360-120 }{ 360 } = 66.7\%$$
Question 2
Refer to the previous problem and calculate the adjusted \(R^2\).
A. 27.1%
B. 63.6%
C. 72.9%
D. 36.4%
The correct answer is B.
$${ \bar { R } }^{ 2 }=1-\frac { n-1 }{ n-k-1 } \times (1-{ R }^{ 2 })$$
$${ \bar { R } }^{ 2 }=1-\frac { 48-1 }{ 48-4-1 } \times (1-0.667) = 63.6\%$$
Question 3
Refer to the previous problem. The analyst now adds four more independent variables to the regression, and the new \(R^2\) increases to 69%. What is the new adjusted \(R^2\), and which model would the analyst prefer?
A. The analyst would prefer the model with four variables because its adjusted \(R^2\) is higher.
B. The analyst would prefer the model with four variables because its adjusted \(R^2\) is lower.
C. The analyst would prefer the model with eight variables because its adjusted \(R^2\) is higher.
D. The analyst would prefer the model with eight variables because its adjusted \(R^2\) is lower.
The correct answer is A.
$$\text{New } R^2 = 69\%$$
$$\text{New adjusted } R^2 = 1-\frac{48-1}{48-8-1}\times(1-0.69) = 62.6\%$$
The analyst would prefer the first model because it has a higher adjusted \(R^2\), and it has four independent variables as opposed to eight.
Question 4
An economist tests the hypothesis that GDP growth in a certain country can be explained by interest rates and inflation.
Using some 30 observations, the analyst formulates the following regression equation:
$$\text{GDP growth} = { \hat { \beta } }_{ 0 } + { \hat { \beta } }_{ 1 }(\text{Interest}) + { \hat { \beta } }_{ 2 }(\text{Inflation})$$
Regression estimates are as follows:
| | Coefficient | Standard error |
|---|---|---|
| Intercept | 0.10 | 0.5% |
| Interest rates | 0.20 | 0.05 |
| Inflation | 0.15 | 0.03 |
Is the coefficient for interest rates significant at 5%?
A. Since the test statistic < t-critical, we accept H_{0}; the interest rate coefficient is not significant at the 5% level.
B. Since the test statistic > t-critical, we reject H_{0}; the interest rate coefficient is not significant at the 5% level.
C. Since the test statistic > t-critical, we reject H_{0}; the interest rate coefficient is significant at the 5% level.
D. Since the test statistic < t-critical, we accept H_{1}; the interest rate coefficient is significant at the 5% level.
The correct answer is C.
We have GDP growth = 0.10 + 0.20(Int) + 0.15(Inf)
Hypothesis:
$${ H }_{ 0 }:\beta_1 = 0 \quad \text{vs} \quad { H }_{ 1 }:\beta_1\neq 0$$
The test statistic is:
$$ t = \left( \frac { 0.20 – 0 }{ 0.05 } \right) = 4 $$
The critical value is \(t_{(\alpha/2,\; n-k-1)} = t_{0.025,\,27} = 2.052\) (which can be found in the t-table).
Decision: Since the test statistic > t-critical, we reject \(H_0\).
Conclusion: The interest rate coefficient is significant at the 5% level.
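The test in this question can be reproduced in a few lines of Python:

```python
# t-test for the interest rate coefficient: t = (estimate - 0) / standard error
estimate, se = 0.20, 0.05
t_stat = (estimate - 0) / se
t_crit = 2.052   # t(0.025, 27), from the t-table

print(t_stat, t_stat > t_crit)  # 4.0 True -> reject H0
```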