{"id":1267,"date":"2019-10-10T12:45:00","date_gmt":"2019-10-10T12:45:00","guid":{"rendered":"https:\/\/analystprep.com\/study-notes\/?p=1267"},"modified":"2026-05-04T19:02:32","modified_gmt":"2026-05-04T19:02:32","slug":"linear-regression-with-one-regressor","status":"publish","type":"post","link":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/","title":{"rendered":"Linear Regression"},"content":{"rendered":"<p><script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"QAPage\",\n  \"mainEntity\": {\n    \"@type\": \"Question\",\n    \"name\": \"Assume that you have carried out a regression analysis (to determine whether the slope is different from 0) and found out that the slope \u03b2\u0302 = 1.156. Moreover, you have constructed a 95% confidence interval of [0.550, 1.762]. What is the likely value of your test statistic?\",\n    \"text\": \"Assume that you have carried out a regression analysis (to determine whether the slope is different from 0) and found out that the slope \u03b2\u0302 = 1.156. Moreover, you have constructed a 95% confidence interval of [0.550, 1.762]. 
What is the likely value of your test statistic?\",\n    \"answerCount\": 4,\n    \"acceptedAnswer\": {\n      \"@type\": \"Answer\",\n      \"text\": \"3.7387\"\n    },\n    \"suggestedAnswer\": [\n      {\n        \"@type\": \"Answer\",\n        \"text\": \"4.356\"\n      },\n      {\n        \"@type\": \"Answer\",\n        \"text\": \"3.7387\"\n      },\n      {\n        \"@type\": \"Answer\",\n        \"text\": \"0.7845\"\n      },\n      {\n        \"@type\": \"Answer\",\n        \"text\": \"0.6545\"\n      }\n    ]\n  }\n}\n<\/script><\/p>\n<p><script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"QAPage\",\n  \"mainEntity\": {\n    \"@type\": \"Question\",\n    \"name\": \"A trader develops a simple linear regression model to predict the price of a stock. The estimated slope coefficient is 0.60, the standard error is 0.25, and the sample has 30 observations. Determine if the estimated slope coefficient is significantly different than zero at a 5% level of significance by correctly stating the decision rule.\",\n    \"text\": \"A trader develops a simple linear regression model to predict the price of a stock. The estimated slope coefficient is 0.60, the standard error is 0.25, and the sample has 30 observations. 
Determine if the estimated slope coefficient is significantly different than zero at a 5% level of significance by correctly stating the decision rule.\",\n    \"answerCount\": 4,\n    \"acceptedAnswer\": {\n      \"@type\": \"Answer\",\n      \"text\": \"Reject H0; The slope coefficient is statistically significant.\"\n    },\n    \"suggestedAnswer\": [\n      {\n        \"@type\": \"Answer\",\n        \"text\": \"Accept H1; The slope coefficient is statistically significant.\"\n      },\n      {\n        \"@type\": \"Answer\",\n        \"text\": \"Reject H0; The slope coefficient is statistically significant.\"\n      },\n      {\n        \"@type\": \"Answer\",\n        \"text\": \"Reject H0; The slope coefficient is not statistically significant.\"\n      },\n      {\n        \"@type\": \"Answer\",\n        \"text\": \"Accept H1; The slope coefficient is not statistically significant.\"\n      }\n    ]\n  }\n}\n<\/script><br \/>\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"ImageObject\",\n  \"url\": \"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-138.jpg\",\n  \"caption\": \"Ordinary Least Squares\",\n  \"width\": 1434,\n  \"height\": 1151,\n  \"copyrightNotice\": \"\u00a9 2024 AnalystPrep\",\n  \"acquireLicensePage\": \"https:\/\/analystprep.com\/license-info\",\n  \"creditText\": \"AnalystPrep Design Team\",\n  \"creator\": {\n    \"@type\": \"Organization\",\n    \"name\": \"AnalystPrep\"\n  }\n}\n<\/script><\/p>\n<p><script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"ImageObject\",\n  \"url\": \"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-135.jpg\",\n  \"caption\": \"Linear Regression\",\n  \"width\": 1463,\n  \"height\": 1151,\n  \"copyrightNotice\": \"\u00a9 2024 AnalystPrep\",\n  \"acquireLicensePage\": \"https:\/\/analystprep.com\/license-info\",\n  \"creditText\": \"AnalystPrep Design 
Team\",\n  \"creator\": {\n    \"@type\": \"Organization\",\n    \"name\": \"AnalystPrep\"\n  }\n}\n<\/script><\/p>\n<p><strong>After completing this reading, you should be able to:<\/strong><\/p>\n<ul>\n<li>Describe the models that can be estimated using linear regression and differentiate them from those which cannot.<\/li>\n<li>Interpret the results of an OLS regression with a single explanatory variable.<\/li>\n<li>Describe the key assumptions of OLS parameter estimation.<\/li>\n<li>Characterize the properties of OLS estimators and their sampling distributions.<\/li>\n<li>Construct, apply, and interpret hypothesis tests and confidence intervals for a single regression coefficient in a regression.<\/li>\n<li>Explain the steps needed to perform a hypothesis test in linear regression.<\/li>\n<li>Describe the relationship between a t-statistic, its p-value, and a confidence interval.<\/li>\n<\/ul>\n<p>Linear regression is a statistical tool for modeling the relationship between two random variables. 
This chapter concentrates on the simple linear regression model (a regression model with one explanatory variable).<\/p>\n<div style=\"background: #f3f4f6; padding: 16px 14px; border-radius: 12px; margin: 20px 0; text-align: center;\"><a style=\"display: inline-flex; align-items: center; justify-content: center; padding: 12px 18px; border: 2px solid #1d4ed8; border-radius: 999px; color: #1d4ed8; text-decoration: none; font-weight: 600; font-size: 16px; line-height: 1; background: #ffffff; white-space: nowrap;\" href=\"https:\/\/analystprep.com\/free-trial\/\" target=\"_blank\" rel=\"noopener noreferrer\"> Build confidence with FRM simple regression concepts <\/a><\/div>\n<h2>The Linear Regression Model<\/h2>\n<p>As stated earlier, linear regression determines the relationship between the dependent variable Y and the independent (explanatory) variable X.\u00a0 The linear regression with a single explanatory variable is given by:<\/p>\n<p>$$Y={\\beta}_0 +\\beta X +\\epsilon$$<\/p>\n<p>Where:<\/p>\n<p>\\(\u03b2_0\\) = the constant intercept (the value of Y when X=0);<\/p>\n<p>\\(\u03b2\\) = the slope, which measures the sensitivity of Y to variation in X; and<\/p>\n<p>\\(\u03f5\\) = the error term (sometimes referred to as the shock). It represents the portion of Y that cannot be explained by X.<\/p>\n<p>The assumption is that the expectation of the error is 0. 
That is, \\(E(\u03f5)=0\\) and thus,<\/p>\n<p>$$E[Y]=E[\u03b2_0 ]+\u03b2E[X]+E[\u03f5]$$<\/p>\n<p>$$\u21d2E[Y]=\u03b2_0+\u03b2E[X]$$<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-6973\" style=\"max-width: 100%;\" src=\"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-135.jpg\" sizes=\"auto, (max-width: 1463px) 100vw, 1463px\" srcset=\"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-135.jpg 1463w, https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-135-300x236.jpg 300w, https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-135-768x604.jpg 768w, https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-135-1024x806.jpg 1024w, https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-135-400x315.jpg 400w, https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-135-600x472.jpg 600w\" alt=\"Linear Regression\" width=\"1463\" height=\"1151\" \/>Note that \\(\u03b2_0\\) is the value of Y when \\(X=0\\). However, there are cases when the explanatory variable cannot equal 0. In such cases, \\(\\hat{\\beta}_0\\) is interpreted as the value that ensures that the fitted line passes through the point of sample means, \\(\\bar{Y}=\\hat{\\beta}_0 +\\hat{\\beta}\\bar{X}\\), where \\(\\bar{Y}\\) and \\(\\bar{X}\\) are the means of the \\(y_i\\) and \\(x_i\\) observations.<\/p>\n<h2>The Linearity of a Regression<\/h2>\n<p>The independent variable can be continuous, discrete, or even a function of other variables. 
Despite the diversity of the explanatory variables, they must satisfy the following conditions:<\/p>\n<ol>\n<li>The relationship between the dependent variable Y and the explanatory variables \\((X_1, X_2,\u2026, X_n)\\) must be linear.<\/li>\n<li>The error term must be additive, although its variance may depend on the explanatory variables.<\/li>\n<li>The independent (explanatory) variables must be observable. This ensures that a linear regression is not built on missing data.<\/li>\n<\/ol>\n<p>A good example of a violation of the linearity principle is:<\/p>\n<p>$$Y={\\beta}_0 +\\beta X^k +\\epsilon$$<\/p>\n<p>This model cannot be estimated using linear regression due to the presence of the unknown parameter <em>k,<\/em> which violates the first restriction (the regression function is nonlinear in the parameters). This kind of nonlinearity can be corrected through transformation.<\/p>\n<h2>Transformations<\/h2>\n<p>When a linear regression model does not satisfy the linearity conditions stated above, we can reverse the violation of the restrictions by transforming the model. Consider the model:<\/p>\n<p>$$Y={\\beta}_0 X^{\\beta} \\epsilon$$<\/p>\n<p>Where \u03f5 is a positive error term (shock). Clearly, this model violates the linearity restrictions since X is raised to an unknown parameter \u03b2, and the error term is not additive. However, we can make this model linear by taking the natural logarithm of both sides of the equation so that:<\/p>\n<p>$$\\ln Y =\\ln \\left({\\beta}_0 X^{\\beta} \\epsilon \\right)$$<\/p>\n<p>$$\\ln Y=\\ln {\\beta}_0 + \\beta \\ln X +\\ln \\epsilon$$<\/p>\n<p>The last equation can be written as:<\/p>\n<p>$$\\tilde{Y}=\\tilde{\\beta}_0 +\\beta \\tilde{X} +\\tilde{\\epsilon}$$<\/p>\n<p>Where \\(\\tilde{Y}=\\ln Y\\), \\(\\tilde{\\beta}_0=\\ln {\\beta}_0\\), \\(\\tilde{X}=\\ln X\\), and \\(\\tilde{\\epsilon}=\\ln \\epsilon\\). Clearly, this equation satisfies the three linearity conditions. 
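For concreteness, here is a small numerical sketch (not part of the original notes; the data, seed, and parameter values are simulated assumptions) showing that OLS applied to the log-transformed variables recovers the parameters of the multiplicative model \(Y={\beta}_0 X^{\beta}\epsilon\):

```python
import numpy as np

# Simulate the multiplicative model Y = beta0 * X^beta * eps
# (illustrative values only -- these are assumptions, not from the notes).
rng = np.random.default_rng(42)
n = 5_000
beta0_true, beta_true = 2.0, 1.5
x = rng.uniform(1.0, 10.0, n)
eps = np.exp(rng.normal(0.0, 0.1, n))   # positive multiplicative shock
y = beta0_true * x**beta_true * eps

# Taking logs linearizes the model: ln Y = ln beta0 + beta * ln X + ln eps,
# so the usual OLS formulas apply to (ln X, ln Y).
lx, ly = np.log(x), np.log(y)
beta_hat = np.cov(lx, ly)[0, 1] / np.var(lx, ddof=1)   # slope = cov / var
beta0_hat = np.exp(ly.mean() - beta_hat * lx.mean())   # undo the log on the intercept
```

With enough data, the estimated slope and (back-transformed) intercept land close to the values used in the simulation.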
It is worth noting that when we are interpreting the parameters of the transformed model, we measure the effect of the transformed independent variable X on the transformed dependent variable Y.<\/p>\n<p>For instance, \\(\\ln Y=\\ln {\\beta}_0 + \\beta \\ln X +\\ln \\epsilon\\) implies that \u03b2 represents the change in \\(\\ln Y\\) corresponding to a unit change in \\(\\ln X\\).<\/p>\n<h2>The Use of the Dummy Variables<\/h2>\n<p>There are cases where the explanatory variables are binary numbers (0 and 1) representing the occurrence of an event. These binary variables are called dummies. For instance,<\/p>\n<p>Assuming \\({ D }_{ i }\\) is a variable such that:<\/p>\n<p>$$ { D }_{ i }=\\begin{cases} 1 \\quad \\text{ The student-teacher ratio in ith school}&lt;20 \\\\ 0\\quad \\text{ The student-teacher ratio in ith school}\\ge 20 \\end{cases} $$<\/p>\n<p>The following is the population regression model whose regressor is \\({ D }_{ i }\\):<\/p>\n<p>$$Y_i=\u03b2_0+\u03b2D_i+\u03f5_i ,\u2200i=1,\\dots,n$$<\/p>\n<p>\\(\\beta \\) is the coefficient on \\({ D }_{ i }\\).<\/p>\n<p>The equation will change to the one written below under the condition that \\({ D }_{ i }=0\\):<\/p>\n<p>$$ { Y }_{ i }={ \\beta }_{ 0 }+{ \\epsilon }_{ i } $$<\/p>\n<p>When \\({ D }_{ i }=1\\):<\/p>\n<p>$$ { Y }_{ i }={ \\beta }_{ 0 }+ \\beta +{ \\epsilon}_{ i } $$<\/p>\n<p>This implies that when \\({ D }_{ i }=1\\), \\(E\\left( { Y }_{ i }|{ D }_{ i }=1 \\right) ={ \\beta }_{ 0 }+ \\beta \\). The test scores will have a population mean value of \\({ \\beta }_{ 0 }+ \\beta \\) when the ratio of students to teachers is low. 
The conditional expectations of \\({ Y }_{ i }\\) when \\({ D }_{ i }=1 \\) and when \\({ D }_{ i }=0 \\) will have a difference of \\( \\beta \\) between them, written as:<\/p>\n<p>$$ \\left( { \\beta }_{ 0 }+\\beta \\right) -{ \\beta }_{ 0 }= \\beta $$<\/p>\n<p>This makes \\( \\beta \\) the difference between the two population means.<\/p>\n<h2>The Ordinary Least Squares<\/h2>\n<p>The Ordinary Least Squares (OLS) is a method of estimating the linear regression parameters by minimizing the sum of squared deviations. The regression coefficients chosen by the OLS estimators are such that the observed data and the regression line are as close as possible.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-6974\" style=\"max-width: 100%;\" src=\"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-138.jpg\" sizes=\"auto, (max-width: 1434px) 100vw, 1434px\" srcset=\"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-138.jpg 1434w, https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-138-300x241.jpg 300w, https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-138-768x616.jpg 768w, https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-138-1024x822.jpg 1024w, https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-138-400x321.jpg 400w, https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2019\/10\/page-138-600x482.jpg 600w\" alt=\"Ordinary Least Squares\" width=\"1434\" height=\"1151\" \/>Consider a regression equation:<\/p>\n<p>$$Y=\u03b2_0+\u03b2X+\u03f5$$<\/p>\n<p>Where X and Y each consist of n observations, \\( (x_1,x_2,\u2026,x_n) \\) and \\( (y_1,y_2,\u2026,y_n) \\). Assume that \\(x_i\\) and \\(y_i\\) are linearly related; then the parameters can be estimated using OLS. 
The estimators minimize the residual sum of squares such that:<\/p>\n<p>$$\\sum_{i=1}^n {\\left(y_i -\\hat{\\beta_0} -\\hat{ \\beta } x_i\\right)^2}=\\sum_{i=1}^n {\\hat{\\epsilon}_{i}^{2}}$$<\/p>\n<p>Where \\(\\hat{\\beta}_0\\) and \\(\\hat{\\beta}\\) are the parameter estimators (the intercept and the slope, respectively), which minimize the squared deviations between the line \\(\\hat{\\beta_0} +\\hat{\\beta} x_i\\) and \\(y_i\\), so that:<\/p>\n<p>$$\\hat{\\beta}_0=\\bar{Y}-\\hat{\\beta} \\bar{X} $$<\/p>\n<p>and<\/p>\n<p>$$\\hat{\\beta}=\\frac{\\sum_{i=1}^{n} {\\left(x_i -\\bar{X}\\right) \\left(y_i -\\bar{Y}\\right)}}{\\sum_{i=1}^{n} {\\left(x_i -\\bar{X}\\right)^2}}$$<\/p>\n<p>Where \\(\\bar{X}\\) and \\(\\bar{Y}\\) are the means of X and Y, respectively.<\/p>\n<p>After the estimation of the parameters, the estimated regression line is given by:<\/p>\n<p>$$\\hat{y}_i =\\hat{\\beta}_0 +\\hat{\\beta}x_i $$<\/p>\n<p>And the linear regression residual is given by:<\/p>\n<p>$$\\hat{\\epsilon}_i =y_i -\\hat{y}_i =y_i - \\hat{\\beta}_0 -\\hat{\\beta}x_i$$<\/p>\n<p>The variance of the error term is approximated as:<\/p>\n<p>$$s^2=\\frac{1}{n-2} \\sum_{i=1}^{n}{\\hat{\\epsilon}_{i}^{2}}$$<\/p>\n<p>It can also be shown that:<\/p>\n<p>$$s^2=\\frac{n}{n-2} \\hat{\\sigma}_{Y}^{2} \\left(1-\\hat{\\rho}_{XY}^{2} \\right)$$<\/p>\n<p>Note that the divisor n-2 reflects that two parameters have been estimated, and that \\(s^2\\) is an unbiased estimator of \\(\u03c3^2\\). 
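The closed-form estimators above can be checked numerically. The following is a minimal sketch (simulated data; the seed, sample size, and true parameter values are illustrative assumptions), including the identity between the two expressions for \(s^2\):

```python
import numpy as np

# Simulated simple linear regression (true beta0 = 1.0, beta = 0.5; assumptions).
rng = np.random.default_rng(7)
n = 1_000
x = rng.normal(0.0, 2.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

xbar, ybar = x.mean(), y.mean()
# OLS slope and intercept from the formulas in the text.
beta_hat = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
beta0_hat = ybar - beta_hat * xbar          # the line passes through the means

# Residual variance, s^2 = (1/(n-2)) * sum of squared residuals.
resid = y - (beta0_hat + beta_hat * x)
s2 = np.sum(resid ** 2) / (n - 2)

# Equivalent form: s^2 = n/(n-2) * sigma_Y_hat^2 * (1 - rho_XY_hat^2).
rho = np.corrcoef(x, y)[0, 1]
s2_alt = n / (n - 2) * np.var(y) * (1 - rho ** 2)
```

The two expressions for \(s^2\) agree to floating-point precision, and the estimates sit near the true simulated values.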
Moreover, it is assumed that the mean of the residuals is zero and that they are uncorrelated with the explanatory variables \\(X_i\\).<\/p>\n<p>Now, consider the formula:<\/p>\n<p>$$\\hat{\\beta}=\\frac{\\sum_{i=1}^{n} {\\left(x_i -\\bar{X}\\right)\\left(y_i -\\bar{Y}\\right)}}{\\sum_{i=1}^{n} {\\left(x_i -\\bar{X}\\right)^2}}$$<\/p>\n<p>If we multiply both the numerator and the denominator by \\(\\frac{1}{n}\\), we have:<\/p>\n<p>$$\\hat{\\beta}=\\frac{\\frac{1}{n} \\sum_{i=1}^{n} {\\left(x_i -\\bar{X}\\right)\\left(y_i -\\bar{Y}\\right)}}{\\frac{1}{n} \\sum_{i=1}^{n} {\\left(x_i -\\bar{X}\\right)^2}}$$<\/p>\n<p>Note that the numerator is the sample covariance between X and Y, and the denominator is the sample variance of X, so we can write:<\/p>\n<p>$$\\hat{\\beta}=\\frac{\\frac{1}{n} \\sum_{i=1}^{n} {\\left(x_i -\\bar{X}\\right) \\left(y_i -\\bar{Y}\\right)}}{\\frac{1}{n} \\sum_{i=1}^{n} {\\left(x_i -\\bar{X}\\right)^2}}=\\frac{\\hat{\\sigma}_{XY}}{\\hat{\\sigma}_{X}^{2}}$$<\/p>\n<p>Also recall that:<\/p>\n<p>$$Corr(X,Y) ={\\rho}_{XY}=\\frac{Cov(X,Y)}{{\\sigma}_X{{\\sigma}_Y}} $$<\/p>\n<p>$$\\Rightarrow {\\sigma}_{XY} = {\\rho}_{XY} {\\sigma}_{X} {{\\sigma}_{Y}}$$<\/p>\n<p>So,<\/p>\n<p>$$\\hat{\\beta}=\\frac{\\hat{\\rho}_{XY} \\hat{\\sigma}_{X} \\hat{\\sigma}_{Y}}{\\hat{\\sigma}_{X}^{2}}$$<\/p>\n<p>$$\\therefore \\hat{\\beta}=\\frac{\\hat{\\rho}_{XY}\\hat{\\sigma}_Y}{\\hat{\\sigma}_X}$$<\/p>\n<h4><strong>Example: Estimating the Linear Regression Parameters<\/strong><\/h4>\n<p>An investment analyst wants to explain the return from the portfolio (Y) using the prevailing interest rates (X) over the past 30 years. The mean interest rate is 7%, and the mean return from the portfolio is 14%. 
The covariance matrix is given by:<\/p>\n<p>$$\\left[\\begin{matrix}\\hat{\\sigma}_{Y}^{2}&amp;\\hat{\\sigma}_{XY}\\\\ \\hat{\\sigma}_{XY}&amp;\\hat{\\sigma}_{X}^{2}\\end{matrix}\\right]=\\left[\\begin{matrix}1600&amp;500\\\\ 500&amp;338 \\end{matrix}\\right]$$<\/p>\n<p>Assume that the analyst wants to estimate the linear regression equation:<\/p>\n<p>$$\\hat{Y}_{i}=\\hat{\\beta}_{0} +\\hat{\\beta} X_{i}$$<\/p>\n<p>Estimate the parameters and, thus, the model equation.<\/p>\n<p><strong>Solution<\/strong><\/p>\n<p>Now,<\/p>\n<p>$$\\hat{\\beta}=\\frac{\\hat{\\sigma}_{XY}}{\\hat{\\sigma}_{X}^{2}}=\\frac{500}{338}=1.4793$$<\/p>\n<p>and<\/p>\n<p>$$\\hat{\\beta}_{0}=\\bar{Y}-\\hat{\\beta}\\bar{X}=0.14-1.4793\\times 0.07=0.0364$$<\/p>\n<p>So, the estimated equation is given by:<\/p>\n<p>$$\\hat{Y}_{i}=0.0364 +1.4793 X_{i}$$<\/p>\n<h2>Assumptions of OLS<\/h2>\n<p>The OLS estimators assume the following:<\/p>\n<ol>\n<li>The conditional mean of the error term given the independent variable \\(X_i\\) is 0. More precisely, \\(E(\u03f5_i |X_i)=0\\). This also implies that the independent variables and the error term are uncorrelated and that \\(E(\u03f5_i)=0\\).<\/li>\n<li>The pairs of dependent and independent variables, \\((X_i, Y_i),i=1,\u2026,n\\), are i.i.d. This assumption concerns the drawing of the sample: it holds when simple random sampling is applied to draw observations from a single large population. Although the i.i.d. assumption is reasonable for many data collection schemes, not all sampling schemes produce i.i.d. observations on \\((X_i, Y_i)\\).<\/li>\n<li>Large outliers are unlikely. Under this assumption, observations whose values of \\(X_i\\) and\/or \\(Y_i\\) fall far outside the usual range of the data are unlikely. Results of OLS regression can be misleading in the presence of large outliers. 
<\/li>\n<li>The variance of the independent variable is strictly positive. That is, \\(\u03c3_X^2&gt;0\\). This is essential in estimating the regression parameters.<\/li>\n<li>The variance of the error term is independent of the explanatory variables, \\(V(\u03f5_i\u2502X)=\u03c3^2&lt;\u221e\\), and is the same for all the error terms (shocks). This is termed the homoskedasticity assumption.<\/li>\n<\/ol>\n<p>Under these assumptions, the parameter estimators are unbiased. That is, \\( E(\\hat{\u03b2}_0)=\u03b2_0\\) and \\(E(\\hat{\u03b2})=\u03b2\\). They are also consistent: the estimates converge to the true values as the sample size increases.<\/p>\n<p>Lastly, the assumptions ensure that the estimated parameters are asymptotically normally distributed. The asymptotic distribution of the slope is given by:<\/p>\n<p>$$\\sqrt{n}\\left(\\hat{\\beta}-\\beta \\right) \\sim N\\left(0,\\frac{{\\sigma}^{2}}{{\\sigma}_{X}^{2}}\\right)$$<\/p>\n<p>Where \\(\u03c3^2\\) is the variance of the error term and \\(\u03c3_X^2\\) is the variance of X. It is easy to see that the variance of \\(\\hat{\u03b2}\\) increases as \\(\u03c3^2\\) increases.<\/p>\n<p>For the intercept, the asymptotic distribution is defined as:<\/p>\n<p>$$\\sqrt{n}\\left(\\hat{\\beta}_{0}-{\\beta}_{0} \\right) \\sim N\\left(0,\\frac{{\\sigma}^{2}\\left({\\mu}_{X}^{2}+{\\sigma}_{X}^{2}\\right)}{{\\sigma}_{X}^{2}}\\right)$$<\/p>\n<p>According to the central limit theorem (CLT), \\(\\hat{\\beta}\\) can be treated as a normal random variable with mean the true value \\(\\beta\\) and variance \\(\\frac{{\\sigma}^{2}}{n {\\sigma}_{X}^{2}}\\). That is:<\/p>\n<p>$$\\hat{\\beta} \\sim N\\left(\\beta, \\frac{{\\sigma}^2}{n {\\sigma}_{X}^{2} }\\right)$$<\/p>\n<p>However, \\(\u03c3^2\\) is unknown, so we cannot use this variance directly in hypothesis testing. 
We need to use the variance estimator such that:<\/p>\n<p>$$\\hat{\u03c3}^2=s^2$$<\/p>\n<p>So, recall that for a large sample size:<\/p>\n<p>$$\\hat{\\sigma}_{X}^{2}=\\frac{1}{n}\\sum_{i=1}^{n}{\\left(x_i -\\bar{X}\\right)^{2}}$$<\/p>\n<p>$$\\Rightarrow n \\hat{\\sigma}_{X}^{2}=\\sum_{i=1}^{n}{\\left(x_i -\\bar{X}\\right)^{2}}$$<\/p>\n<p>Therefore, the variance of the estimator \\(\\hat{\u03b2}\\) can be written as:<\/p>\n<p>$$\\hat{\\sigma}_{\\beta}^{2} =\\frac{s^{2}}{\\sum_{i=1}^{n}{\\left(x_i -\\bar{X}\\right)^{2}}}=\\frac{s^{2}}{n \\hat{\\sigma}_{X}^{2}}$$<\/p>\n<p>The standard error estimate of \\(\\hat{\u03b2}\\), denoted \\(\\text{SEE}_{\u03b2}\\), is the square root of this variance, so:<\/p>\n<p>$$\\text{SEE}_{\u03b2}=\\sqrt{\\frac{s^{2}}{n \\hat{\\sigma}_{X}^{2}}} = \\frac{s}{\\sqrt{n}\\hat{\\sigma}_{X}}$$<\/p>\n<p>Analogously, the variance of the intercept estimator is:<\/p>\n<p>$$\\hat{\\sigma}_{{\\beta}_{0}}^{2}=\\frac{s^2 \\left(\\hat{\\mu}_{X}^{2}+ \\hat{\\sigma}_{X}^{2}\\right)}{n \\hat{\\sigma}_{X}^{2}}$$<\/p>\n<h2>Hypothesis Testing on the Linear Regression Parameters<\/h2>\n<p>When the OLS assumptions are met, the parameter estimators are approximately normally distributed in large samples. Therefore, we can run hypothesis tests on the parameters just as for any normally distributed random variable.<\/p>\n<p>A hypothesis test is a statistical procedure in which an analyst tests an assumption about the population parameters. For instance, we may want to test the significance of a <strong>single<\/strong> regression coefficient in a simple linear regression. 
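To make the test mechanics concrete, here is a hedged sketch (simulated data; the true slope, seed, and sample size are illustrative assumptions) that computes \(\text{SEE}_{\beta}\), the t-statistic for \(H_0:\beta=0\), and a two-sided p-value from the standard normal distribution:

```python
import math
import numpy as np

# Simulated regression with true slope 0.4 (an illustrative assumption).
rng = np.random.default_rng(11)
n = 200
x = rng.normal(0.0, 1.0, n)
y = 0.4 * x + rng.normal(0.0, 1.0, n)

xbar, ybar = x.mean(), y.mean()
beta_hat = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
beta0_hat = ybar - beta_hat * xbar
resid = y - (beta0_hat + beta_hat * x)
s = math.sqrt(np.sum(resid ** 2) / (n - 2))

# SEE_beta = s / (sqrt(n) * sigma_X_hat) = s / sqrt(sum (x_i - xbar)^2).
see_beta = s / math.sqrt(np.sum((x - xbar) ** 2))
t_stat = (beta_hat - 0.0) / see_beta        # test H0: beta = 0
reject = abs(t_stat) > 1.96                 # 5% two-tailed, large-sample critical value

# Two-sided p-value, 2 * (1 - Phi(|T|)), via the standard normal CDF.
p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t_stat) / math.sqrt(2.0))))
```

With a nonzero true slope and this sample size, the test comfortably rejects the null at the 5% level.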
Most of the hypothesis tests are t-tests.<\/p>\n<p>Whenever a statistical test is being performed, the following procedure is generally considered ideal:<\/p>\n<ol>\n<li>State both the null and the alternative hypotheses;<\/li>\n<li>Select the appropriate test statistic, i.e., what\u2019s being tested, e.g., the population mean, the difference between sample means, or the variance;<\/li>\n<li>Specify the level of significance;<\/li>\n<li>Clearly state the decision rule to guide you in choosing whether or not to reject the null hypothesis;<\/li>\n<li>Calculate the sample statistic; and finally<\/li>\n<li>Make a decision based on the sample results.<\/li>\n<\/ol>\n<p>For instance, assume we are testing the null hypothesis that:<\/p>\n<p>$$ H_0:\u03b2=\u03b2_{H_0} \\quad \\text{vs.} \\quad H_1:\u03b2\u2260\u03b2_{H_0} $$<\/p>\n<p>Where \\(\u03b2_{H_0}\\) is the hypothesized slope parameter.<\/p>\n<p>Then the test statistic will be:<\/p>\n<p>$$T=\\frac{\\hat{\\beta} -{\\beta}_{H_0}}{\\text{SEE}_{\u03b2}}$$<\/p>\n<p>This statistic has an asymptotic normal distribution and is compared to a critical value \\(C_t\\). The null hypothesis is rejected if:<\/p>\n<p>$$|T|&gt;C_t$$<\/p>\n<p>For instance, if we assume a 5% significance level for this two-tailed test, then the critical value is 1.96.<\/p>\n<p>We can also evaluate p-values. For a left-tailed test, the p-value is the probability that lies below the calculated test statistic. 
Similarly, for a right-tailed test, the p-value is the probability that lies above the test statistic.<\/p>\n<p>Denoting the test statistic by T, the p-value for \\(H_{1}:\\beta&gt;0\\) is given by:<\/p>\n<p>$$P(Z&gt;T)=1-P(Z\u2264 T)=1-\\Phi (T)$$<\/p>\n<p>Conversely, for \\(H_{1}:\\beta&lt;0\\), the p-value is given by:<\/p>\n<p>$$P(Z\u2264 T)=\\Phi (T)$$<\/p>\n<p>Where Z is a standard normal random variable.<\/p>\n<p>If the test is two-tailed, the p-value is given by the sum of the probabilities in the two tails. We start by determining the probability lying below the negative value of the test statistic. Then, we add this to the probability lying above the positive value of the test statistic. That is, the p-value for the two-tailed hypothesis test is given by:<\/p>\n<p>$$2[1-\\Phi (|T|)]$$<\/p>\n<p>We can also construct confidence intervals (discussed in detail in the previous chapter). Recall that a confidence interval is the range of values within which the true parameter lies at a given confidence level. For instance, a 95% confidence interval consists of the set of parameter values for which the null hypothesis cannot be rejected when using a 5% test size.<\/p>\n<p>For instance, if we are performing a two-tailed hypothesis test, then the confidence interval is given by:<\/p>\n<p>$$\\left[\\hat{\\beta}-C_t \\times \\text{SEE}_{\u03b2},\\hat{\\beta}+C_t \\times \\text{SEE}_{\u03b2}\\right]$$<\/p>\n<h4><strong>Example: Hypothesis Test on the Linear Regression Parameters<\/strong><\/h4>\n<p>An investment analyst wants to explain the return from the portfolio (Y) using the prevailing interest rates (X) over the past 30 years. The mean interest rate is 7%, and the mean return from the portfolio is 14%. 
The covariance matrix is given by:<\/p>\n<p>$$\\left[\\begin{matrix}\\hat{\\sigma}_{Y}^{2}&amp;\\hat{\\sigma}_{XY}\\\\ \\hat{\\sigma}_{XY}&amp;\\hat{\\sigma}_{X}^{2}\\end{matrix}\\right]=\\left[\\begin{matrix}1600&amp;500\\\\ 500&amp;338 \\end{matrix}\\right]$$<\/p>\n<p>Assume that the analyst wants to estimate the linear regression equation:<\/p>\n<p>$$\\hat{Y}_{i}=\\hat{\\beta}_{0} +\\hat{\\beta} X_{i}$$<\/p>\n<p>Test whether the slope coefficient is equal to zero and construct a 95% confidence interval for the slope coefficient.<\/p>\n<h4><strong>Solution<\/strong><\/h4>\n<p>We start by stating the hypotheses:<\/p>\n<p>$$ H_0:\u03b2=0 \\quad \\text{vs.} \\quad H_1:\u03b2\u22600 $$<\/p>\n<p>The test statistic is:<\/p>\n<p>$$T=\\frac{\\hat{\\beta} -{\\beta}_{H_0}}{\\text{SEE}_{\u03b2}}$$<\/p>\n<p>We had calculated the slope from the matrix as:<\/p>\n<p>$$\\hat{\\beta}=\\frac{\\hat{\\sigma}_{XY}}{\\hat{\\sigma}_{X}^{2}}=\\frac{500}{338}=1.4793$$<\/p>\n<p>Now, recall that:<\/p>\n<p>$$\\text{SEE}_{\\hat{\\beta}}=\\frac{s}{\\sqrt{n} \\hat{\\sigma}_{X}}$$<\/p>\n<p>But<\/p>\n<p>$$s^2=\\frac{n}{n-2} \\hat{\\sigma}_{Y}^{2} \\left(1-\\hat{\\rho}_{XY}^{2} \\right)$$<\/p>\n<p>So, in this case:<\/p>\n<p>$$s^2=\\frac{30}{30-2}\\times 1600\\left(1-\\left(\\frac{500}{\\sqrt{338}\\sqrt{1600}}\\right)^{2}\\right)=921.8089$$<\/p>\n<p>(Note that for \\(\\hat{\\rho}_{XY}\\) we have used the relationship \\(\\hat{\\rho}_{XY}=\\frac{\\hat{\\sigma}_{XY}}{\\hat{\\sigma}_{X}\\hat{\\sigma}_{Y}}\\).)<\/p>\n<p>Therefore,<\/p>\n<p>$$s=\\sqrt{s^2}=\\sqrt{921.8089}=30.3613$$<\/p>\n<p>So,<\/p>\n<p>$$\\text{SEE}_{\\hat{\\beta}}=\\frac{s}{\\sqrt{n} \\hat{\\sigma}_{X}}=\\frac{30.3613}{\\sqrt{30}\\sqrt{338}}=0.30151$$<\/p>\n<p>Therefore, the t-statistic is given by:<\/p>\n<p>$$T=\\frac{\\hat{\\beta} -{\\beta}_{H_0}}{\\text{SEE}_{\u03b2}}=\\frac{1.4793}{0.30151}=4.9063$$<\/p>\n<p>For the two-tailed test, the critical value is 1.96, and since the t-statistic here is greater than the critical value, we reject the null hypothesis.<\/p>\n<p>For the 95% CI, we know it is given by:<\/p>\n<p>$$\\left[\\hat{\\beta}-C_t \\times \\text{SEE}_{\u03b2},\\hat{\\beta}+C_t \\times \\text{SEE}_{\u03b2}\\right] $$<\/p>\n<p>$$=\\left[1.4793-1.96\u00d70.30151 ,1.4793+1.96\u00d70.30151 \\right]$$<\/p>\n<p>$$=[0.8883,2.0703]$$<\/p>\n<blockquote>\n<h2>Practice Question 1<\/h2>\n<p>Assume that you have carried out a regression analysis (to determine whether the slope is different from 0) and found out that the slope \\(\\hat{\u03b2}=1.156\\). Moreover, you have constructed a 95% confidence interval of [0.550, 1.762]. What is the likely value of your test statistic?<\/p>\n<p>A. 4.356<\/p>\n<p>B. 3.7387<\/p>\n<p>C. 0.7845<\/p>\n<p>D. 0.6545<\/p>\n<h4><strong>Solution<\/strong><\/h4>\n<p>The correct answer is <strong>B<\/strong>.<\/p>\n<p>This is a two-tailed test since we\u2019re asked to determine if the slope is different from zero. We know that the confidence interval is:<\/p>\n<p>$$\\left[\\hat{\\beta}-C_t \\times \\text{SEE}_{\u03b2},\\hat{\\beta}+C_t \\times \\text{SEE}_{\u03b2} \\right]$$<\/p>\n<p>Which in this case is [0.550, 1.762].<\/p>\n<p>We need to find the value of \\(\\text{SEE}_{\u03b2} \\). 
That is:<\/p>\n<p>$$1.156-1.96\u00d7\\text{SEE}_{\u03b2}=0.550\u21d2\\text{SEE}_{\u03b2}=\\frac{1.156-0.550}{1.96}=0.3092$$<\/p>\n<p>And we know that:<\/p>\n<p>$$T=\\frac{\\hat{\\beta} -{\\beta}_{H_0}}{\\text{SEE}_{\u03b2}}=\\frac{1.156-0}{0.3092}=3.7387$$<\/p>\n<h2>\u00a0<\/h2>\n<h2>Practice Question 2<\/h2>\n<p>A trader develops a simple linear regression model to predict the price of a stock. The estimated slope coefficient for the regression is\u00a00.60, the standard error is equal to\u00a00.25, and the sample has\u00a030 observations.\u00a0Determine if the estimated slope coefficient is significantly\u00a0different than zero\u00a0at a\u00a05% level of significance by correctly stating the decision rule.<\/p>\n<p>A. Accept\u00a0H<sub>1<\/sub>; The slope coefficient is\u00a0statistically\u00a0significant.<\/p>\n<p>B. Reject H<sub>0<\/sub>; The slope coefficient is statistically\u00a0significant.<\/p>\n<p>C. Reject H<sub>0<\/sub>; The slope coefficient is not statistically\u00a0significant.<\/p>\n<p>D. 
Accept\u00a0H<sub>1<\/sub>; The slope coefficient is not statistically\u00a0significant.<\/p>\n<p><strong>Solution<\/strong><\/p>\n<p>The correct answer is\u00a0<strong>B<\/strong>.<\/p>\n<p><b>Step 1: State the hypotheses<\/b><\/p>\n<p>H<sub>0<\/sub>:\u03b2<sub>1<\/sub>=0<\/p>\n<p>H<sub>1<\/sub>:\u03b2<sub>1<\/sub>\u22600<\/p>\n<p><b>Step 2: Compute the test statistic<\/b><\/p>\n<p>$$ \\frac { \\hat{\u03b2}_1 - \u03b2_{H_0} } { S_{\u03b2_1} } = \\frac { 0.60 - 0} { 0.25 } = 2.4 $$<\/p>\n<p><b>Step 3: Find the critical value, t<sub>c<\/sub><\/b><\/p>\n<p>From the t-table, t<sub>0.025,28<\/sub>\u00a0= 2.048.<\/p>\n<p><b>Step 4: State the decision rule<\/b><\/p>\n<p>Reject H<sub>0<\/sub>; The slope coefficient is statistically\u00a0significant since 2.4 &gt; 2.048.<\/p>\n<\/blockquote>\n<div style=\"background: #f3f4f6; padding: 22px 18px; border-radius: 14px; margin: 30px 0; text-align: center;\">\n<div style=\"font-size: 17px; line-height: 1.4; margin-bottom: 14px; color: #111827; font-weight: 600;\">Ready to estimate coefficients and test hypotheses under FRM exam conditions?<\/div>\n<div style=\"text-align: center; margin: 30px 0;\"><a style=\"display: inline-flex; align-items: center; justify-content: center; padding: 12px 26px; border-radius: 9999px; background: #1e5bd8; color: #ffffff; font-weight: bold; text-decoration: none;\" href=\"https:\/\/analystprep.com\/free-trial\/\" target=\"_blank\" rel=\"noopener noreferrer\"> Start Free Trial \u2192 <\/a><\/div>\n<p style=\"margin-top: 12px; font-size: 16px; line-height: 1.5;\">Practice FRM Part I quantitative methods questions covering simple regression, interpretation, assumptions, and exam-style calculations.   
<\/p>\n<\/p><\/div>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>After completing this reading, you should be able to: Describe the models that can be estimated using linear regression and differentiate them from those which cannot. Interpret the results of an OLS regression with a single explanatory variable. Describe the&#8230;<\/p>\n","protected":false},"author":3,"featured_media":1508,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[7,16],"tags":[],"class_list":["post-1267","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-part-1","category-quantitative-analysis","blog-post","animate"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Simple Linear Regression | AnalystPrep<\/title>\n<meta name=\"description\" content=\"Learn simple linear regression with one regressor and how it measures the relationship between dependent and independent variables.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Simple Linear Regression | AnalystPrep\" \/>\n<meta property=\"og:description\" content=\"Learn simple linear regression with one regressor and how it measures the relationship between dependent and independent variables.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/\" \/>\n<meta property=\"og:site_name\" content=\"CFA, FRM, and 
Actuarial Exams Study Notes\" \/>\n<meta property=\"article:published_time\" content=\"2019-10-10T12:45:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-05-04T19:02:32+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2018\/09\/rawpixel-559744-unsplash-1024x624.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"624\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Nicolas Joyce\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Nicolas Joyce\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/\"},\"author\":{\"name\":\"Nicolas Joyce\",\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/#\\\/schema\\\/person\\\/393e8b0a7757cde1d197fb0c060af25f\"},\"headline\":\"Linear 
Regression\",\"datePublished\":\"2019-10-10T12:45:00+00:00\",\"dateModified\":\"2026-05-04T19:02:32+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/\"},\"wordCount\":3165,\"commentCount\":0,\"image\":{\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/wp-content\\\/uploads\\\/2018\\\/09\\\/rawpixel-559744-unsplash.jpg\",\"articleSection\":[\"Part 1\",\"Quantitative Analysis\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/\",\"url\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/\",\"name\":\"Simple Linear Regression | 
AnalystPrep\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/wp-content\\\/uploads\\\/2018\\\/09\\\/rawpixel-559744-unsplash.jpg\",\"datePublished\":\"2019-10-10T12:45:00+00:00\",\"dateModified\":\"2026-05-04T19:02:32+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/#\\\/schema\\\/person\\\/393e8b0a7757cde1d197fb0c060af25f\"},\"description\":\"Learn simple linear regression with one regressor and how it measures the relationship between dependent and independent variables.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regression-with-one-regressor\\\/#primaryimage\",\"url\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/wp-content\\\/uploads\\\/2018\\\/09\\\/rawpixel-559744-unsplash.jpg\",\"contentUrl\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/wp-content\\\/uploads\\\/2018\\\/09\\\/rawpixel-559744-unsplash.jpg\",\"width\":6000,\"height\":3657},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/frm\\\/part-1\\\/quantitative-analysis\\\/linear-regr
ession-with-one-regressor\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Linear Regression\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/#website\",\"url\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/\",\"name\":\"CFA, FRM, and Actuarial Exams Study Notes\",\"description\":\"Question Bank and Study Notes for the CFA, FRM, and Actuarial exams\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/#\\\/schema\\\/person\\\/393e8b0a7757cde1d197fb0c060af25f\",\"name\":\"Nicolas Joyce\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/684508c19e959bb01da12a9dc741428f559e4e5df43fc41ed68efa7f2d3b2b9d?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/684508c19e959bb01da12a9dc741428f559e4e5df43fc41ed68efa7f2d3b2b9d?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/684508c19e959bb01da12a9dc741428f559e4e5df43fc41ed68efa7f2d3b2b9d?s=96&d=mm&r=g\",\"caption\":\"Nicolas Joyce\"},\"url\":\"https:\\\/\\\/analystprep.com\\\/study-notes\\\/author\\\/kajal\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Simple Linear Regression | AnalystPrep","description":"Learn simple linear regression with one regressor and how it measures the relationship between dependent and independent variables.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/","og_locale":"en_US","og_type":"article","og_title":"Simple Linear Regression | AnalystPrep","og_description":"Learn simple linear regression with one regressor and how it measures the relationship between dependent and independent variables.","og_url":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/","og_site_name":"CFA, FRM, and Actuarial Exams Study Notes","article_published_time":"2019-10-10T12:45:00+00:00","article_modified_time":"2026-05-04T19:02:32+00:00","og_image":[{"width":1024,"height":624,"url":"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2018\/09\/rawpixel-559744-unsplash-1024x624.jpg","type":"image\/jpeg"}],"author":"Nicolas Joyce","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Nicolas Joyce","Est. 
reading time":"11 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/#article","isPartOf":{"@id":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/"},"author":{"name":"Nicolas Joyce","@id":"https:\/\/analystprep.com\/study-notes\/#\/schema\/person\/393e8b0a7757cde1d197fb0c060af25f"},"headline":"Linear Regression","datePublished":"2019-10-10T12:45:00+00:00","dateModified":"2026-05-04T19:02:32+00:00","mainEntityOfPage":{"@id":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/"},"wordCount":3165,"commentCount":0,"image":{"@id":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/#primaryimage"},"thumbnailUrl":"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2018\/09\/rawpixel-559744-unsplash.jpg","articleSection":["Part 1","Quantitative Analysis"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/","url":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/","name":"Simple Linear Regression | 
AnalystPrep","isPartOf":{"@id":"https:\/\/analystprep.com\/study-notes\/#website"},"primaryImageOfPage":{"@id":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/#primaryimage"},"image":{"@id":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/#primaryimage"},"thumbnailUrl":"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2018\/09\/rawpixel-559744-unsplash.jpg","datePublished":"2019-10-10T12:45:00+00:00","dateModified":"2026-05-04T19:02:32+00:00","author":{"@id":"https:\/\/analystprep.com\/study-notes\/#\/schema\/person\/393e8b0a7757cde1d197fb0c060af25f"},"description":"Learn simple linear regression with one regressor and how it measures the relationship between dependent and independent variables.","breadcrumb":{"@id":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/#primaryimage","url":"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2018\/09\/rawpixel-559744-unsplash.jpg","contentUrl":"https:\/\/analystprep.com\/study-notes\/wp-content\/uploads\/2018\/09\/rawpixel-559744-unsplash.jpg","width":6000,"height":3657},{"@type":"BreadcrumbList","@id":"https:\/\/analystprep.com\/study-notes\/frm\/part-1\/quantitative-analysis\/linear-regression-with-one-regressor\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/analystprep.com\/study-notes\/"},{"@type":"ListItem","position":2,"name":"Linear 
Regression"}]},{"@type":"WebSite","@id":"https:\/\/analystprep.com\/study-notes\/#website","url":"https:\/\/analystprep.com\/study-notes\/","name":"CFA, FRM, and Actuarial Exams Study Notes","description":"Question Bank and Study Notes for the CFA, FRM, and Actuarial exams","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/analystprep.com\/study-notes\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/analystprep.com\/study-notes\/#\/schema\/person\/393e8b0a7757cde1d197fb0c060af25f","name":"Nicolas Joyce","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/684508c19e959bb01da12a9dc741428f559e4e5df43fc41ed68efa7f2d3b2b9d?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/684508c19e959bb01da12a9dc741428f559e4e5df43fc41ed68efa7f2d3b2b9d?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/684508c19e959bb01da12a9dc741428f559e4e5df43fc41ed68efa7f2d3b2b9d?s=96&d=mm&r=g","caption":"Nicolas 
Joyce"},"url":"https:\/\/analystprep.com\/study-notes\/author\/kajal\/"}]}},"_links":{"self":[{"href":"https:\/\/analystprep.com\/study-notes\/wp-json\/wp\/v2\/posts\/1267","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/analystprep.com\/study-notes\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/analystprep.com\/study-notes\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/analystprep.com\/study-notes\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/analystprep.com\/study-notes\/wp-json\/wp\/v2\/comments?post=1267"}],"version-history":[{"count":75,"href":"https:\/\/analystprep.com\/study-notes\/wp-json\/wp\/v2\/posts\/1267\/revisions"}],"predecessor-version":[{"id":43264,"href":"https:\/\/analystprep.com\/study-notes\/wp-json\/wp\/v2\/posts\/1267\/revisions\/43264"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/analystprep.com\/study-notes\/wp-json\/wp\/v2\/media\/1508"}],"wp:attachment":[{"href":"https:\/\/analystprep.com\/study-notes\/wp-json\/wp\/v2\/media?parent=1267"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/analystprep.com\/study-notes\/wp-json\/wp\/v2\/categories?post=1267"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/analystprep.com\/study-notes\/wp-json\/wp\/v2\/tags?post=1267"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}