## 10.5 Conclusion

### 10.5.1 Theory-based inference for regression

Recall that in Subsection 10.2.5, when we interpreted the regression table in Table 10.1, we mentioned that R does not compute its values using simulation-based methods for constructing confidence intervals and conducting hypothesis tests as we did in Chapters 8 and 9 using the infer package. Rather, R uses a theory-based approach relying on mathematical formulas, much like the theory-based confidence intervals you saw in Subsection 8.7.2 and the theory-based hypothesis tests you saw in Subsection 9.6.1. These formulas were derived at a time when computers didn't exist, so it would've been incredibly labor intensive to run extensive simulations.

In particular, there is a formula for the standard error of the fitted slope $$b_1$$:

$\text{SE}_{b_1} = \dfrac{\dfrac{s_y}{s_x} \cdot \sqrt{1-r^2}}{\sqrt{n-2}}$

As with many formulas in statistics, there’s a lot going on here, so let’s first break down what each symbol represents. First $$s_x$$ and $$s_y$$ are the sample standard deviations of the explanatory variable bty_avg and the response variable score, respectively. Second, $$r$$ is the sample correlation coefficient between score and bty_avg. This was computed as 0.187 in Chapter 5. Lastly, $$n$$ is the number of pairs of points in the evals_ch5 data frame, here 463.
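To make the formula concrete, here is a minimal R sketch that plugs in these quantities. The correlation `r` and sample size `n` come from the text; the two standard deviations are illustrative values assumed for `score` and `bty_avg` (in practice you would compute them with `sd()` on the columns of `evals_ch5`):

```r
# SE of the fitted slope: SE_b1 = (s_y / s_x) * sqrt(1 - r^2) / sqrt(n - 2)
s_y <- 0.544  # assumed sample SD of the response score
s_x <- 1.53   # assumed sample SD of the explanatory bty_avg
r   <- 0.187  # sample correlation from Chapter 5
n   <- 463    # number of rows in evals_ch5

se_b1 <- (s_y / s_x) * sqrt(1 - r^2) / sqrt(n - 2)
round(se_b1, 3)
```

With these assumed standard deviations, `se_b1` comes out to roughly 0.016, consistent with the `std_error` column of the regression table in Table 10.1.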

To put this formula into words: the standard error of $$b_1$$ depends on the ratio of the variability of the response variable to the variability of the explanatory variable, as measured by the $$s_y / s_x$$ term. Next, it takes into account how strongly the two variables are related through the $$\sqrt{1-r^2}$$ term: the stronger the correlation $$r$$, the smaller this term, and hence the smaller the standard error.

However, the most important observation to make in the previous formula is the $$\sqrt{n - 2}$$ term in the denominator. In other words, as the sample size $$n$$ increases, the standard error $$\text{SE}_{b_1}$$ decreases. Just as we demonstrated in Subsection 7.3.3 when we used shovels with $$n$$ = 25, 50, and 100 slots, the amount of sampling variation of the fitted slope $$b_1$$ will depend on the sample size $$n$$. In particular, as the sample size increases, both the sampling and bootstrap distributions narrow and the standard error $$\text{SE}_{b_1}$$ decreases. Hence, our estimates of $$b_1$$ for the true population slope $$\beta_1$$ get more and more precise.

R then uses this formula for the standard error of $$b_1$$ in the third column of the regression table and subsequently to construct 95% confidence intervals. But what about the hypothesis test? Much like with our theory-based hypothesis test in Subsection 9.6.1, R uses the following $$t$$-statistic as the test statistic for hypothesis testing:

$t = \dfrac{ b_1 - \beta_1}{ \text{SE}_{b_1}}$

And since the null hypothesis $$H_0: \beta_1 = 0$$ is assumed during the hypothesis test, the $$t$$-statistic becomes

$t = \dfrac{ b_1 - 0}{ \text{SE}_{b_1}} = \dfrac{ b_1 }{ \text{SE}_{b_1}}$

What are the values of $$b_1$$ and $$\text{SE}_{b_1}$$? They are in the estimate and std_error columns of the regression table in Table 10.1. Thus the $$t$$-statistic is $$0.067/0.016 = 4.188$$, which differs slightly from the 4.09 reported in the table because the estimate and std_error values displayed are rounded; R performs the division on the unrounded values.
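As a quick check of this arithmetic in R, using the rounded values displayed in Table 10.1:

```r
b1    <- 0.067  # estimate column of Table 10.1 (rounded)
se_b1 <- 0.016  # std_error column of Table 10.1 (rounded)

# t-statistic under H0: beta_1 = 0
t_stat <- b1 / se_b1
round(t_stat, 3)  # 4.188, vs. 4.09 from the unrounded values
```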

Lastly, to compute the $$p$$-value, we need to compare the observed test statistic of 4.09 to the appropriate null distribution. Recall from Section 9.2, that a null distribution is the sampling distribution of the test statistic assuming the null hypothesis $$H_0$$ is true. Much like in our theory-based hypothesis test in Subsection 9.6.1, it can be mathematically proven that this distribution is a $$t$$-distribution with degrees of freedom equal to $$df = n - 2 = 463 - 2 = 461$$.
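This theory-based $$p$$-value can be sketched in R with `pt()`, the cumulative distribution function of the $$t$$-distribution; the doubling accounts for the two-sided alternative $$H_A: \beta_1 \neq 0$$:

```r
t_obs <- 4.09     # observed test statistic from Table 10.1
df    <- 463 - 2  # degrees of freedom n - 2

# Two-sided p-value: area in both tails beyond |t_obs|
p_value <- 2 * pt(-abs(t_obs), df = df)
p_value
```

The resulting $$p$$-value is on the order of $$10^{-5}$$, which displays as 0 in the rounded regression table.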

Don’t worry if you’re feeling a little overwhelmed at this point. There is a lot of background theory to understand before you can fully make sense of the equations for theory-based methods. That being said, theory-based methods and simulation-based methods for constructing confidence intervals and conducting hypothesis tests often yield consistent results. As mentioned before, in our opinion, two large benefits of simulation-based methods over theory-based ones are that (1) they are easier for people new to statistical inference to understand, and (2) they also work in situations where theory-based methods and mathematical formulas don’t exist.

### 10.5.2 Summary of statistical inference

We’ve finished the last two scenarios from the “Scenarios of sampling for inference” table in Subsection 7.5.1, which we re-display in Table 10.4.

TABLE 10.4: Scenarios of sampling for inference
| Scenario | Population parameter | Notation | Point estimate | Symbol(s) |
|:--------:|----------------------|----------|----------------|-----------|
| 1 | Population proportion | $$p$$ | Sample proportion | $$\widehat{p}$$ |
| 2 | Population mean | $$\mu$$ | Sample mean | $$\overline{x}$$ or $$\widehat{\mu}$$ |
| 3 | Difference in population proportions | $$p_1 - p_2$$ | Difference in sample proportions | $$\widehat{p}_1 - \widehat{p}_2$$ |
| 4 | Difference in population means | $$\mu_1 - \mu_2$$ | Difference in sample means | $$\overline{x}_1 - \overline{x}_2$$ |
| 5 | Population regression slope | $$\beta_1$$ | Fitted regression slope | $$b_1$$ or $$\widehat{\beta}_1$$ |

Armed with the regression modeling techniques you learned in Chapters 5 and 6, your understanding of sampling for inference in Chapter 7, and the tools for statistical inference like confidence intervals and hypothesis tests in Chapters 8 and 9, you’re now equipped to study the significance of relationships between variables in a wide array of data! Many of the ideas presented here can be extended into multiple regression and other more advanced modeling techniques.

You’ve now concluded the last major part of the book on “Statistical Inference with infer.” Chapter 11 closes the book with various short case studies involving real data, such as house prices in the city of Seattle, Washington in the US. You’ll see how the principles in this book can help you become a great storyteller with data!