## 9.2 Understanding hypothesis tests

Much like the terminology, notation, and definitions relating to sampling you saw in Section 7.3, there is a lot of terminology, notation, and definitions related to hypothesis testing as well. Learning these may seem like a daunting task at first. However, with practice, practice, and more practice, anyone can master them.

First, a hypothesis is a statement about the value of an unknown population parameter. In our résumé activity, our population parameter of interest is the difference in population proportions $$p_{m} - p_{f}$$. Hypothesis tests can involve any of the population parameters in Table 7.5 corresponding to the five inference scenarios we’ll cover in this book, as well as more advanced types we won’t cover here.

Second, a hypothesis test consists of a test between two competing hypotheses: (1) a null hypothesis $$H_0$$ (pronounced “H-naught”) versus (2) an alternative hypothesis $$H_A$$ (also denoted $$H_1$$).

Generally the null hypothesis is a claim that there is “no effect” or “no difference of interest.” In many cases, the null hypothesis represents the status quo or a situation in which nothing interesting is happening. Furthermore, generally the alternative hypothesis is the claim the experimenter or researcher wants to establish or find evidence to support. It is viewed as a “challenger” hypothesis to the null hypothesis $$H_0$$. In our résumé activity, an appropriate hypothesis test would be:

\begin{aligned} H_0 &: \text{men and women are promoted at the same rate}\\ \text{vs } H_A &: \text{men are promoted at a higher rate than women} \end{aligned}

Note some of the choices we have made. First, we set the null hypothesis $$H_0$$ to be that there is no difference in promotion rate and the “challenger” alternative hypothesis $$H_A$$ to be that there is a difference. While it would not be wrong in principle to reverse the two, it is a convention in statistical inference that the null hypothesis is set to reflect a “null” situation where “nothing is going on.” As we discussed earlier, in this case, $$H_0$$ corresponds to there being no difference in promotion rates. Furthermore, we set $$H_A$$ to be that men are promoted at a higher rate, a subjective choice reflecting a prior suspicion we have that this is the case. We call such alternative hypotheses one-sided alternatives. If someone else, however, does not share such suspicions and only wants to investigate whether there is a difference, whether higher or lower, they would set what is known as a two-sided alternative.

We can re-express the formulation of our hypothesis test using the mathematical notation for our population parameter of interest, the difference in population proportions $$p_{m} - p_{f}$$:

\begin{aligned} H_0 &: p_{m} - p_{f} = 0\\ \text{vs } H_A&: p_{m} - p_{f} > 0 \end{aligned}

Observe how the alternative hypothesis $$H_A$$ is one-sided with $$p_{m} - p_{f} > 0$$. Had we opted for a two-sided alternative, we would have set $$p_{m} - p_{f} \neq 0$$. To keep things simple for now, we’ll stick with the one-sided alternative. We’ll present an example of a two-sided alternative in Section 9.5.

Third, a test statistic is a point estimate/sample statistic formula used for hypothesis testing. Note that a sample statistic is merely a summary statistic based on a sample of observations. Recall we saw in Section 3.3 that a summary statistic takes in many values and returns only one. Here, the samples would be the $$n_m$$ = 24 résumés with male names and the $$n_f$$ = 24 résumés with female names. Hence, the point estimate of interest is the difference in sample proportions $$\widehat{p}_{m} - \widehat{p}_{f}$$.

Fourth, the observed test statistic is the value of the test statistic that we observed in real life. In our case, we computed this value using the data saved in the promotions data frame. It was the observed difference of $$\widehat{p}_{m} -\widehat{p}_{f} = 0.875 - 0.583 = 0.292 = 29.2\%$$ in favor of résumés with male names.
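To make this arithmetic concrete, here is a minimal sketch in Python (the book itself performs these computations in R). The counts used, 21 of 24 résumés with male names promoted and 14 of 24 with female names promoted, are consistent with the proportions reported above.

```python
# Minimal sketch of the observed test statistic: the difference in
# sample proportions. Counts consistent with the reported proportions.
p_hat_m = 21 / 24  # = 0.875, proportion of "male" résumés promoted
p_hat_f = 14 / 24  # ≈ 0.583, proportion of "female" résumés promoted
observed_diff = p_hat_m - p_hat_f
print(round(observed_diff, 3))  # 0.292
```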

Fifth, the null distribution is the sampling distribution of the test statistic assuming the null hypothesis $$H_0$$ is true. Ooof! That’s a long one! Let’s unpack it slowly. The key to understanding the null distribution is that the null hypothesis $$H_0$$ is assumed to be true. We’re not saying that $$H_0$$ is true at this point, we’re only assuming it to be true for hypothesis testing purposes. In our case, this corresponds to our hypothesized universe of no gender discrimination in promotion rates. Assuming the null hypothesis $$H_0$$, also stated as “Under $$H_0$$,” how does the test statistic vary due to sampling variation? In our case, how will the difference in sample proportions $$\widehat{p}_{m} - \widehat{p}_{f}$$ vary due to sampling under $$H_0$$? Recall from Subsection 7.3.2 that distributions displaying how point estimates vary due to sampling variation are called sampling distributions. The only additional thing to keep in mind about null distributions is that they are sampling distributions assuming the null hypothesis $$H_0$$ is true.
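One way to see how a null distribution arises is to simulate it. Under $$H_0$$, promotion outcomes are unrelated to the name on the résumé, so we can shuffle the outcomes across the two groups and recompute the difference in proportions many times. The following Python sketch illustrates the idea; the book does this in R with the infer package, and the counts (21 of 24 and 14 of 24 promoted) are consistent with the proportions above.

```python
import random

# Sketch of a permutation-based null distribution. Under H0, the 35
# promotions among the 48 résumés have nothing to do with gender, so we
# permute the outcomes and recompute the difference in proportions.
promoted = [1] * 21 + [0] * 3 + [1] * 14 + [0] * 10  # male block, then female

random.seed(9)
null_diffs = []
for _ in range(1000):
    shuffled = random.sample(promoted, k=len(promoted))  # one permutation
    p_m = sum(shuffled[:24]) / 24  # first 24 play the role of "male"
    p_f = sum(shuffled[24:]) / 24  # last 24 play the role of "female"
    null_diffs.append(p_m - p_f)

# The simulated null distribution of the difference is centered near 0,
# reflecting the "no difference" universe assumed by H0.
```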

In our case, we previously visualized a null distribution in Figure 9.6, which we re-display in Figure 9.7 using our new notation and terminology. It is the distribution of the 16 differences in sample proportions our friends computed assuming a hypothetical universe of no gender discrimination. We also mark the value of the observed test statistic of 0.292 with a vertical line.

Sixth, the $$p$$-value is the probability of obtaining a test statistic just as extreme or more extreme than the observed test statistic, assuming the null hypothesis $$H_0$$ is true. Double ooof! Let’s unpack this slowly as well. You can think of the $$p$$-value as a quantification of “surprise”: assuming $$H_0$$ is true, how surprised are we with what we observed? Or in our case, in our hypothesized universe of no gender discrimination, how surprised are we that we observed a difference in promotion rates of 0.292 in our collected samples? Very surprised? Somewhat surprised?

The $$p$$-value quantifies this probability, or in the case of our 16 differences in sample proportions in Figure 9.7, what proportion had a more “extreme” result? Here, extreme is defined in terms of the alternative hypothesis $$H_A$$ that “male” applicants are promoted at a higher rate than “female” applicants. In other words, how often was the discrimination in favor of men even more pronounced than $$0.875 - 0.583 = 0.292 = 29.2\%$$?

In this case, 0 times out of 16 did we obtain a difference in proportions greater than or equal to the observed difference of 0.292 = 29.2%. A very rare (in fact, not occurring) outcome! Given the rarity of such a pronounced difference in promotion rates in our hypothesized universe of no gender discrimination, we’re inclined to reject our hypothesized universe. Instead, we favor the hypothesis stating there is discrimination in favor of the “male” applicants. In other words, we reject $$H_0$$ in favor of $$H_A$$.
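Putting these pieces together, here is a self-contained Python sketch of a permutation-based $$p$$-value, using many more than 16 shuffles (the book computes this in R with the infer package; the counts 21/24 and 14/24 are consistent with the proportions reported above).

```python
import random

# Self-contained sketch: simulate the null distribution by permutation,
# then compute the one-sided p-value as the proportion of permuted
# differences at least as large as the observed difference of ~0.292.
promoted = [1] * 21 + [0] * 3 + [1] * 14 + [0] * 10  # 48 outcomes
observed_diff = 21 / 24 - 14 / 24                    # ≈ 0.292

random.seed(9)
null_diffs = []
for _ in range(1000):
    shuffled = random.sample(promoted, k=len(promoted))
    null_diffs.append(sum(shuffled[:24]) / 24 - sum(shuffled[24:]) / 24)

p_value = sum(d >= observed_diff for d in null_diffs) / len(null_diffs)
# With 1000 permutations this p-value comes out small (a few percent),
# matching the intuition that 0.292 is a rare result under H0.
```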

Seventh and lastly, in many hypothesis testing procedures, it is commonly recommended to set the significance level of the test beforehand. It is denoted by the Greek letter $$\alpha$$ (pronounced “alpha”). This value acts as a cutoff on the $$p$$-value, where if the $$p$$-value falls below $$\alpha$$, we would “reject the null hypothesis $$H_0$$.”

Alternatively, if the $$p$$-value does not fall below $$\alpha$$, we would “fail to reject $$H_0$$.” Note the latter statement is not quite the same as saying we “accept $$H_0$$.” This distinction is rather subtle and not immediately obvious. So we’ll revisit it later in Section 9.4.
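The decision rule itself is simple enough to sketch in a few lines of Python. Note that both numbers below are illustrative placeholders, not values computed in this section.

```python
# Hedged sketch of the significance-level decision rule. The alpha and
# p-value here are illustrative placeholders only.
alpha = 0.05
p_value = 0.02

if p_value < alpha:
    decision = "reject H0"
else:
    decision = "fail to reject H0"  # note: NOT the same as "accept H0"

print(decision)  # reject H0
```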

While different fields tend to use different values of $$\alpha$$, some commonly used values for $$\alpha$$ are 0.1, 0.01, and 0.05; with 0.05 being the choice people often make without putting much thought into it. We’ll talk more about $$\alpha$$ significance levels in Section 9.4, but first let’s fully conduct the hypothesis test corresponding to our promotions activity using the infer package.