
This page is about the unpaired two one-sided t-tests procedure (‘unpaired TOST’). The paired two one-sided t-tests procedure (‘paired TOST’) is covered on its own page.

The unpaired TOST can be used to test equivalence: do the means of two populations differ by less than a given amount?

The test was originally published by Donald J Schuirmann [1, 2] (not Donald L Schuirmann, as it appears in the R and Pingouin documentation).

1 Statistical Hypotheses

At first glance, a research question such as “do the means of two populations differ by less than a given amount?” suggests the following pair of hypotheses:

  • \(H_0:\ \mu_x - \mu_y \geq \delta\)
  • \(H_1:\ \mu_x - \mu_y \lt \delta\)

where \(\mu_x\) and \(\mu_y\) are the means of the two populations and \(\delta\) is the tolerance you’ve decided on. However, we need to remember that we can’t assume that \(\mu_x\) is larger than \(\mu_y\), and so the difference between them (if one exists) might be negative! We thus need to have a second set of hypotheses to cover this possibility:

  • \(H_0:\ \mu_x - \mu_y \leq -\delta\)
  • \(H_1:\ \mu_x - \mu_y \gt -\delta\)

This is why it’s called the two one-sided t-tests procedure: having two pairs of hypotheses means doing a one-sided t-test twice. To conclude equivalence you need to reject both null hypotheses, so the overall result is the larger of the two p-values. In practice, this larger p-value comes from the test whose bound has the same sign as the observed difference between the sample means, which is why only one of the two test results ends up being quoted as the final answer.
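
To make this concrete, here is a sketch of the two test statistics under the pooled-variance (equal variances) assumption, where \(\bar{x}\) and \(\bar{y}\) are the sample means, \(n_x\) and \(n_y\) the sample sizes, \(s_p\) the pooled sample standard deviation and \(\delta \gt 0\) the tolerance:

\[
t_1 = \frac{(\bar{x} - \bar{y}) + \delta}{s_p \sqrt{\frac{1}{n_x} + \frac{1}{n_y}}}, \qquad
t_2 = \frac{(\bar{x} - \bar{y}) - \delta}{s_p \sqrt{\frac{1}{n_x} + \frac{1}{n_y}}}
\]

The first statistic, \(t_1\), tests \(H_0:\ \mu_x - \mu_y \leq -\delta\) and must be large and positive to reject it; the second, \(t_2\), tests \(H_0:\ \mu_x - \mu_y \geq \delta\) and must be large and negative. Equivalence is only concluded when both null hypotheses are rejected, which is the same as taking the larger of the two one-sided p-values as the overall result. These are the \(t_1\) and \(t_2\) that appear in the explicit calculations below.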

2 Example One: Pingouin

This comes from the documentation for Pingouin’s tost() function. Here’s the example data:

# Example data
a = [4, 7, 8, 6, 3, 2]
b = [6, 8, 7, 10, 11, 9]

2.1 Doing it with Pingouin

The Pingouin package gives you access to the tost() function which, as the name suggests, performs the TOST. It can be installed from the terminal with:

python3.11 -m pip install pingouin

Replace 3.11 with whichever version of Python you have installed and are using. If you see a message saying “OutdatedPackageWarning: The package pingouin is out of date.”, it means you need to update (upgrade) Pingouin. Do this from the terminal with:

python3.11 -m pip install pingouin --upgrade

As mentioned, the TOST asks “do the means of two populations differ by less than a given amount?”. This given amount is called the ‘bound’ and, by default, Pingouin gives it a value of 1:

import pingouin as pg

pval = pg.tost(a, b).loc['TOST', 'pval']

print(f'TOST: p = {pval:.3f}')
## TOST: p = 0.965

If you’re interested in seeing exactly what this function is doing in the background, you can take a look at the source code.

2.2 Doing it with SciPy

While SciPy doesn’t have a dedicated TOST function, it does have the t-test. So we can use that twice in its ‘one-sided’ configuration. Specifically, we’re using the unpaired (independent) two-sample t-test:

from scipy import stats
import numpy as np

#
# TOST
#
# Magnitude of region of similarity
bound = 1
# Unpaired two-sample t-test
_, p_greater = stats.ttest_ind(np.array(a) + bound, b, alternative='greater')
_, p_less = stats.ttest_ind(np.array(a) - bound, b, alternative='less')
# Choose the maximum p-value
pval = max(p_less, p_greater)

print(f'TOST: p = {pval:5.3f}')
## TOST: p = 0.965

As expected, this gives us the same answer as Pingouin.

SciPy and Numpy can be installed from the terminal with:

python3.11 -m pip install scipy
python3.11 -m pip install numpy

Again, 3.11 should be the version of Python you have installed and are using.

2.3 Doing it Explicitly

To see what is going on in the background in more detail, here’s the same result obtained in more steps:

# Sample means
mean_a = np.mean(a)
mean_b = np.mean(b)
# Sample sizes
sample_size_a = len(a)
sample_size_b = len(b)
# Sample variances
var_a = np.var(a, ddof=1)
var_b = np.var(b, ddof=1)
# Pooled sample variance (ie we assume equal variances)
var_p = ((sample_size_a - 1) * var_a + (sample_size_b - 1) * var_b) / (sample_size_a + sample_size_b - 2)
# Pooled sample standard deviation
std_p = np.sqrt(var_p)
# t-Values
bound = 1
t_1 = (mean_a - mean_b + bound) / np.sqrt(var_p / sample_size_a + var_p / sample_size_b)
t_2 = (mean_a - mean_b - bound) / np.sqrt(var_p / sample_size_a + var_p / sample_size_b)
# Degrees of freedom
dof = sample_size_a + sample_size_b - 2
# Critical values of the t-distribution
alpha = 0.05
# Upper critical value (for the test against the lower bound, -bound)
t_crit_upper = stats.t.ppf(1 - alpha, dof)
# Lower critical value (for the test against the upper bound, +bound)
t_crit_lower = stats.t.ppf(alpha, dof)
# Test the t-statistics against the t-values required for significance
if (t_1 > t_crit_upper) and (t_2 < t_crit_lower):
    print('Reject both null hypotheses; the means of the two samples are equivalent')
else:
    print('Fail to reject at least one null hypothesis; equivalence of the means cannot be concluded')
## Fail to reject at least one null hypothesis; equivalence of the means cannot be concluded
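
To tie this back to the p-value reported by Pingouin and SciPy, the one-sided p-values can also be computed directly from these two t-statistics (a small addition using the variables defined above):

# One-sided p-value for the null hypothesis that the difference <= -bound
p_greater = 1 - stats.t.cdf(t_1, dof)
# One-sided p-value for the null hypothesis that the difference >= +bound
p_less = stats.t.cdf(t_2, dof)
# The TOST p-value is the larger of the two
pval = max(p_less, p_greater)

print(f'TOST: p = {pval:.3f}')
## TOST: p = 0.965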

3 Example Two: Real Statistics

This example comes from the Real Statistics website and uses this data:

import pandas as pd

data = {
    'values': [
        2311, 2274, 2262, 2297, 2291, 2319, 2263, 2329, 2289, 2287, 2290, 2301,
        2298, 2260, 2250, 2242, 2302, 2297, 2293, 2286, 2270, 2313, 2327, 2290,
    ],
    'types': ['original'] * 12 + ['new'] * 12
}
df = pd.DataFrame(data)

Pandas can be installed from the terminal with:

python3.11 -m pip install pandas

3.1 Summary Statistics

Here’s how to replicate the values in the top table of Figure 1 in the example:

# Initialise the summary table
summary = pd.DataFrame()

# Count
n = df.groupby('types').count()
summary['Count'] = n['values']
# Mean
summary['Mean'] = df.groupby('types').mean()['values']
# Variance
summary['Variance'] = df.groupby('types').var()['values']

print(summary.round(2))
##           Count     Mean  Variance
## types                             
## new          12  2285.67    650.79
## original     12  2292.75    423.84
# Difference of the means
diff_mean = df.groupby('types').mean().diff().values[1][0]
print(f'Difference of the means = {diff_mean:.3f}')
## Difference of the means = 7.083
# Mean variance
mean_var = df.groupby('types').var().mean().values[0]
print(f'Mean variance = {mean_var:.3f}')
## Mean variance = 537.314
# Cohen's d
cohens_d = diff_mean / np.sqrt(mean_var)
print(f"Cohen's d = {cohens_d:.3f}")
## Cohen's d = 0.306
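
As a cross-check (an addition, not part of the original example), Pingouin’s compute_effsize() function gives the same value when fed the two groups directly; filtering the data frame by the ‘types’ column is just one way of extracting them:

# Cross-check: Cohen's d via Pingouin (uses the pooled standard deviation)
original = df[df['types'] == 'original']['values']
new = df[df['types'] == 'new']['values']
cohens_d = pg.compute_effsize(original, new, paired=False, eftype='cohen')

print(f"Cohen's d = {cohens_d:.3f}")
## Cohen's d = 0.306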

3.2 t-Test: Equal Variances

Here’s how to replicate the values in the middle table of Figure 1 in the example:

# Standard deviations
std = df.groupby('types').std(ddof=1)
# Standard errors of the means
sem = std / np.sqrt(n)
# Standard error of the difference of the means
sed = np.sqrt(std['values']['new']**2 / n['values']['new'] + std['values']['original']**2 / n['values']['original'])

print(f'Standard error of the difference of the means = {sed:.3f}')
## Standard error of the difference of the means = 9.463
# Degrees of freedom
dof = n.sum() - 2
print(f"Degrees of freedom = {dof['values']}")
## Degrees of freedom = 22
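
Before running the tests themselves, it’s worth noting (as a quick sanity check; this is not part of the original example) that the t-statistic for a hypothesised difference of zero is simply the difference of the means divided by its standard error:

# t-statistic for a hypothesised difference of 0
t_check = diff_mean / sed

print(f't-stat = {t_check:.3f}')
## t-stat = 0.749

This matches the t-statistic that SciPy reports below.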

Now it’s time to do the t-tests. Let’s re-configure the raw data so it’s slightly easier to work with:

# Data frame to dictionary
dct = df.groupby('types')['values'].apply(list).to_dict()
# Data
x = dct['original']
y = dct['new']

The example we’re following does both a one-tailed and a two-tailed test, where the one-tailed test has a bound of 0 (which they call the ‘hyp mean’ or ‘hypothetical difference of the means’) and an alternative hypothesis of ‘greater’:

# One-sided unpaired two-sample t-test
bound = 0
t_greater, p_greater = stats.ttest_ind(np.array(x) + bound, y, alternative='greater')

print(f't-stat = {t_greater:.3f}; p-value = {p_greater:.3f}')
## t-stat = 0.749; p-value = 0.231
# Two-sided unpaired two-sample t-test
bound = 0
t_stat, pval = stats.ttest_ind(x, y, alternative='two-sided')

print(f't-stat = {t_stat:.3f}; p-value = {pval:.3f}')
## t-stat = 0.749; p-value = 0.462

Note that using a bound of 0 is not practical: if your research question is asking whether the difference between the means of two groups lies between −0 and +0, you are really asking whether the means are equal. In other words, you should be using a single two-sample t-test (a test for a difference), not the TOST (which, remember, is two one-sided two-sample t-tests). This is actually what the example is doing, but it’s not very clear!

If we use a significance level of 0.1, here are the critical t-values we would need to reach in order to get significance:

# Significance level
alpha = 0.1
# Percent-point function (aka quantile function) of the t-distribution
t_crit_one = stats.t.ppf(1 - alpha, dof)[0]
print(f't-crit (one-tailed): {t_crit_one:.3f}')
## t-crit (one-tailed): 1.321
# Percent-point function (aka quantile function) of the t-distribution
t_crit_two = stats.t.ppf(1 - (alpha / 2), dof)[0]
print(f't-crit (two-tailed): {t_crit_two:.3f}')
## t-crit (two-tailed): 1.717

Now for the confidence interval for the difference of the means (given the significance level of 0.1 this is a 90% interval, and the fact that it extends to both sides of the difference of the means implies we need to use the critical t-value from the two-sided test):

# Margin of error
d = t_crit_two * sed
# Confidence interval
ci_upper = diff_mean + d
ci_lower = diff_mean - d

print(f'Difference of the means = {diff_mean:.3f} 90% CI [{ci_lower:.3f}, {ci_upper:.3f}]')
## Difference of the means = 7.083 90% CI [-9.166, 23.333]
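
This interval can be cross-checked (an addition to the original example) against SciPy’s interval() method for the t-distribution, which returns the same limits:

# Cross-check: 90% confidence interval straight from the t-distribution
lower, upper = stats.t.interval(0.90, dof['values'], loc=diff_mean, scale=sed)

print(f'90% CI [{lower:.3f}, {upper:.3f}]')
## 90% CI [-9.166, 23.333]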

3.3 Final Answer

The example only shows one one-sided t-test for equal variances in Figure 1, which is only one-half of the full TOST. Here’s the full thing with a bound of 25:

#
# TOST
#
# Magnitude of region of similarity
bound = 25
# Unpaired two-sample t-test
_, p_greater = stats.ttest_ind(np.array(x) + bound, y, alternative='greater')
_, p_less = stats.ttest_ind(np.array(x) - bound, y, alternative='less')
# Choose the maximum p-value
pval = max(p_less, p_greater)

print(f'TOST: p = {pval:5.3f}')
## TOST: p = 0.036

Here it is with Pingouin:

pval = pg.tost(x, y, bound=25).loc['TOST', 'pval']

print(f'TOST: p = {pval:.3f}')
## TOST: p = 0.036

This p-value of 0.036 matches that obtained in the example (although this isn’t shown in the figure, it’s in the text).

3.4 Doing it Explicitly

Once again, here’s the same result obtained in more steps:

# Sample
a = [2311, 2274, 2262, 2297, 2291, 2319, 2263, 2329, 2289, 2287, 2290, 2301]
b = [2298, 2260, 2250, 2242, 2302, 2297, 2293, 2286, 2270, 2313, 2327, 2290]

# Sample means
mean_a = np.mean(a)
mean_b = np.mean(b)
# Sample sizes
sample_size_a = len(a)
sample_size_b = len(b)
# Sample variances
var_a = np.var(a, ddof=1)
var_b = np.var(b, ddof=1)
# Pooled sample variance (ie we assume equal variances)
var_p = ((sample_size_a - 1) * var_a + (sample_size_b - 1) * var_b) / (sample_size_a + sample_size_b - 2)
# Pooled sample standard deviation
std_p = np.sqrt(var_p)
# t-Values
bound = 25
t_1 = (mean_a - mean_b + bound) / np.sqrt(var_p / sample_size_a + var_p / sample_size_b)
t_2 = (mean_a - mean_b - bound) / np.sqrt(var_p / sample_size_a + var_p / sample_size_b)
# Degrees of freedom
dof = sample_size_a + sample_size_b - 2
# Critical values of the t-distribution
alpha = 0.05
# Upper critical value (for the test against the lower bound, -bound)
t_crit_upper = stats.t.ppf(1 - alpha, dof)
# Lower critical value (for the test against the upper bound, +bound)
t_crit_lower = stats.t.ppf(alpha, dof)
# Test the t-statistics against the t-values required for significance
if (t_1 > t_crit_upper) and (t_2 < t_crit_lower):
    print('Reject both null hypotheses; the means of the two samples are equivalent')
else:
    print('Fail to reject at least one null hypothesis; equivalence of the means cannot be concluded')
## Reject both null hypotheses; the means of the two samples are equivalent
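
As before (an addition mirroring the first example), the one-sided p-values can be computed directly from these t-statistics, giving the same TOST p-value as Pingouin and SciPy:

# One-sided p-values from the two t-statistics
p_greater = 1 - stats.t.cdf(t_1, dof)
p_less = stats.t.cdf(t_2, dof)
# The TOST p-value is the larger of the two
pval = max(p_less, p_greater)

print(f'TOST: p = {pval:5.3f}')
## TOST: p = 0.036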

4 References

  1. Schuirmann, D. “On hypothesis testing to determine if the mean of a normal distribution is contained in a known interval”. Biometrics 1981; 37:617.
  2. Schuirmann, D. “A comparison of the Two One-Sided Tests Procedure and the Power Approach for assessing the equivalence of average bioavailability”. Journal of Pharmacokinetics and Biopharmaceutics 1987; 15(6):657–680. DOI: 10.1007/BF01068419. PMID: 3450848. Available here.
