
This page is a follow-on from this one on Bland-Altman analysis.

Bland-Altman analysis is used to assess the agreement between two methods of measuring something, usually clinical information. It was discussed in Bland & Altman’s 1986 paper[1]; see also the Wikipedia page.

Giavarina analysis is identical to Bland-Altman analysis except that it accounts for heteroscedasticity. It does this by using percentage differences (relative to the means) on the y-axis instead of raw differences. It was published in Giavarina’s 2015 paper[2].

Calculating agreement is useful when, for example, deciding whether a new method of measurement agrees well enough with an established one to be used in its place.

[1] Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986 Feb;327(8476):307–10. DOI: 10.1016/S0140-6736(86)90837-8. PMID: 2868172.

[2] Giavarina D. Understanding Bland Altman analysis. Biochemia Medica. 2015;25(2):141–51. DOI: 10.11613/BM.2015.015.

1 Using Percentage Differences: Giavarina Analysis

This example reproduces the one used in Giavarina (2015).

Imagine a situation where two methods are used to measure something in particular and the two readings differ by 10 units. If this represents a large proportional difference - e.g. if one reading was 10 and the other was 20 - then it implies that the agreement between the methods is poor. On the other hand, if it only represents a small proportional difference - e.g. if one reading was 1000 and the other was 1010 - then we might not care; we could say that the agreement is good. Traditional Bland-Altman analysis does not capture this distinction: it assumes that all differences are equally consequential to the overall agreement. Giavarina analysis addresses this and, as a result, is more appropriate for data that is heteroscedastic - i.e. data that becomes more spread out as the readings become larger.
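
As a quick illustration of why the percentage difference (relative to the mean of each pair) captures this better than the raw difference does, here are those two examples worked through:

import numpy as np

# A raw difference of 10 units that is proportionally large
a, b = 10, 20
print(f'{(b - a) / np.mean([a, b]) * 100:.1f}%')
## 66.7%

# A raw difference of 10 units that is proportionally small
a, b = 1000, 1010
print(f'{(b - a) / np.mean([a, b]) * 100:.1f}%')
## 1.0%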

The following hypothetical data is used as an example in Giavarina (2015):

import pandas as pd

df = pd.DataFrame({
    'Method B': [
        8.0, 16.0, 30.0, 24.0, 39.0, 54.0, 40.0, 68.0, 72.0, 62.0, 122.0, 80.0,
        181.0, 259.0, 275.0, 380.0, 320.0, 434.0, 479.0, 587.0, 626.0, 648.0,
        738.0, 766.0, 793.0, 851.0, 871.0, 957.0, 1001.0, 960.0
    ],
    'Method A': [
        1.0, 5.0, 10.0, 20.0, 50.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0,
        150.0, 200.0, 250.0, 300.0, 350.0, 400.0, 450.0, 500.0, 550.0, 600.0,
        650.0, 700.0, 750.0, 800.0, 850.0, 900.0, 950.0, 1000.0
    ]
})

2 Plotting Options

This next chunk of code isn’t actually relevant to Giavarina or Bland-Altman analysis; it just sets some parameters for creating nice plots in Python:

import matplotlib.pyplot as plt

# Options
x = 5  # Want figures to be A5
plt.rc('figure', figsize=[46.82 * .5**(.5 * x), 33.11 * .5**(.5 * x)])
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.rc('text.latex', preamble=r'\usepackage{textgreek}')
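
Note that setting usetex=True requires a working LaTeX installation; if one is not available, the usetex and preamble lines can be left out (in which case the \% escapes used in the plot labels further down should be changed to plain % signs).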

3 Regression Analysis

Just like in the Bland-Altman paper, the first recommended step is to visualise the data. Giavarina takes the extra step of performing Passing-Bablok regression, which is explained in full on this page.
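
The plotting code below calls a passing_bablok() function that should return the slope and the intercept of the fit, each as a (point estimate, lower confidence bound, upper confidence bound) tuple. The full version is on the linked page; a rough, self-contained sketch of it - using the standard pairwise-gradient algorithm and glossing over tied x-values and gradients of exactly -1 - is as follows:

import numpy as np
import scipy.stats as stats


def passing_bablok(x, y):
    """Passing-Bablok regression (a simplified sketch)."""
    x, y = np.array(x), np.array(y)
    n = len(x)
    # Gradients of the straight lines between each pair of points
    gradients = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            if x[j] != x[i]:
                gradients.append((y[j] - y[i]) / (x[j] - x[i]))
    gradients = np.sort(gradients)
    N = len(gradients)
    # Offset to account for gradients below -1
    K = int(np.sum(gradients < -1))
    # The slope estimate is the shifted median of the gradients
    if N % 2:
        beta = gradients[(N + 1) // 2 + K - 1]
    else:
        beta = np.sqrt(gradients[N // 2 + K - 1] * gradients[N // 2 + K])
    # 95% confidence interval for the slope
    C = stats.norm.ppf(0.975) * np.sqrt(n * (n - 1) * (2 * n + 5) / 18)
    M1 = int(np.round((N - C) / 2))
    M2 = N - M1 + 1
    beta_lb, beta_ub = gradients[M1 + K - 1], gradients[M2 + K - 1]
    # The intercepts are medians of y - slope * x
    alpha = np.median(y - beta * x)
    alpha_lb = np.median(y - beta_ub * x)
    alpha_ub = np.median(y - beta_lb * x)
    return (beta, beta_lb, beta_ub), (alpha, alpha_lb, alpha_ub)

With a function like this defined, the data and the fitted regression lines can be plotted: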

import numpy as np

ax = plt.axes()
ax.set(
    title='Regression Analysis of Hypothetical Data',
    xlabel='Method A',
    ylabel='Method B'
)
# Scatter plot
ax.scatter(df['Method A'], df['Method B'], c='k', s=20, alpha=0.6, marker='o')
# Get axis limits
left, right = plt.xlim()
bottom, top = plt.ylim()
# Set wider axis limits
ax.set_xlim(0, right)
ax.set_ylim(0, right)
# Reference line
ax.plot([0, right], [0, right], c='grey', ls='--', label='Line of Equality')
# Passing-Bablok regression line
beta, alpha = passing_bablok(df['Method A'], df['Method B'])
x = np.array([left, right])
y = beta[0] * x + alpha[0]
ax.plot(x, y, label=f'{beta[0]:4.2f}x + {alpha[0]:4.2f}')
# Passing-Bablok regression line - confidence intervals
x = np.array([left, right])
y_lb = beta[1] * x + alpha[1]
y_ub = beta[2] * x + alpha[2]
ax.plot(
    x, y_ub, c='tab:blue', alpha=0.2,
    label=f'Upper CI: {beta[2]:4.2f}x + {alpha[2]:4.2f}'
)
ax.plot(
    x, y_lb, c='tab:blue', alpha=0.2,
    label=f'Lower CI: {beta[1]:4.2f}x + {alpha[1]:4.2f}'
)
ax.fill_between(x, y_ub, y_lb, alpha=0.2)
# Set aspect ratio
ax.set_aspect('equal')
# Legend
ax.legend(frameon=False)
# Show plot
plt.show()

The above reproduces Figure 1 from Giavarina (2015).

4 Bland-Altman Analysis

Figure 6 in Giavarina (2015) is a traditional Bland-Altman plot.

Summary statistics:

# Means of the paired measurements and the differences between them
# (df.diff(axis=1) takes the later column minus the earlier one, ie Method A minus Method B)
means = df.mean(axis=1)
diffs = df.diff(axis=1).iloc[:, -1]
# Bias (mean difference) and the sample standard deviation of the differences
bias = np.mean(diffs)
sd = np.std(diffs, ddof=1)
# Limits of agreement
upper_loa = bias + 1.96 * sd
lower_loa = bias - 1.96 * sd

Confidence intervals:

import scipy.stats as stats

# Sample size
n = df.shape[0]
# Variance
var = sd**2
# Standard error of the bias
se_bias = np.sqrt(var / n)
# Standard error of the limits of agreement
se_loas = np.sqrt(3 * var / n)
# Endpoints of the range that contains 95% of the Student’s t distribution
t_interval = stats.t.interval(0.95, df=n - 1)
# Confidence intervals
ci_bias = bias + np.array(t_interval) * se_bias
ci_upperloa = upper_loa + np.array(t_interval) * se_loas
ci_lowerloa = lower_loa + np.array(t_interval) * se_loas
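
If the numbers themselves are wanted rather than (or as well as) a plot, they can simply be printed:

print(f'Bias: {bias:.1f} (95% CI {ci_bias[0]:.1f} to {ci_bias[1]:.1f})')
print(f'Upper LoA: {upper_loa:.1f} (95% CI {ci_upperloa[0]:.1f} to {ci_upperloa[1]:.1f})')
print(f'Lower LoA: {lower_loa:.1f} (95% CI {ci_lowerloa[0]:.1f} to {ci_lowerloa[1]:.1f})')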

Plot:

ax = plt.axes()
ax.set(
    title='Bland-Altman Plot for Two Hypothetical Measurement Methods',
    xlabel='Mean', ylabel='Difference'
)
# Scatter plot
ax.scatter(means, diffs, c='k', s=20, alpha=0.6, marker='o')
# Plot the zero line
ax.axhline(y=0, c='k', lw=0.5)
# Plot the bias and the limits of agreement
ax.axhline(y=upper_loa, c='grey', ls='--')
ax.axhline(y=bias, c='grey', ls='--')
ax.axhline(y=lower_loa, c='grey', ls='--')
# Get axis limits
left, right = plt.xlim()
bottom, top = plt.ylim()
# Increase the y-axis limits to create space for the confidence intervals
max_y = max(abs(ci_lowerloa[0]), abs(ci_upperloa[1]), abs(bottom), abs(top))
ax.set_ylim(-max_y * 1.1, max_y * 1.1)
# Set x-axis limits
domain = right - left
ax.set_xlim(left - domain * 0.05, left + domain * 1.15)
# Add the annotations
ax.annotate(r'+1.96$\times$SD', (right, upper_loa), xytext=(0, 7), textcoords='offset pixels')
ax.annotate(f'{upper_loa:+4.2f}', (right, upper_loa), xytext=(0, -25), textcoords='offset pixels')
ax.annotate('Bias', (right, bias), xytext=(0, 7), textcoords='offset pixels')
ax.annotate(f'{bias:+4.2f}', (right, bias), xytext=(0, -25), textcoords='offset pixels')
ax.annotate(r'-1.96$\times$SD', (right, lower_loa), xytext=(0, 7), textcoords='offset pixels')
ax.annotate(f'{lower_loa:+4.2f}', (right, lower_loa), xytext=(0, -25), textcoords='offset pixels')
# Plot the confidence intervals
ax.plot([left] * 2, list(ci_upperloa), c='grey', ls='--')
ax.plot([left] * 2, list(ci_bias), c='grey', ls='--')
ax.plot([left] * 2, list(ci_lowerloa), c='grey', ls='--')
# Plot the confidence intervals' caps
x_range = [left - domain * 0.025, left + domain * 0.025]
ax.plot(x_range, [ci_upperloa[1]] * 2, c='grey', ls='--')
ax.plot(x_range, [ci_upperloa[0]] * 2, c='grey', ls='--')
ax.plot(x_range, [ci_bias[1]] * 2, c='grey', ls='--')
ax.plot(x_range, [ci_bias[0]] * 2, c='grey', ls='--')
ax.plot(x_range, [ci_lowerloa[1]] * 2, c='grey', ls='--')
ax.plot(x_range, [ci_lowerloa[0]] * 2, c='grey', ls='--')
# Show plot
plt.show()

5 Giavarina Analysis

The percentage differences are calculated relative to the means:

# Means of the paired measurements and the raw differences between them
means = df.mean(axis=1)
diffs = df.diff(axis=1).iloc[:, -1]
# Differences as percentages of the means
percent_diffs = diffs / means * 100
# Bias and limits of agreement, now in percentage terms
bias = np.mean(percent_diffs)
sd = np.std(percent_diffs, ddof=1)
upper_loa = bias + 1.96 * sd
lower_loa = bias - 1.96 * sd

Confidence intervals:

# Sample size
n = df.shape[0]
# Variance
var = sd**2
# Standard error of the bias
se_bias = np.sqrt(var / n)
# Standard error of the limits of agreement
se_loas = np.sqrt(3 * var / n)
# Endpoints of the range that contains 95% of the Student’s t distribution
t_interval = stats.t.interval(0.95, df=n - 1)
# Confidence intervals
ci_bias = bias + np.array(t_interval) * se_bias
ci_upperloa = upper_loa + np.array(t_interval) * se_loas
ci_lowerloa = lower_loa + np.array(t_interval) * se_loas

Plot (this reproduces Figure 7 in Giavarina (2015)):

ax = plt.axes()
ax.set(
    title='Giavarina Plot for Two Hypothetical Measurement Methods',
    xlabel='Means', ylabel=r'Percentage Differences (\%)'
)
# Scatter plot
ax.scatter(means, percent_diffs, c='k', s=20, alpha=0.6, marker='o')
# Plot the zero line
ax.axhline(y=0, c='k', lw=0.5)
# Plot the bias and the limits of agreement
ax.axhline(y=upper_loa, c='grey', ls='--')
ax.axhline(y=bias, c='grey', ls='--')
ax.axhline(y=lower_loa, c='grey', ls='--')
# Get axis limits
left, right = plt.xlim()
bottom, top = plt.ylim()
# Increase the y-axis limits to create space for the confidence intervals
max_y = max(abs(ci_lowerloa[0]), abs(ci_upperloa[1]), abs(bottom), abs(top))
ax.set_ylim(-max_y * 1.1, max_y * 1.1)
# Set x-axis limits
domain = right - left
ax.set_xlim(left - domain * 0.05, left + domain * 1.13)
# Add the annotations
ax.annotate(r'+1.96$\times$SD', (right, upper_loa), xytext=(0, 7), textcoords='offset pixels')
ax.annotate(fr'{upper_loa:+4.2f}\%', (right, upper_loa), xytext=(0, -25), textcoords='offset pixels')
ax.annotate('Bias', (right, bias), xytext=(0, 7), textcoords='offset pixels')
ax.annotate(fr'{bias:+4.2f}\%', (right, bias), xytext=(0, -25), textcoords='offset pixels')
ax.annotate(r'-1.96$\times$SD', (right, lower_loa), xytext=(0, 7), textcoords='offset pixels')
ax.annotate(fr'{lower_loa:+4.2f}\%', (right, lower_loa), xytext=(0, -25), textcoords='offset pixels')
# Plot the confidence intervals
ax.plot([left] * 2, list(ci_upperloa), c='grey', ls='--')
ax.plot([left] * 2, list(ci_bias), c='grey', ls='--')
ax.plot([left] * 2, list(ci_lowerloa), c='grey', ls='--')
# Plot the confidence intervals' caps
x_range = [left - domain * 0.025, left + domain * 0.025]
ax.plot(x_range, [ci_upperloa[1]] * 2, c='grey', ls='--')
ax.plot(x_range, [ci_upperloa[0]] * 2, c='grey', ls='--')
ax.plot(x_range, [ci_bias[1]] * 2, c='grey', ls='--')
ax.plot(x_range, [ci_bias[0]] * 2, c='grey', ls='--')
ax.plot(x_range, [ci_lowerloa[1]] * 2, c='grey', ls='--')
ax.plot(x_range, [ci_lowerloa[0]] * 2, c='grey', ls='--')
# Show plot
plt.show()

6 Euser Analysis?

Another paper that expanded upon Bland & Altman’s ideas was Euser, Dekker & Le Cessie (2008), which introduced ‘Euser analysis’:

Euser analysis also accounts for heteroscedasticity - just like Giavarina analysis does - except that it does so via a logarithmic transformation. While this approach was mentioned in Bland & Altman (1986), it was Euser, Dekker & Le Cessie who published a method to transform the data back into the native space and calculate a meaningful coefficient of variation[3]. A tutorial is over here.
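
As a very rough flavour of the idea (this is not the full Euser method; see the linked tutorial for that), a Bland-Altman analysis can be run on the natural logarithms of the data, with the bias and limits of agreement then exponentiated so that they describe the ratio of the two methods rather than the difference between them:

# Bland-Altman analysis on log-transformed data (a minimal sketch)
log_df = np.log(df)
log_diffs = log_df.diff(axis=1).iloc[:, -1]
log_bias = np.mean(log_diffs)
log_sd = np.std(log_diffs, ddof=1)
# Back-transform to get the bias and limits of agreement as ratios of Method A to Method B
print(np.exp(log_bias - 1.96 * log_sd), np.exp(log_bias), np.exp(log_bias + 1.96 * log_sd))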

[3] Euser AM, Dekker FW, Le Cessie S. A practical approach to Bland-Altman plots and variation coefficients for log transformed variables. J Clin Epidemiol. 2008;61(10):978–82. DOI: 10.1016/j.jclinepi.2007.11.003
