Dunnett's test is a specialized post-hoc analysis used after a significant ANOVA to compare multiple treatment groups against a single control group. Unlike other post-hoc tests that compare all possible group pairs, Dunnett's test focuses specifically on treatment-to-control comparisons while controlling the familywise error rate, making it ideal for experimental research designs.
💡 Pro Tip: Use Dunnett's test when you have a clear control group. If you need to compare all groups to each other, use our Tukey's HSD Test Calculator instead. Always run a significant ANOVA first!
Ready to compare your treatments to control? Try the example data below to see Dunnett's test in action, or upload your experimental data to discover which treatments show significant effects compared to your control condition.
Dunnett's Test is a multiple comparison procedure used to compare several treatments against a single control group. It maintains the family-wise error rate while providing more statistical power than methods that compare all pairs.
Test Statistic:

$$t_i = \frac{\bar{X}_i - \bar{X}_{\text{control}}}{SE}$$

Where:

- $\bar{X}_i$ is the mean of treatment group $i$
- $\bar{X}_{\text{control}}$ is the mean of the control group
- $SE$ is the standard error of the difference between the treatment mean and the control mean

The standard error of the difference between means is calculated as:

$$SE = \sqrt{s_p^2\left(\frac{1}{n_i} + \frac{1}{n_{\text{control}}}\right)}$$

Where $s_p^2$ is the pooled variance, combining the variances of all groups:

$$s_p^2 = \frac{\sum_{j=1}^{k}(n_j - 1)\, s_j^2}{\sum_{j=1}^{k}(n_j - 1)}$$

In ANOVA, the pooled variance corresponds to the within-group variance:

$$s_p^2 = MS_{\text{within}} = \frac{SS_{\text{within}}}{N - k}$$

where $N$ is the total sample size and $k$ is the number of groups (including the control).
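As a minimal sketch of how these formulas fit together (plain NumPy; the function name dunnett_t_statistics is purely illustrative, not a library routine), the pooled variance, the standard error of each difference, and the test statistics can be computed directly from the raw group samples:

import numpy as np

def dunnett_t_statistics(control, treatments):
    # 'control' is a 1-D array of control observations;
    # 'treatments' is a list of 1-D arrays, one per treatment group.
    groups = [np.asarray(control)] + [np.asarray(g) for g in treatments]
    # Pooled variance = within-group sum of squares / (N - k),
    # i.e. the ANOVA within-group mean square.
    df_error = sum(len(g) for g in groups) - len(groups)
    s2_pooled = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / df_error
    control_mean = groups[0].mean()
    t_stats = []
    for g in groups[1:]:
        # Standard error of the difference between treatment and control means
        se = np.sqrt(s2_pooled * (1 / len(g) + 1 / len(groups[0])))
        t_stats.append((g.mean() - control_mean) / se)
    return np.array(t_stats), s2_pooled, df_error

Each statistic is then compared against the Dunnett critical value for the chosen $\alpha$, the number of treatment groups, and the error degrees of freedom $N - k$.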
| Group | Data | N | Mean | SD |
|---|---|---|---|---|
| Control | 8.5, 7.8, 8, 8.2, 7.9 | 5 | 8.08 | 0.28 |
| Treatment A | 9.1, 9.3, 8.9, 9, 9.2 | 5 | 9.10 | 0.16 |
| Treatment B | 7.2, 7.5, 7, 7.4, 7.3 | 5 | 7.28 | 0.19 |
| Treatment C | 10.1, 10.3, 10, 10.2, 9.9 | 5 | 10.10 | 0.16 |
For each treatment group $i$ vs. control, we first need the pooled variance and the standard error of the difference (all groups have $n = 5$):

$$s_p^2 = \frac{0.308 + 0.100 + 0.148 + 0.100}{20 - 4} = 0.041, \qquad SE = \sqrt{0.041\left(\tfrac{1}{5} + \tfrac{1}{5}\right)} \approx 0.128$$

For each treatment vs. control:

$$t_A = \frac{9.10 - 8.08}{SE} \approx 7.96, \qquad t_B = \frac{7.28 - 8.08}{SE} \approx -6.25, \qquad t_C = \frac{10.10 - 8.08}{SE} \approx 15.78$$

Critical value for $\alpha = 0.05$ (two-sided), $k = 3$ treatments, and $df = 16$ error degrees of freedom: $d_{\text{crit}} \approx 2.59$ (from Dunnett's table).

Compare each $|t_i|$ with the critical value: 7.96, 6.25, and 15.78 all exceed 2.59.

All treatments show significant differences from the control group ($p < 0.05$): Treatments A and C raise the response relative to control, while Treatment B lowers it.
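The arithmetic above can be reproduced with a few lines of NumPy; the printed statistics should match the hand-computed values (up to rounding):

import numpy as np

control = np.array([8.5, 7.8, 8.0, 8.2, 7.9])
treatments = {
    "Treatment A": np.array([9.1, 9.3, 8.9, 9.0, 9.2]),
    "Treatment B": np.array([7.2, 7.5, 7.0, 7.4, 7.3]),
    "Treatment C": np.array([10.1, 10.3, 10.0, 10.2, 9.9]),
}
groups = [control] + list(treatments.values())
df_error = sum(len(g) for g in groups) - len(groups)                       # 20 - 4 = 16
s2_pooled = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / df_error   # 0.041
se = np.sqrt(s2_pooled * (1 / 5 + 1 / 5))                                  # ~0.128
for name, g in treatments.items():
    print(f"{name}: t = {(g.mean() - control.mean()) / se:.2f}")
# Expect approximately 7.96, -6.25, and 15.78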
library(multcomp)
library(tidyverse)
# Data preparation
df <- tibble(
Group = rep(c("Control", "Treatment A", "Treatment B", "Treatment C"), each = 5),
Response = c(8.5, 7.8, 8, 8.2, 7.9,
9.1, 9.3, 8.9, 9, 9.2,
7.2, 7.5, 7, 7.4, 7.3,
10.1, 10.3, 10, 10.2, 9.9)
)
# The multcomp package requires the Group variable to be a factor;
# Dunnett contrasts compare each level against the first factor level ("Control")
df$Group <- as.factor(df$Group)
# Perform one-way ANOVA
model <- aov(Response ~ Group, data = df)
# Perform Dunnett's test
dunnett_test <- glht(model, linfct = mcp(Group = "Dunnett"))
summary(dunnett_test)

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd
import pandas as pd
# Create example data
data = pd.DataFrame({
'Response': [8.5, 7.8, 8.0, 8.2, 7.9, # Control
9.1, 9.3, 8.9, 9.0, 9.2, # Treatment A
7.2, 7.5, 7.0, 7.4, 7.3, # Treatment B
10.1, 10.3, 10.0, 10.2, 9.9], # Treatment C
'Group': np.repeat(['Control', 'Treatment A',
'Treatment B', 'Treatment C'], 5)
})
# Perform multiple pairwise comparisons using Tukey's test
# (Note: Dunnett's test focuses only on comparisons against a control group,
# but statsmodels does not provide a direct implementation of Dunnett's test.)
results = pairwise_tukeyhsd(data['Response'],
data['Group'])
print(results)
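If a direct Dunnett implementation is preferred in Python, recent SciPy versions (1.11 and later) include scipy.stats.dunnett; a minimal sketch with the same example data:

from scipy.stats import dunnett

control = [8.5, 7.8, 8.0, 8.2, 7.9]
treatment_a = [9.1, 9.3, 8.9, 9.0, 9.2]
treatment_b = [7.2, 7.5, 7.0, 7.4, 7.3]
treatment_c = [10.1, 10.3, 10.0, 10.2, 9.9]

# Each positional argument is one treatment sample; the control group is keyword-only
result = dunnett(treatment_a, treatment_b, treatment_c, control=control)
print(result.statistic)   # t statistics for each treatment vs. control
print(result.pvalue)      # p-values adjusted for the three comparisons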