Introduction to Biostatistics

1 Learning objectives

1.1 Learning objectives

  • Describe family-wise error rate (FWER) methods of multiple comparison correction
  • Describe false discovery rate (FDR) methods of multiple comparison correction
  • Compare and contrast the methods

2 Multiple comparisons

2.1 Multiple comparisons

  • Multiple groups compared to control
  • Multiple time points
  • Multiple outcomes
  • Multiple tests across space
  • Subgroups
  • Stopping rules

2.2 How science “works” sometimes

https://xkcd.com/882/

2.9 Problem: Inflated type I error

  • Every test is a chance to make a type I error
    • Find an effect that doesn’t really exist
    • Nominally \(\alpha\) per test: 5% for \(\alpha\) = .05
  • \((1 - \alpha)\) probability of no type I error per test
    • \((1 - \alpha)^m\) probability of no type I error across \(m\) tests
    • 20 tests at \(\alpha\) = .05
      • \((1 - 0.05)^{20} = 0.358\) probability of no type I error across all 20 tests
      • \(1 - 0.358 = 0.642\) probability of at least one type I error
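The arithmetic above can be checked directly in R; a minimal sketch:

```r
# FWER arithmetic: probability of at least one type I error across m tests
alpha <- 0.05
m <- 20
p_none <- (1 - alpha)^m   # probability of no type I error in any of the m tests
p_any  <- 1 - p_none      # probability of at least one type I error
round(c(p_none, p_any), 3)  # 0.358 0.642
```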

2.10 Two approaches to corrections

  • Reduce family-wise error rate (FWER) back to nominal \(\alpha\) level
    • Probability of at least one type I error
    • More conservative, less powerful
  • Maintain false discovery rate (FDR) at a specified level (typically higher than nominal \(\alpha\))
    • Proportion of type I errors out of total significant effects
    • FDR = false positives / (false positives + true positives)
    • More powerful, but allow more type I errors

3 Family-wise error rate (FWER)

3.1 Type I error rate

  • P(type I error for one test) = \(\alpha\)
    • P(no type I error for one test) = \(1 - \alpha\)
  • P(at least one type I error for two tests) = \(1 - (1 - \alpha)^2\)
    • P(no type I error for two tests) = \((1 - \alpha)*(1 - \alpha) = (1 - \alpha)^2\)
  • P(at least one type I error for \(m\) tests) = \(1 - (1 - \alpha)^m\) = FWER
    • P(no type I error for \(m\) tests) = \((1 - \alpha)^m\)

3.2 FWER by \(\alpha\) and number of tests

Code
library(ggplot2)

x <- 1:100
y05 <- 1 - (1 - .05)^x
y01 <- 1 - (1 - .01)^x
y001 <- 1 - (1 - .001)^x
ew <- data.frame(x, y05, y01, y001)
ggplot(data = ew, aes(x = x, y = y05)) +
  geom_line(linewidth = 1) +
  geom_line(aes(x = x, y = y01), linetype = "dashed", linewidth = 1) +
  geom_line(aes(x = x, y = y001), linetype = "dotted", linewidth = 1) +
  labs(x = "Number of tests", y = "FWER") +
  annotate("text", x = 80, y = 0.9, label = "alpha = .05") +
  annotate("text", x = 80, y = 0.45, label = "alpha = .01") +
  annotate("text", x = 80, y = 0.15, label = "alpha = .001") +
  geom_hline(yintercept = 0.5, color = "red")
  • FWER = P(at least one type I error)

3.3 Independent vs correlated tests

  • Correlated tests
    • Multiple comparisons to the same group (e.g., control)
    • Correlated outcomes: time, space
    • Stopping rules (time, same sample)
  • Uncorrelated tests
    • Multiple outcomes (unless correlated in some way)
    • Subgroup analysis
    • Specific group comparisons that are orthogonal

3.4 Independent vs correlated tests

  • Correlated
    • Bonferroni (1936)
    • Scheffe (1959)
    • Holm (1979)
  • Independent
    • Tukey (1949)
    • Sidak (1967)
    • Hochberg (1988)
    • Hommel (1988)

3.5 Bonferroni

  • Simplest adjustment
  • Most conservative (FWER \(\le \alpha\))
  • Two equivalent options
    • Adjust the \(\alpha\)
    • Adjust the \(p\)-values

3.6 Bonferroni: Adjust \(\alpha\)

  • Divide nominal \(\alpha\) level by the number of tests
    • Evaluate each test at that \(\alpha\)
  • Example: \(\alpha\) = .05 for 3 tests with \(p\)-values of .04, .02, .01
    • Evaluate each test at \(\alpha\) = .05/3 = .016667
    • Observed \(p\)-value = .04 > .016667 (NS at \(\alpha\) = .05)
    • Observed \(p\)-value = .02 > .016667 (NS at \(\alpha\) = .05)
    • Observed \(p\)-value = .01 < .016667 (significant at \(\alpha\) = .05)

3.7 Bonferroni: Adjust \(p\)-values

  • Multiply each observed \(p\)-value by the number of tests
    • Evaluate that \(p\)-value against nominal \(\alpha\)
  • Example: \(\alpha\) = .05 for 3 tests with \(p\)-values of .04, .02, .01
    • Evaluate each test with \(p\)-value \(\times\) 3
    • .04 \(\times\) 3 = .12 \(\rightarrow\) NS at \(\alpha\) = .05
    • .02 \(\times\) 3 = .06 \(\rightarrow\) NS at \(\alpha\) = .05
    • .01 \(\times\) 3 = .03 \(\rightarrow\) significant at \(\alpha\) = .05
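A quick sketch in R confirming that the two Bonferroni options lead to identical decisions for the example \(p\)-values:

```r
p <- c(.04, .02, .01)
alpha <- .05
m <- length(p)

# Option 1: compare each p-value to the adjusted alpha
p < alpha / m           # FALSE FALSE TRUE

# Option 2: compare adjusted p-values (capped at 1) to the nominal alpha
pmin(p * m, 1) < alpha  # FALSE FALSE TRUE
```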

3.8 Holm

  • Simple adjustment
  • Less conservative than Bonferroni (but FWER \(\le \alpha\))
  • Two equivalent options
    • Adjust the \(\alpha\)
    • Adjust the \(p\)-values

3.9 Holm: Adjust \(\alpha\)

  • Sort \(p\)-values from smallest to largest
    • Compare first (smallest) \(p\)-value to \(\frac{\alpha}{m}\)
    • 2nd \(p\)-value vs \(\frac{\alpha}{m-1}\), 3rd \(p\)-value vs \(\frac{\alpha}{m-2}\), etc.
    • Stop when a test is not significant
  • Example: \(\alpha\) = .05 for 3 tests with \(p\)-values of .04, .02, .01
    • Observed \(p\)-value = .01 < .05/3 = .016667 (significant at \(\alpha\) = .05)
    • Observed \(p\)-value = .02 < .05/2 = .025 (significant at \(\alpha\) = .05)
    • Observed \(p\)-value = .04 < .05/1 = .05 (significant at \(\alpha\) = .05)

3.10 Holm: Adjust \(p\)-value

  • Sort \(p\)-values from smallest to largest
    • Multiply each observed \(p\)-value by the number of remaining tests (including the current one)
    • Carry the running maximum forward so adjusted \(p\)-values never decrease
    • Evaluate that \(p\)-value against nominal \(\alpha\)
  • Example: \(\alpha\) = .05 for 3 tests with \(p\)-values of .04, .02, .01
    • .01 \(\times\) 3 = .03 \(\rightarrow\) significant at \(\alpha\) = .05
    • .02 \(\times\) 2 = .04 \(\rightarrow\) significant at \(\alpha\) = .05
    • .04 \(\times\) 1 = .04 \(\rightarrow\) significant at \(\alpha\) = .05
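The Holm adjustment can be written out by hand; a sketch, where `cummax()` supplies the running maximum that keeps adjusted values monotone (`p.adjust()` does the same internally):

```r
p <- sort(c(.04, .02, .01))  # .01 .02 .04
m <- length(p)

# multiplier shrinks from m down to 1 as rank increases
holm_manual <- pmin(cummax(p * (m - seq_along(p) + 1)), 1)
holm_manual          # 0.03 0.04 0.04
p.adjust(p, "holm")  # identical result
```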

3.11 Adjustments in R: \(p\)-values

  • p.adjust() function in stats package
    • p: list of \(p\)-values for the tests
    • method: Method of adjustment
      • "holm", "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", "none"
    • n: number of tests
p.adjust(c(.04, .02, .01), 
         "bonferroni", 
         n = 3)
[1] 0.12 0.06 0.03
p.adjust(c(.04, .02, .01), 
         "holm", 
         n = 3)
[1] 0.04 0.04 0.03

4 False discovery rate (FDR)

4.1 False discovery rate (FDR)

  • Independent or positively correlated tests
    • Benjamini and Hochberg (1995)
  • Any dependency structure
    • Benjamini and Yekutieli (2001)

4.2 False discovery rate

                     H0 false         H0 true          Total
  Significant test   True positive    False positive   Total signif tests
  NS test            False negative   True negative    Total NS tests
  Total              Total H0 false   Total H0 true    Total # of tests

4.3 False discovery rate: What do we know?

                     H0 false         H0 true          Total
  Significant test   True positive    False positive   *Total signif tests*
  NS test            False negative   True negative    *Total NS tests*
  Total              Total H0 false   Total H0 true    *Total # of tests*

  • We observe only the starred totals, not the individual cells

4.4 False discovery rate: What is FDR?

                     H0 false          H0 true           Total
  Significant test   *True positive*   *False positive*  Total signif tests
  NS test            False negative    True negative     Total NS tests
  Total              Total H0 false    Total H0 true     Total # of tests

  • FDR = false positives / (false positives + true positives), i.e., the starred cells in the significant-test row
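As a worked sketch, using the counts that the simulation later in this section produces (93 true positives and 50 false positives among the significant tests):

```r
true_pos  <- 93  # significant tests where H0 is false
false_pos <- 50  # significant tests where H0 is true
fdr <- false_pos / (false_pos + true_pos)
round(fdr, 2)    # 0.35
```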

4.5 Distributions of \(p\)-values

  • \(H_0\) is true (no effect: 900)
Code
library(ggplot2)
library(dplyr)

obs <- 100                # obs in each single regression
Nloops <- 900             # number of experiments
output <- numeric(Nloops) # p-values of the estimated slope from Nloops experiments

for (i in seq_along(output)) {
  x <- rnorm(obs)
  y <- rnorm(obs)
  # x and y are independent, so the null hypothesis is true
  output[i] <- summary(lm(y ~ x))$coefficients[2, 4] # grab the p-value of the slope
}

h0true <- as.data.frame(output) %>%
  mutate(true = 1)

ggplot(data = h0true, aes(x = output)) +
  geom_histogram(breaks = seq(from = 0, to = 1, by = .05)) +
  ylim(0, 100) +
  geom_vline(xintercept = 0.05, 
             color = "red", 
             linewidth = 1, 
             linetype = "dashed") +
  annotate("text", 
           x = .2, y = 75, 
           label = "< False positives",
           size = 8) +
  annotate("text", 
           x = .75, y = 60, 
           label = "True negatives", 
           size = 8)

  • 50 of the 900 null tests have \(p < .05\) (false positives)

  • \(H_0\) is false (yes effect: 100)
Code
obs <- 100                # obs in each single regression
Nloops <- 100             # number of experiments
output <- numeric(Nloops) # p-values of the estimated slope from Nloops experiments

for (i in seq_along(output)) {
  x <- rnorm(obs)
  y <- rnorm(obs, mean = .5 * x + rnorm(obs, 0, 1))
  # x and y are related, so the null hypothesis is false
  output[i] <- summary(lm(y ~ x))$coefficients[2, 4] # grab the p-value of the slope
}

h0false <- as.data.frame(output) %>%
  mutate(true = 0)

ggplot(data = h0false, aes(x = output)) +
  geom_histogram(breaks = seq(from = 0, to = 1, by = .05)) +
  ylim(0, 100) +
  geom_vline(xintercept = 0.05, 
             color = "red", 
             linewidth = 1, 
             linetype = "dashed") +
  annotate("text", 
           x = .2, y = 75, 
           label = "< True positives",
           size = 8) +
  annotate("text", 
           x = .75, y = 60, 
           label = "False negatives", 
           size = 8)

  • 93 of the 100 non-null tests have \(p < .05\) (true positives)

4.6 Combined distribution of \(p\)-values

Code
h0all <- rbind(h0true, h0false)
ggplot(data = h0all, aes(x = output)) +
  geom_histogram(breaks = seq(from = 0, to = 1, by = .05)) +
  geom_vline(xintercept = 0.05, 
             color = "red", 
             linewidth = 1, 
             linetype = "dashed") +
  geom_hline(yintercept = sum(h0all$output > .05)/19, # average count in the 19 bins above .05
             color = "blue", 
             linewidth = 1)

4.7 Ordered \(p\)-values

Code
h0all <- h0all %>% arrange(output)
ggplot(data = h0all, 
       aes(x = 1:nrow(h0all), 
           y = output)) +
  geom_point(alpha = .2) +
  geom_hline(yintercept = .05, color = "blue", linewidth = 1) +
  labs(x = "Order", y = "p-value") +
  theme(legend.position="none")

4.8 Ordered \(p\)-values

Code
h0all <- h0all %>% arrange(output)
ggplot(data = h0all, 
       aes(x = 1:nrow(h0all), 
           y = output,
           color = as.factor(true))) +
  geom_point(alpha = .3) +
  geom_hline(yintercept = .05, color = "blue", linewidth = 1) +
  labs(x = "Order", y = "p-value") +
  theme(legend.position="none")

4.9 No correction: FDR = 50/143 = 0.35

Code
h0all <- h0all %>% arrange(output)
ggplot(data = h0all, 
       aes(x = 1:nrow(h0all), 
           y = output,
           color = as.factor(true))) +
  geom_point() +
  labs(x = "Order", y = "p-value") +
  geom_hline(yintercept = .05, color = "blue", linewidth = 1) +
  xlim(0,150) +
  ylim(0,.06) +
  theme(legend.position="none")

4.10 Corrected B-H FDR = .05

Code
h0all <- h0all %>% arrange(output)
ggplot(data = h0all, 
       aes(x = 1:nrow(h0all), 
           y = output,
           color = as.factor(true))) +
  geom_point() +
  labs(x = "Order", y = "p-value") +
  geom_hline(yintercept = .05, 
             color = "blue", 
             linewidth = 1) +
  geom_abline(slope = .05/1000, 
              color = "blue", 
              linewidth = 1, 
              linetype = "dashed") +
  xlim(0,150) +
  ylim(0,.06) +
  annotate("text", x = 125, y = 0.01, label = "slope = .05/1000") +
  theme(legend.position="none")

4.11 Corrected B-H FDR = .10

Code
h0all <- h0all %>% arrange(output)
ggplot(data = h0all, 
       aes(x = 1:nrow(h0all), 
           y = output,
           color = as.factor(true))) +
  geom_point() +
  labs(x = "Order", y = "p-value") +
  geom_hline(yintercept = .05, color = "blue", linewidth = 1) +
  geom_abline(slope = .1/1000, color = "blue", linewidth = 1, linetype = "dashed") +
  xlim(0,150) +
  ylim(0,.06) +
  annotate("text", x = 125, y = 0.01, label = "slope = .10/1000") +
  theme(legend.position="none")

4.12 How many significant tests?

  • Uncorrected
Code
h0all <- h0all %>% mutate(row = 1:nrow(h0all),
                          signif = ifelse(output < .05, 1, 0),
                          # point-by-point cutoffs; strict B-H rejects all tests up to
                          # the largest rank under the line, so it can reject slightly more
                          fdr10 = ifelse(output < row*(.10/1000), 1, 0),
                          fdr05 = ifelse(output < row*(.05/1000), 1, 0))
uncorrected <- table(h0all$signif, h0all$true)
rownames(uncorrected) <- c("NS", "Signif")
colnames(uncorrected) <- c("H0 false", "H0 true")
addmargins(uncorrected)
        
         H0 false H0 true  Sum
  NS            7     850  857
  Signif       93      50  143
  Sum         100     900 1000
  • Corrected (FDR = .10 and .05)
Code
fdr10 <- table(h0all$fdr10, h0all$true)
rownames(fdr10) <- c("NS", "Signif")
colnames(fdr10) <- c("H0 false", "H0 true")
addmargins(fdr10)
        
         H0 false H0 true  Sum
  NS           23     890  913
  Signif       77      10   87
  Sum         100     900 1000
Code
fdr05 <- table(h0all$fdr05, h0all$true)
rownames(fdr05) <- c("NS", "Signif")
colnames(fdr05) <- c("H0 false", "H0 true")
addmargins(fdr05)
        
         H0 false H0 true  Sum
  NS           35     895  930
  Signif       65       5   70
  Sum         100     900 1000

4.13 B-H FDR: Adjust \(\alpha\) (critical \(p\)-value)

  • Sort \(p\)-values from smallest to largest
    • Compare first (smallest) \(p\)-value to \(\frac{FDR}{m}\)
    • 2nd \(p\)-value vs \(\frac{2\times FDR}{m}\), 3rd \(p\)-value vs \(\frac{3 \times FDR}{m}\), etc.
    • Find the largest \(p\)-value that falls below its threshold; it and all smaller \(p\)-values are significant (even if an intermediate comparison fails)
  • Example: FDR = .05 for 3 tests with \(p\)-values of .04, .02, .01
    • Observed \(p\)-value = .01 < 1\(\times\).05/3 = .0167 (significant at FDR = .05)
    • Observed \(p\)-value = .02 < 2\(\times\).05/3 = .0333 (significant at FDR = .05)
    • Observed \(p\)-value = .04 < 3\(\times\).05/3 = .05 (significant at FDR = .05)

4.14 B-H FDR: Adjust \(p\)-value

  • Sort \(p\)-values from smallest to largest
    • Multiply each observed \(p\)-value by \(\frac{total\ number\ of\ tests}{rank\ of\ the\ test}\)
    • Take the running minimum from the largest \(p\)-value downward so adjusted values are non-decreasing in rank
    • Evaluate that \(p\)-value against the selected FDR
  • Example: FDR = .05 for 3 tests with \(p\)-values of .04, .02, .01
    • .01 \(\times\) 3/1 = .03 \(\rightarrow\) significant at FDR = .05
    • .02 \(\times\) 3/2 = .03 \(\rightarrow\) significant at FDR = .05
    • .04 \(\times\) 3/3 = .04 \(\rightarrow\) significant at FDR = .05
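The same rank-based multiplication, followed by a running minimum taken from the largest \(p\)-value downward, reproduces `p.adjust()`'s "BH" output; a sketch with the example values:

```r
p <- sort(c(.04, .02, .01))  # .01 .02 .04
m <- length(p)

# adjusted value: p * m / rank, then enforce monotonicity from the top
bh_manual <- pmin(rev(cummin(rev(p * m / seq_along(p)))), 1)
bh_manual          # 0.03 0.03 0.04
p.adjust(p, "BH")  # identical result
```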

4.15 Adjustments in R: \(p\)-values

  • p.adjust() function in stats package
    • p: list of \(p\)-values for the tests
    • method: Method of adjustment
      • "holm", "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", "none"
    • n: number of tests
bh_pvalues <- p.adjust(c(.04, .02, .01), 
                       "BH", 
                       n = 3)
bh_pvalues <- as.data.frame(bh_pvalues)

4.16 Adjustments in R: \(p\)-values

  • Adjusted \(p\)-values
    • A test is significant if this value is < selected FDR
bh_pvalues
  bh_pvalues
1       0.04
2       0.03
3       0.03

4.17 Adjustments in R: \(p\)-values

  • p.adjust() function in stats package
    • p: list of \(p\)-values for the tests
    • method: Method of adjustment
      • "holm", "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", "none"
    • n: number of tests
bhsim_pvalues <- p.adjust(h0all$output,
                          "BH",
                          n = 1000)
bhsim_pvalues <- as.data.frame(bhsim_pvalues)

4.18 Adjustments in R: \(p\)-values

  • Adjusted \(p\)-values
    • A test is significant if this value is < selected FDR
head(bhsim_pvalues, 10)
    bhsim_pvalues
1  0.000003150246
2  0.000006579863
3  0.000007622379
4  0.000021876620
5  0.000048376563
6  0.000059332784
7  0.000059332784
8  0.000085410826
9  0.000119189805
10 0.000173697523
917  0.993831991573
918  0.993831991573
919  0.993831991573
920  0.993831991573
921  0.993831991573
922  0.993831991573
923  0.993831991573
924  0.993831991573
925  0.993831991573
926  0.993831991573
927  0.993831991573
928  0.993831991573
929  0.993831991573
930  0.993831991573
931  0.993831991573
932  0.993831991573
933  0.993831991573
934  0.993831991573
935  0.993831991573
936  0.993831991573
937  0.993831991573
938  0.993831991573
939  0.993831991573
940  0.993831991573
941  0.993831991573
942  0.993831991573
943  0.993831991573
944  0.993831991573
945  0.993831991573
946  0.993831991573
947  0.993831991573
948  0.993831991573
949  0.993831991573
950  0.993831991573
951  0.993831991573
952  0.993831991573
953  0.993831991573
954  0.993831991573
955  0.993831991573
956  0.993831991573
957  0.993831991573
958  0.993831991573
959  0.993831991573
960  0.993831991573
961  0.993831991573
962  0.993831991573
963  0.993831991573
964  0.993831991573
965  0.993831991573
966  0.993831991573
967  0.993831991573
968  0.993831991573
969  0.993831991573
970  0.993831991573
971  0.993831991573
972  0.993831991573
973  0.993831991573
974  0.993831991573
975  0.993831991573
976  0.993831991573
977  0.993831991573
978  0.993831991573
979  0.993831991573
980  0.993831991573
981  0.993831991573
982  0.993831991573
983  0.993831991573
984  0.993831991573
985  0.993831991573
986  0.993831991573
987  0.993831991573
988  0.993831991573
989  0.993831991573
990  0.993831991573
991  0.993831991573
992  0.993831991573
993  0.993831991573
994  0.993831991573
995  0.993831991573
996  0.993831991573
997  0.995470620852
998  0.997636846581
999  0.998373657778
1000 0.999450260044

5 Summary

5.1 Summary of results

Test   No correction   Bonferroni   Holm   FDR (B-H)
1      .04             .12          .04    .04
2      .02             .06          .04    .03
3      .01             .03          .03    .03

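The three corrections in the table can be sketched in plain Python (a minimal sketch for illustration; real analyses would typically use `p.adjust()` in R or `statsmodels.stats.multitest` in Python):

```python
def bonferroni(pvals):
    """Bonferroni: multiply each p-value by the number of tests m, cap at 1."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Holm step-down: the i-th smallest p-value is multiplied by (m - i + 1),
    then a running maximum keeps the adjusted values monotone."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj, running = [0.0] * m, 0.0
    for rank, i in enumerate(order):          # rank 0 = smallest p
        running = max(running, min(1.0, (m - rank) * pvals[i]))
        adj[i] = running
    return adj

def bh(pvals):
    """Benjamini-Hochberg: the i-th smallest p-value is multiplied by m / i,
    then a running minimum (from the largest p down) keeps values monotone."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj, running = [0.0] * m, 1.0
    for rank in range(m - 1, -1, -1):         # walk from largest p down
        i = order[rank]
        running = min(running, min(1.0, pvals[i] * m / (rank + 1)))
        adj[i] = running
    return adj

p = [0.04, 0.02, 0.01]
print([round(x, 2) for x in bonferroni(p)])  # [0.12, 0.06, 0.03]
print([round(x, 2) for x in holm(p)])        # [0.04, 0.04, 0.03]
print([round(x, 2) for x in bh(p)])          # [0.04, 0.03, 0.03]
```

Running it on the table's p-values (.04, .02, .01) reproduces the three adjusted columns above.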
5.2 Which should I use? FWER or FDR?

  • For just a few tests, and/or very small \(p\)-values, it probably doesn’t matter which you use
  • For many tests (dozens to thousands)
    • FWER methods are too conservative: you miss real effects
    • FDR methods are the better choice
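The conservativeness gap shows up clearly in a small simulation (a sketch with invented proportions: 900 true nulls and 100 real effects among \(m = 1000\) tests):

```python
import random

random.seed(1)
ALPHA, m = 0.05, 1000

# Invented mix for illustration: 900 true nulls (uniform p-values)
# and 100 real effects (p-values packed near zero).
pvals = ([random.random() for _ in range(900)]
         + [random.random() * 0.001 for _ in range(100)])

# Bonferroni controls FWER: reject only if p <= alpha / m.
bonf_rejects = sum(p <= ALPHA / m for p in pvals)

# Benjamini-Hochberg controls FDR: reject the k smallest p-values,
# where k is the largest rank i with p_(i) <= (i / m) * alpha.
k = 0
for i, p in enumerate(sorted(pvals), start=1):
    if p <= i / m * ALPHA:
        k = i

print(f"Bonferroni rejects {bonf_rejects} tests; BH rejects {k}")
```

With this setup, Bonferroni's fixed threshold of \(.05/1000 = 5 \times 10^{-5}\) misses most of the real effects, while BH's adaptive threshold recovers nearly all of them.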

5.3 What FDR level? It depends

  • What is more harmful?
    • False positives: significant test when \(H_0\) is true
    • False negatives: non-significant (NS) test when \(H_0\) is false

5.4 What do you correct for?

  • The hypothesis that you’re testing
    • If there are multiple hypotheses in a project, don’t correct for all tests in the project
    • Only correct for all tests about a specific hypothesis
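Concretely, corrections are applied within each family of tests, not pooled across the whole project (the family names and p-values below are hypothetical):

```python
# Hypothetical families of tests, one family per hypothesis in the project.
families = {
    "hypothesis_A": [0.01, 0.04],  # two tests of hypothesis A
    "hypothesis_B": [0.03],        # one test of hypothesis B
}

# Correct within each family: m is the number of tests
# of that hypothesis only, not all tests in the project.
adjusted = {}
for name, pvals in families.items():
    m = len(pvals)
    adjusted[name] = [min(1.0, p * m) for p in pvals]  # Bonferroni within family

for name, adj in adjusted.items():
    print(name, [round(a, 2) for a in adj])
# hypothesis_A [0.02, 0.08]
# hypothesis_B [0.03]
```

Pooling all three tests into one family would instead multiply every p-value by 3, penalizing each hypothesis for tests that have nothing to do with it.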

5.5 Critique, commentary, and nuance

  • Daniel Lakens’ chapter on “error control”
    • Union-intersection approach vs intersection-union approach
      • “At least one test must be significant to make a claim” vs “All tests must be significant to make a claim”
    • Optional stopping
    • Positive predictive value

6 In-class activities

6.1 In-class activities

  • Do some corrections for multiple tests
  • Compare the findings and discuss