Can CIs be "non-significant" when the p-value is "significant"? #584
Labels: bug 🐜
I think we might have a mistake in the CI method for RR. @bwiernik, we currently have:

```r
SE_logRR <- sqrt(p1 / ((1 - p1) * n1)) + sqrt(p2 / ((1 - p2) * n2))
Z_logRR <- stats::qnorm(alpha / 2, lower.tail = FALSE)
confs <- exp(log(RR) + c(-1, 1) * SE_logRR * Z_logRR)
```

But I think that first line should be:

```r
SE_logRR <- sqrt((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
```

This gives identical results to:

```r
tab <- c(13, 6, 16, 25) |>
  matrix(nrow = 2)
effectsize::riskratio(tab)
#> Risk ratio |       95% CI
#> -------------------------
#>       1.75 | [1.07, 2.86]
```
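The corrected expression is easy to sanity-check by hand. A minimal sketch in Python, assuming the same 13/19 vs 16/41 counts as in the reprex:

```python
import math

# Counts from the reprex: 13 of 19 events in group 1, 16 of 41 in group 2
x1, n1 = 13, 19
x2, n2 = 16, 41
p1, p2 = x1 / n1, x2 / n2

rr = p1 / p2
# Corrected Wald SE of log(RR): square root of the SUM of the variance terms
se_log_rr = math.sqrt((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
z = 1.959964  # ~ qnorm(0.975)
lo = math.exp(math.log(rr) - z * se_log_rr)
hi = math.exp(math.log(rr) + z * se_log_rr)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 1.75 1.07 2.86
```

This reproduces the [1.07, 2.86] interval shown above.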
```r
PropCIs::riskscoreci(13, 19, 16, 41, conf.level = .95)
#>
#> data:
#>
#> 95 percent confidence interval:
#>  1.046215 2.860102
#>
```
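For contrast, the current sum-of-square-roots expression yields a much wider interval that crosses 1. A quick sketch, again assuming the 13/19 vs 16/41 counts:

```python
import math

x1, n1 = 13, 19  # assumed counts from the reprex
x2, n2 = 16, 41
p1, p2 = x1 / n1, x2 / n2

rr = p1 / p2
# Current (suspect) formula: SUM of square roots instead of sqrt of the sum
se_bad = math.sqrt(p1 / ((1 - p1) * n1)) + math.sqrt(p2 / ((1 - p2) * n2))
z = 1.959964  # ~ qnorm(0.975)
lo = math.exp(math.log(rr) - z * se_bad)
hi = math.exp(math.log(rr) + z * se_bad)
print(round(lo, 2), round(hi, 2))  # 0.71 4.34
```

That would produce a CI overlapping 1 even when the chi-square test is significant, which matches the symptom reported in the issue.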
```r
d <- tab |> t() |>
  as.data.frame() |>
  transform(g = 0:1)
m <- glm(cbind(V1, V2) ~ g, data = d, family = binomial())
emmeans::emmeans(m, ~ g) |>
  emmeans::regrid(transform = "log") |>
  emmeans::contrast(method = "pairwise", type = "resp", infer = TRUE)
#> contrast ratio    SE  df asymp.LCL asymp.UCL null z.ratio p.value
#> g0 / g1   1.75 0.438 Inf      1.07      2.86    1   2.248  0.0246
#>
#> Confidence level used: 0.95
#> Intervals are back-transformed from the log scale
#> Tests are performed on the log scale
```

As a side note:

```r
chisq.test(tab, correct = FALSE)
#>
#>  Pearson's Chi-squared test
#>
#> data:  tab
#> X-squared = 4.4929, df = 1, p-value = 0.03404
#>
```
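For a 2x2 table, phi is determined by the chi-square statistic via phi = sqrt(X² / n). A quick check with the X² = 4.4929 above and the table's total n = 60:

```python
import math

chisq = 4.4929  # Pearson X-squared from chisq.test() above
n = 60          # total count of the 2x2 table (13 + 6 + 16 + 25)
phi = math.sqrt(chisq / n)
print(round(phi, 2))  # 0.27
```

This matches the effectsize::phi() point estimate.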
```r
effectsize::phi(tab, adjust = FALSE)
#>  Phi |       95% CI
#> -------------------
#> 0.27 | [0.05, 1.00]
#>
#> - One-sided CIs: upper bound fixed at [1.00].
```

Also note that Fisher's exact test is not significant here:

```r
fisher.test(tab)
#>
#>  Fisher's Exact Test for Count Data
#>
#> data:  tab
#> p-value = 0.05172
#> alternative hypothesis: true odds ratio is not equal to 1
#> 95 percent confidence interval:
#>   0.9412296 12.9916305
#> sample estimates:
#> odds ratio
#>   3.314406
```
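Note that `fisher.test()` reports the conditional maximum-likelihood odds ratio (3.31), not the sample odds ratio. For comparison, a sketch of the sample OR and its Wald CI under the same assumed counts, which illustrates the same theme: a Wald interval can exclude 1 even though the exact test's p-value is above .05:

```python
import math

a, b, c, d = 13, 6, 16, 25  # assumed 2x2 counts from the reprex

or_sample = (a * d) / (b * c)
# Wald SE of log(OR): sqrt of summed reciprocal cell counts
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
z = 1.959964  # ~ qnorm(0.975)
lo = math.exp(math.log(or_sample) - z * se_log_or)
hi = math.exp(math.log(or_sample) + z * se_log_or)
print(round(or_sample, 2), round(lo, 2), round(hi, 2))  # 3.39 1.07 10.72
```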
Fixed in #585
I'm working with 2x2 data whose table looks like this:

I want to get a risk ratio with CIs, but when using effectsize::riskratio() the CI overlaps with 1, whereas the chi-square test p-value was < 0.05. Shouldn't these correspond? PropCIs::riskscoreci(), which is showcased in Agresti's An Introduction to Categorical Data Analysis, 3rd Edition, gives a CI that doesn't overlap with 1 (as expected). Similarly, the phi CI overlaps with 0. Doesn't the calculation in PropCIs::riskscoreci() seem more consistent with the chi-square results?

Created on 2023-04-08 by the reprex package (v2.0.1)
Session info