I am trying to model a binary (logistic) response as a function of 4 continuous variables from an experiment I ran. Initially I used multiple logistic regression and got fairly good results, but it was recently suggested that I should use GAMs instead. I feel a bit lost about how to properly do model selection for the GLM, and about how to interpret some of the warnings I am getting from the multiple logistic regression GLMs. I suspect my problems come from overfitting, but I don't know how to address them.
Basically, the question is: what is the best / most parsimonious way to model these data?
df:
df <- structure(list(response = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0),
V1 = c(14.2,13.67, 13.05, 14.18, 13.4, 14.12, 14.22, 14.15, 13.35, 13.67,
18.58, 18.27, 18.6, 17.94, 18.38, 18.98, 18.15, 19, 18.55, 18.53,
20.77, 21.65, 21.03, 21.57, 21.25, 21.63, 21.6, 21.09, 21.62,
21.6, 26.23, 26.52, 25.7, 26.57, 26.6, 26.25, 26.48, 26.26, 26.25,
26.4, 28.98, 29.45, 29.2, 29.65, 29.38, 28.6, 28.42, 28.95, 28.85,
28.8), V2 = c(27.2, 37.98, 24.63, 32.97, 30.27, 18.66, 13.77,
33.99, 15.8, 21.32, 14.21, 15.81, 35.83, 21.64, 26.93, 38.62,
34.03, 18.76, 24.12, 29.67, 29.83, 33.22, 27.11, 24.92, 21.72,
39.02, 12.93, 18.44, 36.34, 15.81, 13.29, 21.04, 19.05, 33.62,
30.52, 16.07, 28.43, 24.97, 39.9, 37.05, 19.31, 31.3, 34.08,
13.63, 25.1, 28.93, 22.36, 34.69, 39.5, 16.41),
V3 = c(8.06, 7.87, 7.81, 7.72, 8.04, 7.66, 7.72, 7.87, 7.72, 7.98, 7.59, 7.9,
8.08, 7.64, 8.02, 7.73, 7.77, 7.74, 7.66, 7.71, 8.05, 7.68, 7.63,
7.7, 7.64, 7.8, 7.7, 7.98, 7.86, 7.68, 7.65, 7.74, 7.99, 7.75,
7.91, 7.64, 7.69, 7.78, 7.69, 7.66, 7.72, 7.76, 7.71, 7.88, 7.63,
7.7, 7.99, 7.82, 7.75, 7.93),
V4 = c(362.12, 645.38, 667.54,
957.44, 391.84, 818.34, 732.91, 649.05, 722.02, 406.71, 918.9,
471.32, 363.77, 926.82, 385.4, 1038.91, 850.67, 715.11, 964.79,
890.11, 370.51, 1078.68, 1083.7, 893.76, 1026.1, 887.29, 737.68,
406.76, 690.39, 872.8, 847.26, 738.53, 397.33, 895.3, 563.93,
991.17, 957.28, 734.55, 1140.5, 1199.12, 817.17, 800.5, 992.82,
533.45, 1123.29, 943.25, 411.59, 745.22, 929.75, 460.82)),
row.names = c(NA,-50L), class = "data.frame")
I should note that, from running the experiment and knowing the system, I know that V1 and V2 have the largest effects on the response. You can also see this by plotting the response against those variables, since all of the positive responses cluster together in that 2-D space. Also, looking at some ad-hoc splines, it seems V1 relates to the response linearly, V2 quadratically, V3 perhaps not at all, and V4 perhaps weakly quadratically.
Another important note: V3 and V4 are essentially two different measures of the same thing, so they are highly correlated and will never be used together in any model.
So, first I tried modeling all of this with multiple logistic regression. I was advised to test a whole bunch of different models during model selection, so I wrote them in a list and ran them in a loop:
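Both of those notes can be eyeballed quickly. A minimal exploratory sketch (not from the original post), assuming `df` from above; the colour mapping (0 = black, 1 = red) is arbitrary:

```r
# Positive responses should cluster in V1-V2 space.
plot(df$V1, df$V2, col = df$response + 1, pch = 19,
     xlab = "V1", ylab = "V2",
     main = "Response (red = 1) in V1-V2 space")

# V3 and V4 measure the same thing, so |r| should be large
# (the sign could go either way depending on how each is scaled).
cor(df$V3, df$V4)
```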
formulas <- list(# single predictors
response ~ V1,
response ~ V2,
response ~ V3,
response ~ V4,
# two predictors
response ~ V1 + V2,
response ~ V1 + V3,
response ~ V1 + V4,
response ~ V2 + V3,
response ~ V2 + V4,
# three predictors
response ~ V1 + V2 + V3,
response ~ V1 + V2 + V4,
# start quadratic models
response ~ V2 + I(V2^2) + V1 + I(V1^2),
response ~ V2 + I(V2^2) + V1 + I(V1^2) + V3,
response ~ V2 + I(V2^2) + V1 + I(V1^2) + V4,
response ~ V1 + V2 + I(V1^2),
response ~ V1 + V2 + I(V1^2) + V3,
response ~ V1 + V2 + I(V1^2) + V4,
response ~ V1 + I(V1^2),
response ~ V1 + V2 + I(V2^2),
response ~ V1 + V2 + I(V2^2) + V3,
response ~ V1 + V2 + I(V2^2) + V4,
response ~ V2 + I(V2^2),
# add interactions
response ~ V1 * V2,              # V1*V2 expands to V1 + V2 + V1:V2
response ~ V1 * V2 + V3,
response ~ V1 * V2 + V4,
# quadratic with interaction
response ~ V1 * V2 + V3 + I(V1^2),
response ~ V1 * V2 + V3 + I(V2^2),
response ~ V1 * V2 + V4 + I(V1^2),
response ~ V1 * V2 + V4 + I(V2^2)
)
# run them all in a loop, then order by AIC
selection <- purrr::map_df(formulas, ~{
  mod <- glm(.x, data = df, family = binomial)
  data.frame(formula = deparse1(.x),  # deparse1() keeps each formula on one row
             AIC   = round(AIC(mod), 2),
             BIC   = round(BIC(mod), 2),
             R2adj = round(DescTools::PseudoR2(mod, which = "McFaddenAdj"), 3))
})
warnings()
# this returns a bunch of warnings about coercing the formulas into vectors, ignore those.
# however, this also lists the following for a handful of the models:
# "glm.fit: fitted probabilities numerically 0 or 1 occurred"
# which means perfect separation, but I'm not sure if this is a totally bad thing
# or not, as perfect separation actually pretty much does exist in the real data
# then we arrange by AIC (lowest = best) and get our winning models:
library(dplyr)
selection %>% arrange(AIC)
So, using that technique, we find that the two best models are `response ~ V1 + V2 + I(V2^2) + V4` and `response ~ V1 + V2 + I(V2^2)`. However, when we run them one at a time, we get the "fitted probabilities numerically 0 or 1" warning for both, and we see that in the best model the only difference (the added V4) is not itself statistically significant. So... which one do we use?
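One hedged option for the separation warning (not from the original post): Firth's bias-reduced logistic regression keeps the coefficient estimates finite even under complete separation, which ordinary `glm` cannot do. A minimal sketch, assuming the `logistf` package is installed and `df` is the data frame above:

```r
library(logistf)  # Firth-penalized logistic regression

# Same candidate model as the simpler "best" fit, but with Firth's penalty;
# estimates and profile-likelihood CIs stay finite even when the classes
# are perfectly separable in the data.
firth_fit <- logistf(response ~ V1 + V2 + I(V2^2), data = df)
summary(firth_fit)
```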
bestmod1 <- glm(response ~ V1 + V2 + I(V2^2) + V4,
family="binomial",
data=df)
summary(bestmod1)$coefficients
bestmod2 <- glm(response ~ V1 + V2 + I(V2^2),
family="binomial",
data=df)
summary(bestmod2)$coefficients
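Since the two candidate models are nested (they differ only by V4), a likelihood-ratio test is one standard way to decide whether the extra term earns its keep. A sketch using the fits above:

```r
# Compare the nested fits: a non-significant chi-square here would favour
# the simpler model (bestmod2) on parsimony grounds.
anova(bestmod2, bestmod1, test = "Chisq")
```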
Approach 2: GAMs
The technique here is similar: list all the formulas and run them in a loop.
library(mgcv)
gam_formulas <- list( # ind. main effects,
response ~ s(V1),
response ~ s(V2),
response ~ s(V3),
response ~ s(V4),
# two variables
response ~ s(V1) + s(V2),
response ~ s(V1) + s(V3),
response ~ s(V1) + s(V4),
response ~ s(V2) + s(V3),
response ~ s(V2) + s(V4),
# three variables
response ~ s(V1) + s(V2) + s(V3),
response ~ s(V1) + s(V2) + s(V4),
# add interactions
response ~ te(V1, V2),
response ~ te(V1, V2) + s(V3),
response ~ te(V1, V2) + s(V4),
response ~ te(V1, V3),
response ~ te(V1, V3) + s(V2),
response ~ te(V1, V4),
response ~ te(V1, V4) + s(V2),
response ~ te(V2, V3),
response ~ te(V2, V3) + s(V1),
response ~ te(V2, V4),
response ~ te(V2, V4) + s(V1),
response ~ te(V2, by=V1),
response ~ te(V1, by=V2),
response ~ te(V2, by=V3),
# two interactions?
response ~ te(V1, V3) + te(V1, V2),
response ~ te(V1, V4) + te(V1, V2),
response ~ te(V2, V3) + te(V1, V2),
response ~ te(V2, V4) + te(V1, V2)
)
gam_selection <- purrr::map_df(gam_formulas, ~{
  fit <- gam(.x,                 # avoid naming the object "gam" (shadows the function)
             data = df,          # always use same df
             family = binomial,
             method = "REML")    # always use REML smoothing method
  data.frame(formula = deparse1(.x),
             round(broom::glance(fit), 2),
             R2 = round(summary(fit)$r.sq, 3))
})
# similarly, this gives a bunch of warnings about coercing the formulas into characters,
# but also this, for a handful of the models, which I am guessing is an overfitting thing:
# In newton(lsp = lsp, X = G$X, y = G$y, Eb = G$Eb, UrS = G$UrS, ... :
# Fitting terminated with step failure - check results carefully
gam_selection %>% arrange(AIC)  # lowest AIC first
But this returns a bunch of weird things, because many of the models (and not even ones with similar formulas or AIC values) report R2 = 1.00, and biologically those formulas don't make much sense. Why is this happening, and what should I do about it? (I know it has something to do with using "REML", because some of these errors go away without that line.) Based on the AIC values, I think the most plausible candidate is actually the third-best: `response ~ te(V2, by = V1)`, which smooths over V2 with V1 entering linearly.
Also, when looking more closely at the top two GAMs by AIC, none of the variables in them are significant on their own (p-values = 1; strange), which makes me think I shouldn't use those.
bestgam <- gam(response ~ s(V1) + s(V2) + s(V4),
data= df, # always use same df
family="binomial",
method="REML")
summary(bestgam)
bestgam2 <- gam(response ~ s(V1) + s(V2) + s(V3),
data= df, # always use same df
family="binomial",
method="REML")
summary(bestgam2)
bestgam3 <- gam(response ~ te(V2, by = V1),
data= df, # always use same df
family="binomial",
method="REML")
summary(bestgam3) # this is the one I think I should be using
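Two quick diagnostics (not from the original post) that may help with the GLM-vs-GAM question: mgcv reports an AIC that can be compared, cautiously, with the `glm` fits, and `gam.check()` flags basis-dimension and convergence problems, which seems relevant given the "step failure" warnings. A sketch using the fits defined above:

```r
# Put the GLM and GAM candidates on one (approximate) AIC footing.
AIC(bestmod2, bestgam, bestgam3)

# Diagnostics for the chosen GAM: convergence, basis dimension (k-index),
# and residual checks.
gam.check(bestgam3)
```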
Basically, I don't understand why I would use a GAM rather than a GLM (or vice versa), nor how to select variables and avoid overfitting along the way. Any advice is greatly appreciated.
Thanks!
Answered on 2021-05-14 11:22:47
You can use a GAM if you think there may be non-linear relationships between the dependent and independent variables.
For model selection, you can add shrinkage to the smoothers in the model, so that they can be penalized out of the model when they aren't needed. There are two ways to do this in mgcv:
1. Change the basis type in `s()`: add `bs = 'ts'` if using thin plate regression splines, or `bs = 'cs'` if using cubic regression splines.
2. Specify `select = TRUE` in the call to `gam()`.
There are more details on how this works, and on the difference between the two approaches, in this answer: https://stats.stackexchange.com/questions/405129/model-selection-for-gam-in-r
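A minimal sketch of both options described above, using the asker's `df`; the particular choice of smoothed terms is illustrative only:

```r
library(mgcv)

# Option 1: shrinkage bases -- 'ts' (thin plate) or 'cs' (cubic) variants
# let REML shrink a whole smooth, including its linear part, toward zero.
m_ts <- gam(response ~ s(V1, bs = "ts") + s(V2, bs = "ts") + s(V4, bs = "ts"),
            data = df, family = binomial, method = "REML")

# Option 2: double-penalty selection -- an extra penalty on each smooth's
# null space, switched on with select = TRUE.
m_sel <- gam(response ~ s(V1) + s(V2) + s(V4),
             data = df, family = binomial, method = "REML", select = TRUE)

summary(m_sel)  # terms with EDF near 0 have effectively been selected out
```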
https://stackoverflow.com/questions/61014113