
In R version 4, anova() fails to compare a linear mixed regression base model to other models with different variables

I was working with R version 3.6.3 and recently updated to version 4.0.3. Below is an example of the models I am working with.

library(lme4)

Model0 <- lmer(accuracy ~ Temperature + Relative_Humidity + scaledweek + (1 | pxids), data = ptrails.df)
Model1 <- lmer(accuracy ~ CO + Temperature + Relative_Humidity + scaledweek + (1 | pxids), data = ptrails.df)
Model2 <- lmer(accuracy ~ pm10 + Temperature + Relative_Humidity + scaledweek + (1 | pxids), data = ptrails.df)
Model3 <- lmer(accuracy ~ NO + Temperature + Relative_Humidity + scaledweek + (1 | pxids), data = ptrails.df)

anova(Model0, Model1, Model2, Model3)

The idea is to compare each model with the base model (Model0) to determine which variable has a significant effect.

Example of output:

[screenshot of the anova() comparison table; no p-values are shown for Model2 and Model3]

I do not get p-values for Model2 and Model3; this wasn't the case in the previous version. Comparing the models in pairs (Model0 with Model1, then Model0 with Model2, etc.) does give me p-values, but since I have very large data I would prefer to run the comparison in a single call.
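For reference, the separate pairwise comparisons mentioned above would look like this:

anova(Model0, Model1)
anova(Model0, Model2)
anova(Model0, Model3)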

Question from: https://stackoverflow.com/questions/65905142/in-r-version-4-anova-fails-to-compare-linear-mixed-regression-base-model-to-oth


1 Answer


This is a little bit of a guess because you haven't provided a reproducible example, but this probably has to do with versions of lme4 (not R) before and after version 1.1-24: the NEWS file for lme4 reports that

anova() now returns a p-value of NA if the df difference between two models is 0 (implying they are equivalent models)

"df difference" means the difference in the numbers of parameters estimated. There are two ways that different models could have equal numbers of df:

  • the models are actually equivalent, although their parameters may have been specified differently: e.g. if f and g are factors, then including f*g and f:g gives different parameterizations but an equivalent model fit. In this case the change in deviance ("Chisq") will also be zero. This appears to be what's happening in models 0 vs 2 in your case (although it puzzles me that npar is different: hard to understand without a reproducible example. Perhaps you added a perfectly correlated predictor, or your model fit was singular?). In this case it could be argued that the p-value is 1, but NA is also reasonable (see this discussion).
  • the models are non-nested, for example one model includes numeric covariate A and the second model includes numeric covariate B. This is not a case that occurred to the package developers ... In this case the likelihood ratio test is inappropriate, so it doesn't make sense to return a p-value at all (a small simulated sketch below illustrates this case).
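To make the second case concrete, here is a minimal simulated sketch (hypothetical data and variable names, not your ptrails.df): two models that each add one different covariate to the same base model have the same npar, so the sequential anova() row comparing them has a df difference of 0 and, with lme4 >= 1.1-24, an NA p-value, while pairwise comparisons against the base model still give p-values.

library(lme4)

set.seed(101)
d <- data.frame(
  id = factor(rep(1:10, each = 10)),  # grouping factor for the random intercept
  a  = rnorm(100),                    # candidate covariate A
  b  = rnorm(100)                     # candidate covariate B
)
d$y <- 1 + 0.5 * d$a + rep(rnorm(10, sd = 0.5), each = 10) + rnorm(100)

m0 <- lmer(y ~ 1 + (1 | id), data = d)  # base model
mA <- lmer(y ~ a + (1 | id), data = d)  # base + covariate A
mB <- lmer(y ~ b + (1 | id), data = d)  # base + covariate B

## Sequential comparison: the mA-vs-mB row has Df = 0 (same npar),
## so its p-value is NA in recent lme4 -- the situation in the question
anova(m0, mA, mB)

## Pairwise comparisons against the base model are nested and do give p-values
anova(m0, mA)
anova(m0, mB)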

If you want to compare non-nested models you either need something like Vuong's test, or just compare the AIC values. (I would argue that bbmle::AICtab() gives a more useful format for comparison ...)
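For example, applied to the models from the question (assuming they are fitted in the current session), an AIC-based comparison could look like this; AICtab() comes from the bbmle package mentioned above:

## AIC comparison does not rely on nesting (no p-values involved)
AIC(Model0, Model1, Model2, Model3)

## bbmle::AICtab() sorts the models and reports AIC differences (dAIC) from the best one
library(bbmle)
AICtab(Model0, Model1, Model2, Model3)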

