Tutorial 9: Mixed models - part 1

March 21, 2023 (9th week of classes)
Data hierarchy & dependency, and the Linear Mixed Effects ANOVA (or mixed model ANOVA)


Remember: The tutorials were produced in such a way that you should be able to easily adapt the code to your own data!

Simple simulations to help understanding the nature of data hierarchy & dependency

Let’s start by simulating data that are not hierarchical, i.e., the kind of data one would analyse using a simple one-way ANOVA. This will give you a better understanding of what hierarchical data means.

Consider an experiment to assess the effects of temperature on fish growth, as seen in class. Let’s assume four temperature levels and three tanks (replicates) per temperature level, with 10 fish each. For the sake of realism (the data here are simulated), let’s assume that growth values represent the difference in weight of each fish before and after 1 week of exposure to the treatment temperature.

Let’s start by setting up the characteristics of the experiment for which we will simulate data:

n.treatments <- 4 # Number of temperature treatments
n.replicates.per.treatment <- 3 # Number of tanks per treatment
n.individuals.per.replicate <- 10 # Number of fish per tank

The total number of data points for the entire experiment N is then:

N <- n.treatments * n.replicates.per.treatment * n.individuals.per.replicate
N

Let’s generate data in which the null hypothesis is false, i.e., the mean fish growth differs among treatments. To do that, we will set the mean for each of the 4 temperature treatments as follows:

  lowest temperature (treatment 1): fish grew on average 200 mg
  sub-intermediate temperature (treatment 2): fish grew on average 220 mg
  intermediate temperature (treatment 3): fish grew on average 250 mg
  high temperature (treatment 4): fish grew on average 400 mg

We can set these values as follows:

treatment.means <- c(200,220,250,400)

Let’s set the residual standard deviation sigma at 3 for example. Because we assume homoscedasticity in residuals, the same residual standard deviation is used across the entire experiment (i.e., replicates (tanks) and temperature levels):

sigma <- 3 # residual standard deviation (same for all tanks and treatments)

Let’s simulate the residual variation around each individual fish. This residual variation is the error for each fish, added to the true temperature effect of its treatment (i.e., the mean effect set just above for each temperature level). If the error is small (small sigma), we have a greater statistical chance of detecting the true differences among treatments:

residual.values <- rnorm(N, 0, sigma)  
residual.values

Now we need to build a vector of labels that contains the temperature treatment for each individual fish (as we have in real data). The command below will simply repeat the treatment numbers (i.e., 1 to 4; 1 to n.treatments) according to the number of replicates (tanks) within treatments (3 tanks per treatment) and number of individuals per replicate (i.e., 10 individuals per tank):

treatments <- rep(1:n.treatments, rep(n.replicates.per.treatment * n.individuals.per.replicate, n.treatments)) 
treatments

Data can be simulated in different ways (including via more complex design matrices). For simplicity, we will simply add residuals to the treatment means. But first, we need to create a vector containing the treatment mean for each of the 120 individual fish:

means <- rep(treatment.means, rep(n.replicates.per.treatment * n.individuals.per.replicate, n.treatments)) 
means

Generate the response variable Y (i.e., growth) as a function of means and residuals. Note that in the way we are generating data (i.e., for an ANOVA design), we don’t need to add a constant. The constant is automatically the mean of all populations (i.e., 267.5; mean(c(200,220,250,400))). Y is a sample, but its mean should be very close to the true mean of all treatments combined (i.e., 267.5):

Y <- means + residual.values
Y
mean(Y)
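
As a quick sanity check (an extra line, not part of the original design), we can compare the sample grand mean above with the true grand mean implied by the treatment means we set:

mean(treatment.means) # true grand mean of the four treatment means: 267.5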

Let’s look at the simulated data (scroll down to see all values):

View(cbind(Y,treatments))

Let’s plot the data:

boxplot(Y~treatments, col="firebrick", xlab= "Treatments", ylab ="Growth", main= "", las =1, outline=FALSE)

Let’s calculate the mean value per treatment:

tapply(Y, treatments, mean)

These are the mean growth values for the four temperature treatments. To check whether it makes sense to pool all individual fish within a treatment regardless of the aquarium they came from (i.e., to treat each fish as an independent observation), let’s also look at the replicates (tanks) separately; this can be tested more formally later in this tutorial. First, we create a label identifying each of the 12 replicates (tanks):

separate.replicates <- rep(1:12, each=n.individuals.per.replicate) 
separate.replicates 

Now let’s plot each replicate within each treatment separately. Note that the mean growth per replicate (tank) within a treatment is very similar, i.e., the variation within treatments is purely random, whereas the variation among treatments is not, given that we set different treatment means (i.e., we expect to reject H0 that growth is equal among treatments). This is the assumption underlying the fixed-effect ANOVA.

boxplot(Y~separate.replicates, col="firebrick", xlab= "Replicates per treatment", ylab ="Growth", main= "", las =1, outline=FALSE)
abline(v=c(3.5,6.5,9.5))

Vertical lines in the boxplot separate each temperature treatment (and their 3 replicates, i.e., tanks).

Let’s calculate the mean value per treatment per replicate:

matrix(tapply(Y, separate.replicates, mean), 3, 4) # rows = tanks within a treatment, columns = treatments

Note that the means of the 3 replicates within each temperature treatment don’t differ much; so, it makes sense to treat the individual fish as independent observations and pool them within treatments.

For these data, we can simply pool the individual fish within treatments and analyse them as follows. Note that treatments must be coded as a factor, and that the lm function internally creates the design matrix used as the predictor:

anova(lm(Y ~ as.factor(treatments)))
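
If you are curious about the design matrix that lm builds internally (an optional peek, not needed for the analysis), you can inspect it directly:

# First rows of the design matrix lm creates for the treatment factor
head(model.matrix(~ as.factor(treatments)))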

As expected (given the nature of the simulated data), we rejected the null hypothesis that the treatments have equal effects on fish growth, i.e., we have evidence that temperature affects fish growth and that as temperature increases, fish growth increases as well.

We generated the data following the ANOVA assumptions (normality and homoscedasticity), so obviously there is no need to check the assumptions.

Simulating hierarchical data

To help you better understand the nature of hierarchical data (random effects), we will use the same experimental setting as in the fixed-effect ANOVA design above, i.e.,

n.treatments <- 4 # Number of treatments
n.replicates.per.treatment <- 3 # Number of tanks per treatment
n.individuals.per.replicate <- 10 # Number of fish per tank

The total number of data points N is (as before) then:

N <- n.treatments * n.replicates.per.treatment * n.individuals.per.replicate
N

In hierarchical data, the replicates within treatments also vary in their mean values (growth variation within the same temperature treatment), i.e., each aquarium has a different mean. This can only happen if individuals within a replicate (i.e., an aquarium) are more similar to each other than to individuals in other aquaria.

In the last boxplot, you will have noticed that the mean growth values of the replicates (the individuals within an aquarium for a given temperature treatment) within each temperature treatment were very similar; their variation was due to error (residual variation) alone. This will not be the case when data have a strong hierarchical structure.

Let’s now set the variation of the replicate (tank) means within a given treatment. Note that in simple mixed-effects models this value is assumed to be the same for all treatments (more complex mixed models allow the within-treatment variance to differ among treatments; but let’s start simple, and this is the most common assumption in biological studies anyway):

within.treatment.sd <- 5 # standard deviation of the tank means within a treatment

Generate random variation around the means within each temperature treatment:

treatment.means.rnd <- replicate(n.replicates.per.treatment, rnorm(n.treatments, mean=treatment.means, sd=within.treatment.sd))
treatment.means.rnd

Note that each row of the matrix treatment.means.rnd corresponds to one treatment and contains the three randomly drawn replicate (tank) means for that treatment.
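
If it helps to see this structure, you can optionally label the rows and columns of the matrix (the labels used here are purely illustrative):

# Optional, illustrative row/column labels for the matrix of random tank means
dimnames(treatment.means.rnd) <- list(paste0("treatment", 1:n.treatments),
                                      paste0("tank", 1:n.replicates.per.treatment))
treatment.means.rnd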

We now need to put this matrix into a format to which we can later add random residuals, generating variation both within and between treatments:

treatment.means.rnd.t <- c(t(treatment.means.rnd)) # flatten: the 3 tank means of treatment 1, then treatment 2, and so on
treatment.means.rnd.t

Unlike the fixed-effect case earlier, under the random effect each replicate (tank) has its own mean, drawn at random around its treatment mean (i.e., hierarchical). Now we will repeat each replicate mean for its 10 fish and then add residuals to simulate the data:

means <- rep(treatment.means.rnd.t, each=n.individuals.per.replicate) # repeat each tank mean for its 10 fish
means

Let’s again set the residual standard deviation sigma, this time at 1 for example. Note that we set sigma to be much smaller than within.treatment.sd (see below for the consequences):

sigma <- 1 # residual (within-tank) standard deviation

Generate residuals:

residual.values <- rnorm(N, 0, sigma) 
residual.values

Generate the response variable Y:

Y <- means + residual.values
Y

Looking at the simulated data

Let’s look at the simulated data (scroll down to see all values), but first we need to create a label for each replicate across treatments. There are 4 temperature treatments, 3 aquaria per treatment (numbered 1 to 12 across treatments) and 10 individual fish per aquarium. Make sure you understand this data structure in the View below.

separate.replicates <- rep(1:12, each=n.individuals.per.replicate) 
View(cbind(Y,treatments,separate.replicates))

Replicates (aquaria) 1 to 3 are within treatment 1 (low temperatures), replicates (aquaria) 4 to 6 are within treatment 2 (sub-intermediate temperature), and so on.

Let’s look at the distributions of the simulated data. You will notice that although the treatments still differ in their mean growth, there is now variation in the mean per aquarium within treatments, which we did not have in the first simulation.

boxplot(Y~separate.replicates, col="firebrick", xlab= "Replicates per treatment", ylab ="Growth", main= "", las =1, outline=FALSE)
abline(v=c(3.5,6.5,9.5))

Let’s calculate the mean value per treatment per replicate:

matrix(tapply(Y, separate.replicates, mean), 3, 4) # rows = tanks within a treatment, columns = treatments

Note that for these data the means of the 3 replicates within each temperature treatment do differ somewhat. Although there is variation across treatments (temperature levels), the variation within treatments (i.e., for any given temperature level) is not completely random: individuals within a replicate (tank) are more similar in growth to each other than to other individuals from the same temperature treatment. So the data are clearly hierarchical, and a mixed model needs to be used to assess differences in means among treatments while accounting for the variation within treatments (i.e., between aquaria within the same treatment).

Using random and one-factorial mixed-effects models

One of the most powerful and widely used packages for mixed models (i.e., models with random and fixed effects) is nlme. We have already used this package in a previous tutorial (GLM). Let’s analyse the simulated data.

If you need to install the nlme package on your computer, simply run:

install.packages("nlme")
library(nlme)

We first need to set each replicate (tank) as a factor, which will then be used as the random factor in the model:

tanks <- as.factor(separate.replicates)
tanks

The intraclass correlation (as seen in class) can be calculated by the gls function in package nlme. It’s called Rho in the output of the function:

gls(Y ~ 1, correlation=corCompSymm(form= ~ 1 | tanks))

As you can notice, Rho is very close to one, clearly indicating a very strong hierarchical structure in these simulated data. This is because the standard deviation of the tank means within treatments (within.treatment.sd = 5) was set much larger than the residual standard deviation within tanks (sigma = 1). Try later on to re-run the data simulation with a much smaller within.treatment.sd relative to sigma, which will generate data with a much smaller intraclass correlation (i.e., a weaker hierarchical structure in the data).
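
For reference (this calculation is an extra sanity check, not part of the original code), the simulation parameters imply an intraclass correlation among fish within the same tank, for a given treatment, of within.treatment.sd^2 / (within.treatment.sd^2 + sigma^2). The Rho estimated above by the intercept-only gls model will tend to be even larger, because that model ignores treatments, so differences among treatment means also end up in the between-tank variation:

# Intraclass correlation implied by the simulation parameters (within a treatment):
# between-tank variance / (between-tank variance + residual variance)
within.treatment.sd^2 / (within.treatment.sd^2 + sigma^2) # = 25/26, about 0.96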

Let’s start by analysing the effect of the replicates (tanks), within and across treatments, to see how strong the tank (random) effect is. In the code below, random = ~1|tanks reflects the fact that tanks are nested (hierarchical) within the grand mean (the grand mean here is the model constant or intercept, which nlme codes as 1).

lme.fit <- lme(Y ~ tanks,random = ~1|tanks)
lme.fit

Because it is a random effect, each replicate (tank) is assumed to have its own effect. In other words, the model estimates an effect for each tank (replicate).

The nlme package does not conduct statistical tests for random effects; there is a good amount of discussion in the literature as to whether p-values should be estimated for random effects at all (as discussed in class). We can, however, use the package car for testing. The effect is tested via the chi-square distribution rather than the F-distribution used in classic fixed-effect ANOVAs.

library(car)
Anova(lme.fit)

The random effect is clearly strong (and statistically significant), as suggested by the large intraclass correlation.
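
If you want an additional check on the random effect (this is an add-on sketch, not part of the class material), a common alternative is a likelihood-ratio comparison of a model without the tank random effect (fitted with gls) against one with it (fitted with lme); note that the resulting p-value is conservative because the variance is tested at its boundary (zero):

# Sketch: likelihood-ratio test for the tank random effect
gls.no.random <- gls(Y ~ 1, method = "REML")                       # no random effect
lme.with.random <- lme(Y ~ 1, random = ~1|tanks, method = "REML")  # with a random intercept per tank
anova(gls.no.random, lme.with.random) # a small p-value supports retaining the random effect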

It is clear that, for these data, we need to account for the variation within treatments (among tanks) while comparing treatments, using a one-factorial mixed-effects ANOVA.

To run the one-factorial mixed-effects ANOVA, we first need to set the label for the treatment (fixed effect). We already have the treatment label from the first simulation, since this part of the data structure has not changed, i.e.:

treatments <- rep(1:n.treatments, rep(n.replicates.per.treatment * n.individuals.per.replicate, n.treatments)) 
treatments

Let’s now run the mixed-effects model with treatments (temperature levels) as a fixed factor and tanks as a random factor:

data.df <- data.frame(Y=Y,treatments=as.factor(treatments),tanks=tanks)
mixed.anova <- lme(Y ~ treatments, random = ~1|tanks,data=data.df)
anova(mixed.anova)

As you can see, despite the data being quite hierarchical (strong variation in means across aquaria within the same treatment), temperature has a strong effect on fish growth.
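
It is also instructive (this step is an addition, not required for the test above) to extract the estimated variance components and compare them with the values used to simulate the data (within.treatment.sd = 5 and sigma = 1):

# Estimated standard deviations: among tanks (random intercept) and residual (within tanks)
VarCorr(mixed.anova)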

Model diagnostics & surprises

fixed effect ANOVA

One of the issues with hierarchical data is that they often produce heteroscedastic residuals. In this ANOVA design, because the mean values per aquarium vary within each factor level, it will often be the case that the original ANOVA has heteroscedastic residuals. Let’s start, then, with the residuals of the original (i.e., fixed-effect) ANOVA:

fixed.anova <- lm(Y ~ treatments,data=data.df)
plot(fixed.anova,which=3)

Perhaps it is easier to observe this using a boxplot:

boxplot(residuals(fixed.anova) ~ treatments,data=data.df)

Indeed, treatment 2 seems to have a greater residual variance than the other treatments, and treatment 3 a smaller one.

As we can see, the residual variation differs across treatments. Let’s run a formal statistical test to assess heteroscedasticity:

library(car)
leveneTest(residuals(fixed.anova) ~ treatments,data=data.df)

Levene’s test confirms our original suspicion: the residuals cannot be assumed to be homoscedastic. Interestingly, they should be homoscedastic within tanks; remember that we assumed a common error sigma for all tanks, but that the means within treatments varied among tanks. Since our main interest is the factor temperature and not tanks (tanks are just replicates within treatments), the heteroscedasticity that matters is that of the main effect. That said, let’s look at the variance of the residuals within tanks:

boxplot(residuals(fixed.anova) ~ tanks,data=data.df)
abline(v=c(3.5,6.5,9.5))

Observe how the residual variation across tanks seems relatively similar. Also observe how the mean of the residuals per tank varies within treatments. This is what generated the heteroscedasticity in the fixed-effect ANOVA when looking at residual variation among treatment levels.

leveneTest(residuals(fixed.anova) ~ as.factor(tanks))

When assessing the residual variance among tanks, the residuals can indeed be assumed homoscedastic. Again, we are not interested in the tank variation (tanks are replicates); this is just to deepen your understanding of what random effects are and why we use them.

Let’s assess the normality of the residuals of the fixed model:

qqnorm(residuals(fixed.anova), main = "Residuals")
qqline(residuals(fixed.anova))

The residuals do seem to depart somewhat from normality. Remember, though, that ANOVAs are quite robust to departures from normality but not to heteroscedasticity.

mixed effect ANOVA

Let’s run the diagnostics now for the mixed model ANOVA we conducted earlier.

Let’s start with the Q-Q plot to assess normality:

qqnorm(resid(mixed.anova), main = "Residuals")
qqline(resid(mixed.anova))

Observe how the residuals of the mixed model now appear quite normally distributed. This is because the random effect for tanks allows the mixed model to absorb the variation among tanks within treatments, removing the distortion of the intermediate residuals around the line that we observed earlier for the fixed-effect ANOVA.
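
To see what the random effect absorbed, you can (as an optional extra step) inspect the estimated random intercepts, i.e., the deviation of each tank from its treatment mean:

# Estimated tank-level deviations (random intercepts) from the mixed model
ranef(mixed.anova)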

Let’s now check the mixed model for heteroscedasticity. The nlme package doesn’t provide a residual plot based on the square root of the absolute residuals like the one we traditionally use for fixed-effect ANOVAs (as above). It offers instead a plot of standardized residuals against fitted values, often referred to as the Tukey-Anscombe residual plot:

plot(mixed.anova)

The residual values are independent of the fitted values indicating homoscedasticity among temperature treatments.

If we want to reproduce the same plot based on the square root of the absolute values of the residuals, we will need to use another package as follows:

install.packages("lattice")
library(lattice)
xyplot(sqrt(abs(resid(mixed.anova))) ~ fitted(mixed.anova), type = c("p", "smooth", "g"),
    col.line = "black")

Let’s do the boxplot:

boxplot(residuals(mixed.anova) ~ as.factor(treatments))

The variation of the residuals among treatments does seem much more similar now.

And finally a formal test for residual homoscedasticity:

leveneTest(residuals(mixed.anova) ~ as.factor(treatments))

We can indeed assume that residuals for the mixed model are homoscedastic.

This tutorial was built for you to understand:

  1. What hierarchical data are.
  2. That hierarchical data often generate heteroscedasticity of residuals in the main factor.
  3. That mixed models incorporate the variation among replicates of the same treatment (here, tanks), so that the model properties and assumptions hold when testing the main effect of interest (in this example, temperature).

For more on mixed-model diagnostics, check:

https://ethz.ch/content/dam/ethz/special-interest/math/statistics/sfs/Education/Advanced%20Studies%20in%20Applied%20Statistics/course-material-1921/Regression/MixedModels_Lab.pdf