Appendix C — Case Studies

C.1 Owls (Poisson GLM)

For this case study, we will use the fairly well-known Owls dataset provided in glmmTMB (see ?Owls for more information about the data). A frequentist base model would be:

library(glmmTMB)
library(effects)
Loading required package: carData
lattice theme set by effectsTheme()
See ?effectsTheme for details.
m1 <- glm(SiblingNegotiation ~ SexParent, data=Owls , family = poisson)
summary(m1)

Call:
glm(formula = SiblingNegotiation ~ SexParent, family = poisson, 
    data = Owls)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.7971  -3.4676  -0.4639   1.6272   6.7678  

Coefficients:
              Estimate Std. Error z value Pr(>|z|)    
(Intercept)    1.79380    0.02605  68.847  < 2e-16 ***
SexParentMale  0.18154    0.03272   5.548 2.89e-08 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 4290.9  on 598  degrees of freedom
Residual deviance: 4259.7  on 597  degrees of freedom
AIC: 5944.5

Number of Fisher Scoring iterations: 5
plot(allEffects(m1))

Exercise 1: fit this GLM using a Bayesian approach, e.g. JAGS, STAN or brms

Tip

For STAN and JAGS, you will have to recode categorical variables as numeric dummy (0/1) variables, i.e.

sex = as.numeric(Owls$SexParent) - 1 

Then you can code

sexEffect * sex[i]
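Alternatively, model.matrix() builds the full dummy-coded design matrix for you, which is convenient once interactions are involved. A minimal sketch (using the Owls variables from above):

X = model.matrix(~ SexParent * FoodTreatment, data = Owls)
head(X) # first column is the intercept, the other columns are 0/1 dummies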

Exercise 2: include a log offset in the model to account for BroodSize

m2 <- glm(SiblingNegotiation ~ FoodTreatment*SexParent + offset(log(BroodSize)), data=Owls , family = poisson)
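Note that the offset enters the linear predictor with a coefficient fixed at 1, so the model describes the negotiation rate per chick rather than per brood.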

Exercise 3: check the residuals and/or add obvious additional components to the model, inspired by the frequentist example here.

Solution to Exercise 1:

library(glmmTMB)
library(rjags)

Data = list(SiblingNegotiation = Owls$SiblingNegotiation, 
            SexParent = as.numeric(Owls$SexParent)-1, # dummy coding!
            FoodTreatment = as.numeric(Owls$FoodTreatment)-1,
            LogBroodSize = log(Owls$BroodSize),
            nobs = nrow(Owls))


modelCode = "model{

  for(i in 1:nobs){
    SiblingNegotiation[i] ~ dpois(lambda[i])  # poisson error distribution
    lambda[i] <- exp(eta[i]) # inverse link function
    eta[i] <- intercept + EffectSexParent*SexParent[i] + EffektFoodTreatment*FoodTreatment[i] + EffektInterSexFood*SexParent[i]*FoodTreatment[i] + LogBroodSize[i]       # linear predictor
  }
  
  intercept ~ dnorm(0,0.0001)
  EffectSexParent ~ dnorm(0,0.0001)
  EffektFoodTreatment ~ dnorm(0,0.0001)
  EffektInterSexFood ~ dnorm(0,0.0001)

}"

jagsModel <- jags.model(file= textConnection(modelCode), data=Data, n.chains = 3)
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 599
   Unobserved stochastic nodes: 4
   Total graph size: 2465

Initializing model
para.names <- c("intercept","EffectSexParent", "EffektFoodTreatment", "EffektInterSexFood")
Samples <- coda.samples(jagsModel, variable.names = para.names, n.iter = 5000)

plot(Samples)

summary(Samples)

Iterations = 1001:6000
Thinning interval = 1 
Number of chains = 3 
Sample size per chain = 5000 

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

                        Mean      SD  Naive SE Time-series SE
EffectSexParent      0.06477 0.04044 0.0003302       0.001258
EffektFoodTreatment -0.54843 0.05264 0.0004298       0.001621
EffektInterSexFood   0.07474 0.06677 0.0005452       0.002044
intercept            0.59271 0.03290 0.0002686       0.001022

2. Quantiles for each variable:

                        2.5%      25%      50%      75%   97.5%
EffectSexParent     -0.01426  0.03821  0.06487  0.09166  0.1445
EffektFoodTreatment -0.65267 -0.58382 -0.54851 -0.51282 -0.4453
EffektInterSexFood  -0.05817  0.03028  0.07491  0.11859  0.2081
intercept            0.52785  0.57096  0.59306  0.61455  0.6563
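Before interpreting these estimates, it is good practice to confirm convergence. The standard coda diagnostics apply to the Samples object from above:

gelman.diag(Samples)
effectiveSize(Samples)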

Including the offset

library(rjags)

Data = list(SiblingNegotiation = Owls$SiblingNegotiation, 
            SexParent = as.numeric(Owls$SexParent)-1, # dummy coding!
            FoodTreatment = as.numeric(Owls$FoodTreatment)-1,
            LogBroodSize = log(Owls$BroodSize),
            nobs = nrow(Owls))


modelCode = "model{

for(i in 1:nobs){
SiblingNegotiation[i] ~ dpois(lambda[i])  # poisson error distribution
lambda[i] <- exp(eta[i]) # inverse link function
eta[i] <- intercept + EffectSexParent*SexParent[i] + EffektFoodTreatment*FoodTreatment[i] + EffektInterSexFood*SexParent[i]*FoodTreatment[i] + LogBroodSize[i]       # linear predictor
}

intercept ~ dnorm(0,0.0001)
EffectSexParent ~ dnorm(0,0.0001)
EffektFoodTreatment ~ dnorm(0,0.0001)
EffektInterSexFood ~ dnorm(0,0.0001)

}"

jagsModel <- jags.model(file= textConnection(modelCode), data=Data, n.chains = 3)
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 599
   Unobserved stochastic nodes: 4
   Total graph size: 2465

Initializing model
para.names <- c("intercept","EffectSexParent", "EffektFoodTreatment", "EffektInterSexFood")
Samples <- coda.samples(jagsModel, variable.names = para.names, n.iter = 5000)

plot(Samples)

summary(Samples)

Iterations = 1001:6000
Thinning interval = 1 
Number of chains = 3 
Sample size per chain = 5000 

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

                        Mean      SD  Naive SE Time-series SE
EffectSexParent      0.06586 0.04134 0.0003376       0.001288
EffektFoodTreatment -0.54818 0.05393 0.0004403       0.001584
EffektInterSexFood   0.07293 0.06833 0.0005579       0.002015
intercept            0.59241 0.03348 0.0002734       0.001055

2. Quantiles for each variable:

                        2.5%      25%      50%      75%   97.5%
EffectSexParent     -0.01427  0.03737  0.06524  0.09458  0.1469
EffektFoodTreatment -0.65330 -0.58465 -0.54833 -0.51157 -0.4424
EffektInterSexFood  -0.06025  0.02658  0.07308  0.11989  0.2055
intercept            0.52634  0.56933  0.59317  0.61581  0.6562

Checking residuals

library(rjags)

Data = list(SiblingNegotiation = Owls$SiblingNegotiation, 
            SexParent = as.numeric(Owls$SexParent)-1, # dummy coding!
            FoodTreatment = as.numeric(Owls$FoodTreatment)-1,
            LogBroodSize = log(Owls$BroodSize),
            nobs = nrow(Owls))


modelCode = "model{

  for(i in 1:nobs){
    SiblingNegotiation[i] ~ dpois(lambda[i])  # poisson error distribution
    lambda[i] <- exp(eta[i]) # inverse link function
    eta[i] <- intercept + EffectSexParent*SexParent[i] + EffektFoodTreatment*FoodTreatment[i] + EffektInterSexFood*SexParent[i]*FoodTreatment[i] + LogBroodSize[i]       # linear predictor
  }
  
  intercept ~ dnorm(0,0.0001)
  EffectSexParent ~ dnorm(0,0.0001)
  EffektFoodTreatment ~ dnorm(0,0.0001)
  EffektInterSexFood ~ dnorm(0,0.0001)

  # Posterior predictive simulations 
  for (i in 1:nobs) {
    SiblingNegotiationPred[i]~dpois(lambda[i])
  }

}"

jagsModel <- jags.model(file= textConnection(modelCode), data=Data, n.chains = 3)
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 599
   Unobserved stochastic nodes: 603
   Total graph size: 3064

Initializing model
para.names <- c("intercept","EffectSexParent", "EffektFoodTreatment", "EffektInterSexFood","lambda", "SiblingNegotiationPred")
Samples <- coda.samples(jagsModel, variable.names = para.names, n.iter = 5000)

library(BayesianTools)
library(DHARMa)
This is DHARMa 0.4.6. For overview type '?DHARMa'. For recent changes, type news(package = 'DHARMa')
x = getSample(Samples)
# note - yesterday, we calculated the predictions from the parameters
# here we monitor them directly - this is the usual way to obtain the
# posterior predictive distribution
posteriorPredDistr = x[,5:(4+599)]
posteriorPredSim = x[,(5+599):(4+2*599)]
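The positional indexing above depends on the column order of the sample matrix. A more robust alternative (a sketch that selects the monitored nodes by name) is:

posteriorPredDistr = x[, grep("^lambda\\[", colnames(x))]
posteriorPredSim = x[, grep("^SiblingNegotiationPred\\[", colnames(x))]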


sim = createDHARMa(simulatedResponse = t(posteriorPredSim), observedResponse = Owls$SiblingNegotiation, fittedPredictedResponse = apply(posteriorPredDistr, 2, median), integerResponse = T)
plot(sim)
DHARMa:testOutliers with type = binomial may have inflated Type I error rates for integer-valued distributions. To get a more exact result, it is recommended to re-run testOutliers with type = 'bootstrap'. See ?testOutliers for details
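The residual plots will typically indicate problems with this Poisson fit. A formal test for overdispersion, a likely issue for these data, is available in DHARMa:

testDispersion(sim)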

Here is a base model with a random effect:

library(brms)
m2 = brms::brm(SiblingNegotiation ~ FoodTreatment * SexParent
  + (1|Nest) + offset(log(BroodSize)), 
  data = Owls , 
  family = negbinomial)

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000232 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 2.32 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 2.019 seconds (Warm-up)
Chain 1:                1.583 seconds (Sampling)
Chain 1:                3.602 seconds (Total)
Chain 1: 

(similar sampling output for chains 2-4 omitted)
summary(m2)
 Family: negbinomial 
  Links: mu = log; shape = identity 
Formula: SiblingNegotiation ~ FoodTreatment * SexParent + (1 | Nest) + offset(log(BroodSize)) 
   Data: Owls (Number of observations: 599) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Group-Level Effects: 
~Nest (Number of levels: 27) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.39      0.10     0.22     0.61 1.00     1034     1605

Population-Level Effects: 
                                    Estimate Est.Error l-95% CI u-95% CI Rhat
Intercept                               0.72      0.14     0.44     1.02 1.00
FoodTreatmentSatiated                  -0.78      0.17    -1.11    -0.43 1.00
SexParentMale                          -0.03      0.15    -0.33     0.25 1.00
FoodTreatmentSatiated:SexParentMale     0.16      0.21    -0.25     0.57 1.00
                                    Bulk_ESS Tail_ESS
Intercept                               2621     2779
FoodTreatmentSatiated                   3128     3210
SexParentMale                           3373     2454
FoodTreatmentSatiated:SexParentMale     3079     3016

Family Specific Parameters: 
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
shape     0.84      0.07     0.71     0.97 1.00     5786     2933

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
plot(m2, ask = FALSE)
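brms also offers quick graphical posterior predictive checks, for example (ndraws is the argument name in recent brms versions):

pp_check(m2, ndraws = 100)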

C.2 Beetles

# This first part creates a dataset with beetle counts across an altitudinal gradient (several plots, each observed over several years), with a random intercept on year and zero-inflation. 

altitude = rep(seq(0,1,len = 50), each = 20)
dataID = 1:1000
spatialCoordinate = rep(seq(0,30, len = 50), each = 20)

# random effects + zeroinflation
plot = rep(1:50, each = 20)
year = rep(1:20, times = 50)

yearRandom = rnorm(20, 0, 1)
plotRandom = rnorm(50, 0, 1)
overdispersion = rnorm(1000, sd = 0.5)
zeroinflation = rbinom(1000,1,0.6)

beetles <- rpois(1000, exp( 0  + 12*altitude - 12*altitude^2 
                            #  + overdispersion   + plotRandom[plot]
                            + yearRandom[year]) * zeroinflation )

data = data.frame(dataID, beetles, altitude, plot, year, spatialCoordinate)

plot(year, altitude, cex = beetles/50, pch =2, main = "Beetle counts across altitudinal gradient\n triangle is proportional to counts")

library(R2jags)

Attaching package: 'R2jags'
The following object is masked from 'package:coda':

    traceplot
modelData = append(as.list(data), list(nobs = 1000, nplots = 50, nyears = 20))
head(data)
  dataID beetles altitude plot year spatialCoordinate
1      1       2        0    1    1                 0
2      2       0        0    1    2                 0
3      3       0        0    1    3                 0
4      4       2        0    1    4                 0
5      5       0        0    1    5                 0
6      6       0        0    1    6                 0
# 1) Fit GLM only 

modelstring="
model {
  
  # Likelihood
  for (i in 1:nobs) {
    lambda[i] <- exp(intercept + alt * altitude[i] + alt2 * altitude[i] * altitude[i]) 
    beetles[i]~dpois(lambda[i]) 
  }
  
  # Fixed effect priors 
  intercept ~ dnorm(0,0.0001)
  alt ~ dnorm(0,0.0001)
  alt2 ~ dnorm(0,0.0001)

  # Posterior predictive simulations 
  
  for (i in 1:nobs) {
    beetlesPred[i]~dpois(lambda[i])
  }
  Prediction <- sum(beetlesPred)
}
"

model=jags(model.file = textConnection(modelstring), data=modelData, n.iter=10000,  parameters.to.save = c("intercept", "alt", "alt2", "beetlesPred", "lambda"), DIC = F)
module glm loaded
module dic loaded
Warning in jags.model(model.file, data = data, inits = init.values, n.chains =
n.chains, : Unused variable "dataID" in data
Warning in jags.model(model.file, data = data, inits = init.values, n.chains =
n.chains, : Unused variable "plot" in data
Warning in jags.model(model.file, data = data, inits = init.values, n.chains =
n.chains, : Unused variable "year" in data
Warning in jags.model(model.file, data = data, inits = init.values, n.chains =
n.chains, : Unused variable "spatialCoordinate" in data
Warning in jags.model(model.file, data = data, inits = init.values, n.chains =
n.chains, : Unused variable "nplots" in data
Warning in jags.model(model.file, data = data, inits = init.values, n.chains =
n.chains, : Unused variable "nyears" in data
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 1000
   Unobserved stochastic nodes: 1003
   Total graph size: 3208

Initializing model
library(DHARMa)
simulations = model$BUGSoutput$sims.list$beetlesPred
pred = apply(model$BUGSoutput$sims.list$lambda, 2, median)
dim(simulations)
[1] 3000 1000
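So we have 3000 posterior draws (rows) for the 1000 observations (columns). createDHARMa expects observations in rows and simulations in columns, hence the t() in the call below.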
sim = createDHARMa(simulatedResponse = t(simulations), observedResponse = data$beetles, fittedPredictedResponse = pred, integerResponse = T)
plot(sim)
DHARMa:testOutliers with type = binomial may have inflated Type I error rates for integer-valued distributions. To get a more exact result, it is recommended to re-run testOutliers with type = 'bootstrap'. See ?testOutliers for details
qu = 0.25, log(sigma) = -4.086977 : outer Newton did not converge fully.
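Since the data were simulated with zero-inflation and a random intercept on year, the residual checks should flag misfit for this GLM-only model. DHARMa can also test for zero-inflation explicitly:

testZeroInflation(sim)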

C.3 CNDD estimation, Comita et al., 2010

This is the model from Comita, L. S., Muller-Landau, H. C., Aguilar, S., & Hubbell, S. P. (2010). Asymmetric density dependence shapes species abundances in a tropical tree community. Science, 329(5989), 330-332.

The model was originally written in WinBUGS. The version here was slightly modified to run with JAGS. Rerunning these models was part of the tests we did to settle on the methodology in Hülsmann, L., Chisholm, R. A., Comita, L., Visser, M. D., de Souza Leite, M., Aguilar, S., … & Hartig, F. (2024). Latitudinal patterns in stabilizing density dependence of forest communities. Nature, 1-8.

Our tests indicated that this model is excellent at recovering CNDD estimates from simulations. The main reason we didn't use a similar model in Hülsmann et al., 2024 was computational limitations and the difficulty of including splines for the species-specific density responses in such a hierarchical setting.

Task: go through the paper and the code and try to understand the structure of the model!

model{
  for (i in 1:N) {
    SD[i] ~ dbern(p[i])
    SD_sim[i] ~ dbern(p[i])
    logit(p[i]) <- B[SPP[i], ] %*% PREDS[i, ] + u[PLOT[i]]
  }
  
  # Standard Random intercept on plot
  for (m in 1:Nplots) {
    u[m] ~ dnorm(0, a.tau)
  }
  a.sigma ~ dunif(0, 100)
  a.tau <- 1 / (a.sigma * a.sigma)
  
  #redundant parameterization speeds convergence in WinBUGS, see Gelman & Hill (2007)
  for (k in 1:K) {
    for (j in 1:Nspp) {
      B[j, k] <- xi[k] * B.raw[j, k]
    }
    xi[k] ~ dunif(0, 100)
  }
  
  #multivariate normal distribution for B values of each species
  for (j in 1:Nspp) {
    B.raw[j, 1:K] ~ dmnorm(B.raw.hat[j, ], Tau.B.raw[, ])
    
    #G.raw is matrix of regression coefficients for species-level model
    #ABUND is  species-level predictors (abundance and shade tolerance)
    for (k in 1:K) {
      B.raw.hat[j, k] <-
        G.raw[k, ] %*% ABUND[j, ] # ABUND needs to be matrix w/ 1st column all 1's
    }
  }
  
  #priors for G and redundant parameterization
  for (k in 1:K) {
    for (l in 1:3) {
      G[k, l] <- xi[k] * G.raw[k, l]
      G.raw[k, l] ~ dnorm(0, 0.1)
    }
  }
  
  #covariance matrix modeled using scaled inverse wishart model
  Tau.B.raw[1:K, 1:K] ~ dwish(W[, ], df)
  df <- K + 1
  Sigma.B.raw[1:K, 1:K] <- inverse(Tau.B.raw[, ])
  
  # correlations
  for (k in 1:K) {
    for (k.prime in 1:K) {
      rho.B[k, k.prime] <-
        Sigma.B.raw[k, k.prime]  /   sqrt(Sigma.B.raw[k, k] * Sigma.B.raw[k.prime, k.prime])
    }
    #
    sigma.B[k] <- abs(xi[k]) * sqrt(Sigma.B.raw[k, k])
  }
  
  ################ Predictions ############
  # Addition to original model (Nov 2019)
  # to estimate the effect of mortality (response) when changing 1 unit on x-axis
  # data (old, read with the name 'txt') is centered but not scaled, therefore 'zero' here corresponds to the value -6.81
  
  for (i in 1:N) {
    baseMort[i] = ilogit(B[SPP[i], 1] * PREDS[i, 1] + B[SPP[i], 2] * (-6.81) + B[SPP[i], 3:5] %*% PREDS[i, 3:5])
    conMort[i] = ilogit(B[SPP[i], 1] * PREDS[i, 1] + B[SPP[i], 2] * (-5.81) + B[SPP[i], 3:5] %*% PREDS[i, 3:5])
    # relConEffekt[i] <- (conMort[i] - baseMort[i]) / baseMort[i]
    # hetEffekt[i] <- ilogit(B[SPP[i],1] * PREDS[i,1] + B[SPP[i],2] * PREDS[i,3]^CC[SPP[i]] + B[SPP[i],3] * 1 + B[SPP[i],4:5] %*% PREDS[i,4:5]) - ilogit(B[SPP[i],1] * PREDS[i,1] + B[SPP[i],2] * PREDS[i,3]^CC[SPP[i]] + B[SPP[i],3] * 0 + B[SPP[i],4:5] %*% PREDS[i,4:5])
    
  }
  
}

C.4 Support for mixed model

## ---- echo=F, warning=F, message=F---------------------------------------
set.seed(123)
rm(list=ls(all=TRUE))
library(rjags)
library(runjags)
library(lme4)
Loading required package: Matrix

Attaching package: 'lme4'
The following object is masked from 'package:brms':

    ngrps
library(effects)
library(R2jags)

## ---- fig.width=5, fig.height=5------------------------------------------
a <- 5
b <- 10
sigma <- 10
rsigma = 30
group = rep(1:11, each = 5)
randomEffect = rnorm(11, sd = rsigma)

x <- -27:27
y <- a * x + b + rnorm(55,0,sd = sigma) + randomEffect[group]
plot(x,y, col = group, pch = 3)

## ---- fig.width=5, fig.height=5------------------------------------------
fit <- lm(y ~ x)
summary(fit)

Call:
lm(formula = y ~ x)

Residuals:
   Min     1Q Median     3Q    Max 
-47.21 -21.53  -4.41  11.95  71.57 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  25.6079     4.0436   6.333 5.32e-08 ***
x             4.5581     0.2547  17.894  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 29.99 on 53 degrees of freedom
Multiple R-squared:  0.858, Adjusted R-squared:  0.8553 
F-statistic: 320.2 on 1 and 53 DF,  p-value: < 2.2e-16
plot(allEffects(fit, partial.residuals = T))
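Note that the simple linear regression ignores the grouping: its residual standard error (about 30) absorbs the group-level variation (rsigma = 30) on top of the observation-level noise (sigma = 10).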

## ---- fig.width=5, fig.height=5------------------------------------------
fit <- lmer(y ~ x + (1|group))
summary(fit)
Linear mixed model fit by REML ['lmerMod']
Formula: y ~ x + (1 | group)

REML criterion at convergence: 438.2

Scaled residuals: 
     Min       1Q   Median       3Q      Max 
-1.81013 -0.60007 -0.07453  0.70328  1.91655 

Random effects:
 Groups   Name        Variance Std.Dev.
 group    (Intercept) 925.64   30.424  
 Residual              89.63    9.467  
Number of obs: 55, groups:  group, 11

Fixed effects:
            Estimate Std. Error t value
(Intercept)  25.6079     9.2617   2.765
x             4.6488     0.4914   9.461

Correlation of Fixed Effects:
  (Intr)
x 0.000 
plot(x,y, col = group,  pch = 3)
for(i in 1:11){
  abline(coef(fit)$group[i,1], coef(fit)$group[i,2], col = i)
}
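The mixed model separates the two variance components and recovers values close to the truth: a group standard deviation of about 30.4 (true value 30) and a residual standard deviation of about 9.5 (true value 10).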

## ------------------------------------------------------------------------
  # 1) Model definition exactly as we created our data 
  modelCode = "
    model{
      
      # Likelihood
      for(i in 1:i.max){
        y[i] ~ dnorm(mu[i],tau)
        mu[i] <- a*x[i] + b
      }

      # Prior distributions
      a ~ dnorm(0,0.001)
      b ~ dnorm(0,0.001)
      tau <- 1/(sigma*sigma)
      sigma ~ dunif(0,100)
    }
  "
  
  # 2) Set up a list that contains all the necessary data (here, including parameters of the prior distribution)
  Data = list(y = y, x = x, i.max = length(y))

  # 3) Specify a function to generate initial values for the parameters
  inits.fn <- function() list(a = rnorm(1), b = rnorm(1), sigma = runif(1,1,100))


## ---- fig.width=7, fig.height=7------------------------------------------
  # Compile the model and run the MCMC for an adaptation (burn-in) phase
  jagsModel <- jags.model(file= textConnection(modelCode), data=Data, inits = inits.fn, n.chains = 3, n.adapt= 1000)
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 55
   Unobserved stochastic nodes: 3
   Total graph size: 230

Initializing model
  # Specify parameters for which posterior samples are saved
  para.names <- c("a","b","sigma")

  # Continue the MCMC runs with sampling
  Samples <- coda.samples(jagsModel, variable.names = para.names, n.iter = 5000)
  
  # Plot the mcmc chain and the posterior sample for p
  plot(Samples)

  dic = dic.samples(jagsModel, n.iter = 5000)
  dic
Mean deviance:  531.3 
penalty 3.12 
Penalized deviance: 534.4 
## ------------------------------------------------------------------------
gelman.diag(Samples)
Potential scale reduction factors:

      Point est. Upper C.I.
a              1          1
b              1          1
sigma          1          1

Multivariate psrf

1
## ------------------------------------------------------------------------
summary(Samples)

Iterations = 1001:6000
Thinning interval = 1 
Number of chains = 3 
Sample size per chain = 5000 

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

        Mean    SD Naive SE Time-series SE
a      4.556 0.261 0.002131       0.002111
b     25.151 4.127 0.033694       0.033694
sigma 30.703 3.038 0.024803       0.033976

2. Quantiles for each variable:

        2.5%    25%    50%   75%  97.5%
a      4.044  4.383  4.554  4.73  5.069
b     16.946 22.427 25.159 27.93 33.239
sigma 25.443 28.538 30.458 32.64 37.230
## ---- fig.width=5, fig.height=5------------------------------------------
plot(x,y)
sampleMatrix <- as.matrix(Samples)
selection <- sample(dim(sampleMatrix)[1], 1000)
for (i in selection) abline(sampleMatrix[i,"b"], sampleMatrix[i,"a"], col = "#11111105")

Alternative: mixed model

## ------------------------------------------------------------------------
  # 1) Model definition exactly as we created our data 
  modelCode = "
    model{
      
      # Likelihood
      for(i in 1:i.max){
        y[i] ~ dnorm(mu[i],tau)
        mu[i] <- a*x[i] + b + r[group[i]]
      }

      # random effect
      for(i in 1:nGroups){
        r[i] ~ dnorm(0,rTau)
      }

      # Prior distributions
      a ~ dnorm(0,0.001)
      b ~ dnorm(0,0.001)

      tau <- 1/(sigma*sigma)
      sigma ~ dunif(0,100)

      rTau <- 1/(rSigma*rSigma)
      rSigma ~ dunif(0,100)
    }
  "
  
  # 2) Set up a list that contains all the necessary data (here, including parameters of the prior distribution)
  Data = list(y = y, x = x, i.max = length(y), group = group, nGroups = 11)

  # 3) Specify a function to generate initial values for the parameters
  inits.fn <- function() list(a = rnorm(1), b = rnorm(1), sigma = runif(1,1,100), rSigma = runif(1,1,100))


## ---- fig.width=7, fig.height=7------------------------------------------
  # Compile the model and run the MCMC for an adaptation (burn-in) phase
  jagsModel <- jags.model(file= textConnection(modelCode), data=Data, inits = inits.fn, n.chains = 3, n.adapt= 1000)
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 55
   Unobserved stochastic nodes: 15
   Total graph size: 300

Initializing model
  # Specify parameters for which posterior samples are saved
  para.names <- c("a","b","sigma", "rSigma")

  # Continue the MCMC runs with sampling
  Samples <- coda.samples(jagsModel, variable.names = para.names, n.iter = 5000)
  
  # Plot the mcmc chain and the posterior sample for p
  plot(Samples)

## ----  fig.width=18, fig.height=18---------------------------------------
R2JagsResults <- jags(data=Data, inits=inits.fn, parameters.to.save=c("a","b","sigma", "rSigma", "r"), n.chains=3, n.iter=5000, model.file=textConnection(modelCode))
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 55
   Unobserved stochastic nodes: 15
   Total graph size: 300

Initializing model
plot(R2JagsResults)

print(R2JagsResults)
Inference for Bugs model at "8", fit using jags,
 3 chains, each with 5000 iterations (first 2500 discarded), n.thin = 2
 n.sims = 3750 iterations saved
         mu.vect sd.vect    2.5%     25%     50%     75%   97.5%  Rhat n.eff
a          4.677   0.538   3.606   4.322   4.674   5.021   5.781 1.001  3800
b         22.937  10.080   2.115  16.456  23.370  29.476  42.337 1.001  3800
r[1]       6.099  17.237 -27.675  -5.190   5.841  17.011  41.579 1.001  3800
r[2]      -9.529  15.256 -39.082 -19.577  -9.758   0.123  21.715 1.001  3800
r[3]      47.123  13.504  21.074  38.305  46.654  55.792  74.498 1.001  3800
r[4]     -35.103  12.021 -58.729 -42.698 -35.340 -27.654 -10.327 1.001  3800
r[5]     -19.663  11.078 -40.985 -26.930 -19.774 -12.518   2.430 1.001  3800
r[6]       4.013  10.884 -16.642  -3.142   3.759  10.816  26.540 1.001  3800
r[7]      59.492  11.339  37.939  52.047  59.294  66.532  83.887 1.001  2700
r[8]      -4.679  11.977 -28.086 -12.724  -4.851   2.696  19.435 1.001  3800
r[9]       4.384  13.402 -21.492  -4.439   4.077  12.763  31.141 1.001  3800
r[10]    -30.631  15.248 -61.410 -40.487 -30.964 -20.650  -0.380 1.001  3800
r[11]      7.457  17.124 -27.732  -3.390   7.378  18.913  40.703 1.001  3800
rSigma    34.947   9.763  21.354  28.142  33.184  39.484  59.266 1.001  3800
sigma      9.760   1.096   7.902   8.986   9.654  10.408  12.220 1.003   820
deviance 405.020   5.778 395.856 400.952 404.168 408.333 418.513 1.002  1100

For each parameter, n.eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor (at convergence, Rhat=1).

DIC info (using the rule, pD = var(deviance)/2)
pD = 16.7 and DIC = 421.7
DIC is an estimate of expected predictive error (lower deviance is better).
dic = dic.samples(jagsModel, n.iter = 5000)
dic
Mean deviance:  405 
penalty 12.78 
Penalized deviance: 417.8 
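Comparing the penalized deviance of the two models (534.4 for the simple regression vs. 417.8 here), DIC clearly supports the mixed model, as it should, given that the data were simulated with a group-level random effect.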
## ---- fig.width=14, fig.height=14, eval= F-------------------------------
## R2JagsCoda <- as.mcmc(R2JagsResults)
## plot(R2JagsCoda)
## summary(R2JagsCoda)