---
title: "Comparing posterior predictions"
output:
  rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Comparing posterior predictions}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
bibliography: bayesnec.bib
---
# Overview
With `bayesnec` we have included a function that allows bootstrapped comparisons of posterior predictions. The main focus here is to showcase how the user can fit several different models with `bnec` and compare differences in the posterior predictions across these fits, either for individual toxicity estimates (e.g. *NEC*, *N(S)EC*, *NSEC* or *ECx*) or across a range of predictor values. Below we demonstrate usage of `compare_posterior` for objects of class `bayesnecfit` and `bayesmanecfit`. In this example we compare different types of models and model sets using a single dataset. However, the intent of this function is to allow comparison across different datasets that might represent, for example, different levels of a fixed factor covariate.

At this time `bnec` does not allow inclusion of an interaction with a fixed factor. Including an interaction term within each of the non-linear models implemented in `bayesnec` is relatively straightforward, and may be introduced in future releases. However, in many cases the functional form of the response may change with different levels of a given factor. The substantial complexity of defining all possible non-linear model combinations at each factor level means it is unlikely this could feasibly be implemented in `bayesnec` in the short term. In the meantime, the greatest flexibility in the functional form of individual model fits can be obtained by fitting models independently to the data within each factor level.
To run this vignette, we will also need some additional packages:
```r
library(ggplot2)
```
## Comparing posterior toxicity values
```r
library(brms)
library(bayesnec)

# Simulate data from a smooth four-parameter logistic (ecx-type) curve
make_ecx_data <- function(top, bot, ec50, beta, x) {
  top + (bot - top) / (1 + exp((ec50 - x) * exp(beta)))
}
x <- seq(0, 10, length = 12)
y <- make_ecx_data(x = x, bot = 0, top = 1, beta = 0.5, ec50 = 5)

set.seed(333)
dat <- data.frame(x = rep(x, 15), y = rnorm(15 * length(x), y, 0.2))

# Fit the complete set of "decline" models
exmp <- bnec(y ~ crf(x, model = "decline"), data = dat, refresh = 0,
             open_progress = FALSE)
```
```r
class(exmp)
#> [1] "bayesmanecfit" "bnecfit"
```
This call fits all models that are suitable for modelling Gaussian-distributed response data, excluding all of the hormesis models, which we are not considering here. Now that we have our example fit, we can use this to make different model sets, purely for the purposes of illustrating `compare_posterior` in `bayesnec` and highlighting the rich information that is contained in the Bayesian posterior draws.
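To check exactly which models were retained, we can list them. A quick sketch, assuming the individual fits are stored in the `mod_fits` element of the `bayesmanecfit` object:

```r
# Names of the individual models included in the averaged fit
names(exmp$mod_fits)
```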
We can pull out the **nec** models and the **ecx** models separately, creating two alternative fits of these data that we can compare with each other, as well as with the original **all** model fit.
```r
exmp_nec <- pull_out(exmp, model = "nec")
exmp_ecx <- pull_out(exmp, model = "ecx")
```
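Because each subset still contains multiple models, both extracted objects remain model-averaged fits; we can confirm this by checking the class (shown here for the **nec** subset):

```r
class(exmp_nec)
#> [1] "bayesmanecfit" "bnecfit"
```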
Now we have three different averaged model fits, all of class `bayesmanecfit` in this case (because each contains multiple fits). We can compare their posterior estimates of the *EC~10~* values using `compare_posterior`.
```r
post_comp <- compare_posterior(list("all" = exmp, "ecx" = exmp_ecx,
                                    "nec" = exmp_nec),
                               comparison = "ecx", ecx_val = 10)
names(post_comp)
#> [1] "posterior_list" "posterior_data" "diff_list" "diff_data" "prob_diff"
```
The `compare_posterior` function outputs several elements in a named list. This includes the **posterior_data** for each model in the comparison, collated in a single `data.frame`, which we can use to draw a `geom_density` plot of the posterior estimates so they can be compared visually.
```r
ggplot(data = post_comp$posterior_data, mapping = aes(x = value)) +
  geom_density(mapping = aes(group = model, colour = model, fill = model),
               alpha = 0.3) +
  theme_classic()
```
From this you can see that the *EC~10~* estimates are very similar for the **ecx** and **all** model sets. This is because the **ecx** model types dominate the model weights in the **all** fit (see `wi` in `exmp$mod_stats`). The *EC~10~* estimate is slightly lower (more conservative) for the **ecx**-based models.
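To see those weights directly, we can inspect the model statistics; a quick check, assuming `mod_stats` is returned as a `data.frame` with `model` and `wi` columns as referenced above:

```r
# Model weights; the ecx-type models should carry most of the weight
# because the data were simulated from a smooth ecx curve
exmp$mod_stats[, c("model", "wi")]
```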
The `data.frame` `diff_data` can be used to make a similar plot, but specifically for the differences among models.
```r
ggplot(data = post_comp$diff_data, mapping = aes(x = diff)) +
  geom_density(mapping = aes(group = comparison, colour = comparison,
                             fill = comparison), alpha = 0.3) +
  theme_classic()
```
This shows the differences among the three estimates. There is no difference between the **ecx** and **all** estimates (the probability density overlaps zero; red shaded curve). This is because for this example the simulated data are from a smooth **ecx**-type curve, and thus those models have high weight. The **nec**-based *EC~10~* estimates are greater than the **ecx**-based estimates in this case.
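These differences can also be summarised numerically. A minimal sketch using base R and the `diff` and `comparison` columns of `diff_data` shown in the plot above:

```r
# Median and 95% quantile interval of each pairwise posterior difference
aggregate(diff ~ comparison, data = post_comp$diff_data,
          FUN = function(d) quantile(d, probs = c(0.025, 0.5, 0.975)))
```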
We can formally test the probability that the toxicity estimate for one model set is greater than the other using posterior differencing. This is contained in the `compare_posterior` output as `prob_diff`. Here you can see there is a 62% chance that **all** is greater than **ecx**, and a 27% chance that **all** is greater than **nec**. There is only a 14% chance that **ecx** is greater than **nec**, and conversely an 86% chance that **nec** is greater than **ecx**.
```r
post_comp$prob_diff
#> comparison prob
#> 1 all-ecx 0.6160770
#> 2 all-nec 0.2700338
#> 3 ecx-nec 0.1443930
```
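These probabilities are the proportion of (bootstrapped) posterior difference draws greater than zero. As a sketch, assuming the elements of `posterior_list` are equal-length numeric vectors of draws, the **all** versus **ecx** row could be reproduced along the lines of:

```r
# Proportion of draws in which the "all" EC10 exceeds the "ecx" EC10;
# this should approximately match the all-ecx row of prob_diff above
mean(unlist(post_comp$posterior_list$all) >
       unlist(post_comp$posterior_list$ecx))
```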
## Comparing posterior fitted values
The user can also compare posterior fitted values across the full range of x values, using `comparison = "fitted"`.
```r
post_comp_fitted <- compare_posterior(list("all" = exmp, "ecx" = exmp_ecx,
                                           "nec" = exmp_nec),
                                      comparison = "fitted")
```
In the case of `comparison = "fitted"`, most of the elements returned by `compare_posterior` are of class `data.frame`, containing summary values of the posteriors, differences, and probabilities for each value of the predictor, for each model or model comparison.
```r
head(post_comp_fitted$posterior_data)
#> model x Estimate Q2.5 Q97.5
#> 1 all 0.09090909 1.020778 0.9663590 1.084920
#> 2 all 0.29313544 1.020645 0.9663378 1.083134
#> 3 all 0.49536178 1.020406 0.9663378 1.080866
#> 4 all 0.69758813 1.020050 0.9662984 1.078648
#> 5 all 0.89981447 1.019724 0.9662984 1.076251
#> 6 all 1.10204082 1.019195 0.9662886 1.073099
head(post_comp_fitted$diff_data)
#> comparison diff.Estimate diff.Q2.5 diff.Q97.5 x
#> 1 all-ecx -0.006377007 -0.08644327 0.07743305 0.09090909
#> 2 all-ecx -0.006358821 -0.08530609 0.07573855 0.29313544
#> 3 all-ecx -0.006258424 -0.08354907 0.07444896 0.49536178
#> 4 all-ecx -0.006251275 -0.08176871 0.07254152 0.69758813
#> 5 all-ecx -0.005755167 -0.08009080 0.07157816 0.89981447
#> 6 all-ecx -0.005438621 -0.07845105 0.06951660 1.10204082
```
Using the collated `posterior_data` we can plot the predicted curves with credible bounds for each of the input model sets. This shows clearly that the **ecx** model set begins to decline earlier than the **nec** set, which is flat prior to the **nec** step point and then declines more rapidly.
```r
ggplot(data = post_comp_fitted$posterior_data) +
  geom_line(mapping = aes(x = x, y = Estimate, color = model),
            linewidth = 0.5) +
  geom_ribbon(mapping = aes(x = x, ymin = Q2.5, ymax = Q97.5, fill = model),
              alpha = 0.3) +
  labs(x = "Concentration", y = "Posterior estimate") +
  theme_classic()
```
We can plot the differences between pairs of model sets in the list by plotting `diff.Estimate` from `diff_data`, using colour to distinguish the comparisons. This plot highlights where the differences among these model sets are greatest, and explains the differences we see above in the *EC~10~* toxicity estimates. The **ecx** and **all** model sets are relatively similar across the entire range of concentrations (red band overlaps zero). The green band is the difference between **nec** and **all** (specifically **all**-**nec**), and the blue band is the difference between **nec** and **ecx** (specifically **ecx**-**nec**). You can see from the green and blue bands that the **nec** set has slightly lower estimates than **all** and **ecx** at low to moderate predictor values; the **nec**-based curve then crosses the other two, predicting higher values of the response at around a concentration of 4-5, before dropping rapidly and producing lower predictions than the other curves at around 5.
```r
ggplot(data = post_comp_fitted$diff_data) +
  geom_line(mapping = aes(x = x, y = diff.Estimate, color = comparison),
            linewidth = 0.5) +
  geom_ribbon(mapping = aes(x = x, ymin = diff.Q2.5, ymax = diff.Q97.5,
                            fill = comparison), alpha = 0.3) +
  labs(x = "Concentration", y = "Posterior difference") +
  theme_classic()
```
And finally we can plot the probability that one model set is greater than the other by plotting `prob` from `prob_diff`. The pattern of this plot is identical to the plot of differences, but the y-axis now shows the probability of these differences. The red line hovers around 0.5, clearly indicating the lack of any significant difference between the **ecx** and **all** model sets at any point along the predictor axis. The green and blue curves pass through 0.5 at several points, meaning there are parts of the curve where there is no significant difference between the **nec** and the **ecx** or **all** model-set predictions. The greatest probability of difference among these curves is between predictor values of ~3 and ~4, where the probability trends towards low values (**nec** is higher than **ecx**), and at around 5, where the probability is very high (**ecx** is higher than **nec**, following the rapid drop in the **nec** predictions). These are the two points where the **ecx** set deviates most from the **nec** set.
```r
ggplot(data = post_comp_fitted$prob_diff) +
  geom_line(mapping = aes(x = x, y = prob, color = comparison),
            linewidth = 0.5) +
  labs(x = "Concentration", y = "Probability of difference") +
  theme_classic()
```
Here we use the difference between the **ecx** and **nec** model sets purely to showcase the functionality of `compare_posterior` and highlight some of the strengths of using a Bayesian approach. These differences are expected in this case, because the simulated data are from a smooth **ecx** function. The **nec** models fit a break-point model that has a single mean value up to the **nec**, followed by a relatively sharp decline. When fitted to these smooth curves the **nec** models do not fit the data well and produce inflated (less conservative) estimates of the *EC~10~* value. Fortunately, the model-averaging approach solves this issue by assigning very low weight to the **nec**-based models, so the **all** model set is indistinguishable from the **ecx** set, which contains the underlying true generating model in this case. As such, this is not really an interesting example, other than to illustrate the behaviour of **nec** model types in this setting.
The `compare_posterior` function was actually developed with the intention of comparing model sets based on different data, to statistically assess differences in either the toxicity estimates or the entire fitted curve among factors of interest. Examples would include assessing changes in toxicity across different WET tests through time, or comparing different levels of a factor treatment variable (such as climate-change scenarios) to examine how these interact with concentration to affect toxicity.
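As a sketch of that intended workflow, using hypothetical objects `fit_ambient` and `fit_elevated` (each a separate `bnec` fit to the data from one level of a climate-scenario factor):

```r
# Hypothetical comparison of EC50 estimates between two scenarios fitted
# independently with bnec; fit_ambient and fit_elevated are placeholders,
# not objects created in this vignette
scenario_comp <- compare_posterior(
  list("ambient" = fit_ambient, "elevated" = fit_elevated),
  comparison = "ecx", ecx_val = 50
)
scenario_comp$prob_diff
```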