---
title: "Power Analysis in R"
author: "Martin Schweinberger"
date: "`r format(Sys.time(), '%Y-%m-%d')`"
output: bookdown::html_document2
bibliography: bibliography.bib
link-citations: yes
---

```{r uq1, echo=F, fig.cap="", message=FALSE, warning=FALSE, out.width='100%'}
knitr::include_graphics("https://slcladal.github.io/images/uq1.jpg")
```

# Introduction{-}

This tutorial introduces power analysis using R. Power analysis is a method primarily used to determine the appropriate sample size for empirical studies.

```{r diff, echo=FALSE, out.width= "15%", out.extra='style="float:right; padding:10px"'}
knitr::include_graphics("https://slcladal.github.io/images/yr_chili.jpg")
```

This tutorial is aimed at intermediate and advanced users of R and showcases how to perform power analyses for basic inferential tests using the `pwr` package [@pwr] and for mixed-effects models (generated with the `lme4` package) using the `simr` package [@simr] in R. The aim is not to provide a fully-fledged analysis but rather to show and exemplify a handy method for estimating the power of experimental and observational designs and how to implement this in R.

A list of highly recommendable papers discussing research on effect sizes and power analyses of linguistic data can be found [here](http://crr.ugent.be/archives/2014). Here are some main findings from these papers as stated in @brysbaert2018power and on the [http://crr.ugent.be](http://crr.ugent.be) website:

* In experimental psychology we can do replicable research with 20 participants or fewer if we have multiple observations per participant per condition, because we can turn rather small differences between conditions into effect sizes of d > .8 by averaging across observations.

* The more sobering finding is that the required number of observations is higher than the numbers currently used (which is why we run underpowered studies). The ballpark figure we propose for RT experiments with repeated measures is 1600 observations per condition (e.g., 40 participants and 40 stimuli per condition).

* The 1600 observations we propose apply when you start a new line of research and don't know what to expect.

* Standardized effect sizes in analyses over participants (e.g., Cohen's d) depend on the number of stimuli that were presented. Hence, you must include the same number of observations per condition if you want to replicate the results. The fact that the effect size depends on the number of stimuli also has implications for meta-analyses.

The entire R Notebook for the tutorial can be downloaded [**here**](https://slcladal.github.io/content/pwr.Rmd). If you want to render the R Notebook on your machine, i.e. knit the document to html or a pdf, you need to make sure that you have R and RStudio installed, and you also need to download the [**bibliography file**](https://slcladal.github.io/content/bibliography.bib) and store it in the same folder where you store the Rmd file.


The basis for the present tutorial is @green2016simr (which you can find [here](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.12504)). @green2016simr is a highly recommendable and thorough tutorial on performing power analysis in R. Recommendable literature on this topic are, e.g. @arnold2011simulation and @johnson2015power and [this tutorial](https://www.journalofcognition.org/articles/10.5334/joc.10/). ## What determines if you find an effect?{-} To explore this issue, let us have a look at some distributions of samples with varying features sampled from either one or two distributions. Let us start with the distribution of two samples (N = 30) sampled from the same population. ```{r same, echo = F, warning=F, message=F} distplot <- function(mean1, mean2, sd1, n, pop = "two different populations", d = 0, effect = "no", ylim = 0.06, seed = 123){ options(scipen = 999) require(tidyverse) require(DescTools) set.seed(seed) data.frame(time = c(rnorm(n, mean1, sd1), rnorm(n, mean2, sd1))) %>% dplyr::mutate(group = c(rep("Norwegian", n), rep("English", n))) %>% dplyr::mutate(group = factor(group)) %>% group_by(group) %>% mutate(mean = mean(time), cil = DescTools::MeanCI(time, conf.level=0.95)[2], ciu = DescTools::MeanCI(time, conf.level=0.95)[3]) %>% ungroup() %>% mutate(t = t.test(time ~ group, conf.level=0.95)[1], df = t.test(time ~ group, conf.level=0.95)[2], p = t.test(time ~ group, conf.level=0.95)[3]) %>% rowwise() %>% mutate(ttest = paste0("t: ", round(t, 2), ", df: ", round(df, 1), ", p-value: ", round(p, 4))) %>% select(-t, -df, -p) -> pdat ggplot(pdat, aes(x = time, group = group, color = group, linetype = group)) + geom_density(aes(alpha = .9)) + theme(legend.position = "none") + geom_point(aes(x = mean, y = 0.05, group= group)) + geom_errorbarh(aes(xmin = cil, xmax = ciu, y = 0.05, height = .005)) + theme_bw() + theme(legend.position = "none") + coord_cartesian(xlim = c(60, 140), ylim = c(0, ylim)) + ggplot2::annotate("text", x = 120, y = 0.05, label = unique(pdat$ttest)) + labs(title = paste0("2 groups (N = ", n, " sampled from ", pop, " (with means and CIs)\n", effect, " effect (mean(s): ", mean1, "; ", mean2, ", sd: ", sd1, ", Cohen's d: ", d, ")!"), x = "", y = "Probability") } ``` ```{r displot1, echo = F, warning=F, message=F} distplot(mean1 = 100, mean2 = 100, sd1 = 10, n = 30, pop = "the same population") ``` The means of the samples are very similar and a t-test confirms that we can assume that the samples are drawn from the same population (as the p-value is greater than .05). Let us now draw another two samples (N = 30) but from different populations where the effect of group is weak (the population difference is small). ```{r weak, echo = F, warning=F, message=F} distplot(mean1 = 99, mean2 = 101, sd1 = 10, n = 30, d = .2, effect = "weak", seed = 111) ``` Let us briefly check if the effect size, Cohen's *d* of 0.2, is correct (for this, we increase the sample size dramatically to get very accurate estimates). If the above effect size is correct (Cohen's *d* = 0.2), then the reported effect size should be 0.2 (or -0.2). This is so because Cohen's *d* represents distances between group means measured in standard deviations (in our example, the standard deviation is 10 and the difference between the means is 2, i.e., 20 percent of a standard deviation; which is equal to a Cohen's *d* value of 0.2). 
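Expressed as a formula, Cohen's *d* is simply the difference between the two group means divided by the (pooled) standard deviation; with the values used above, this gives:

$$d = \frac{M_{1} - M_{2}}{SD_{pooled}} = \frac{101 - 99}{10} = 0.2$$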
Let's check the effect size now (we will only do this for this distribution, but you can easily check for yourself that the effect sizes provided in the plot headers below are correct by adapting the code chunk below, using the numbers provided in the plot headers).

```{r efsize, warning=F, message=F}
# generate two vectors with numbers using the means and sd from above
# means = 101, 99, sd = 10
num1 <- rnorm(1000000, mean = 101, sd = 10)
num2 <- rnorm(1000000, mean = 99, sd = 10)
# perform t-test
tt <- t.test(num1, num2, alternative = "two.sided")
# extract effect size
effectsize::effectsize(tt)
```

Based on the t-test results in the distribution plot, we would assume that the data represent two samples from the same population (which is false) because the p-value is higher than .05. This means that the sample size is insufficient, i.e. the design *does not have enough power*, to show that the samples actually come from two different populations given the variability in the data and the size of the effect (which is weak).

What happens if we increase the effect size to medium? This means that we again draw two samples from two different populations, but the difference between the populations is a bit larger (group has a medium effect, Cohen's *d* = .5).

```{r medium, echo = F, warning=F, message=F}
distplot(mean1 = 97.5, mean2 = 102.5, sd1 = 10, n = 30, d = 0.5, effect = "medium", seed = 222)
```

Now, let's have a look at the distribution of two different groups (group has a strong effect, Cohen's *d* = .8).

```{r strong, echo = F, warning=F, message=F}
distplot(mean1 = 96, mean2 = 104, sd1 = 10, n = 30, d = 0.8, effect = "strong", seed = 444, ylim = 0.08)
```

Now, let's have a look at the distribution of two different groups (group has a very strong effect, Cohen's *d* = 1.2).

```{r vstrong, echo = F, warning=F, message=F}
distplot(mean1 = 94, mean2 = 106, sd1 = 10, n = 30, d = 1.2, effect = "very strong", seed = 555)
```

**If variability and sample size remain constant, larger effects are easier to detect than smaller effects!**

## Sample size{-}

Let us now look at the role of sample size. We start again with a medium effect and N = 30 per group.

```{r n30, echo = F, warning=F, message=F}
distplot(mean1 = 97.5, mean2 = 102.5, sd1 = 10, n = 30, d = 0.5, effect = "medium", seed = 555)
```

Let us now increase the sample size to N = 50.

```{r n50, echo = F, warning=F, message=F}
distplot(mean1 = 97.5, mean2 = 102.5, sd1 = 10, n = 50, d = 0.5, effect = "medium", seed = 888)
```

**If variability and effect size remain constant, effects are easier to detect with increasing sample size!**

## Variability{-}

Let us now look at variability. We again start with a medium effect, N = 30 per group, and a standard deviation of 10.

```{r sd10, echo = F, warning=F, message=F}
distplot(mean1 = 97.5, mean2 = 102.5, sd1 = 10, n = 30, d = 0.5, effect = "medium", seed = 888)
```

Let's decrease the variability to sd = 5.

```{r sd5, echo = F, warning=F, message=F}
distplot(mean1 = 97.5, mean2 = 102.5, sd1 = 5, n = 30, d = 0.5, effect = "medium", ylim = 0.125, seed = 999)
```

**If the sample and effect size remain constant, effects are easier to detect with decreasing variability!**

**In summary**, there are three main factors that determine if a model finds an effect.
The accuracy (i.e., the probability of finding an effect) depends on:

* the size of the effect (bigger effects are easier to detect)
* the variability of the effect (less variability makes it easier to detect an effect), and
* the sample size (the bigger the sample size, the easier it is to detect an effect);
  + number of subjects/participants
  + number of items/questions
  + number of observations per item within subjects/participants

Now, if a) we are dealing with a very big effect, then we need only few participants and few items to accurately find this effect. Or b) if we are dealing with an effect that has low variability (it is observable for all subjects with the same strength), then we need only few participants and few items to accurately find this effect.

Before we conduct a study, we should figure out what sample we need to detect a small/medium effect with medium variability so that our model is sufficient to detect this kind of effect. In order to do this, we would generate a data set that mirrors the kind of data that we expect to get (with the properties that we expect to get). We can then fit a model to this data and check if a model would be able to detect the expected effect. However, because a single model does not tell us that much (it could simply be luck that it happened to find the effect), we run many different models on variations of the data and see how many of them find the effect (see the short simulation sketch at the end of this section). As a general rule of thumb, we want a data set that allows a model to find a medium sized effect with at least an accuracy of 80 percent [@field2007making].

In the following, we will go through how to determine what sample size we need for an example analysis.

## Preparation and session set up{-}

This tutorial is based on R. If you have not installed R or are new to it, you will find an introduction to and more information on how to use R [here](https://slcladal.github.io/intror.html). For this tutorial, we need to install certain *packages* into the R *library* on your computer so that the scripts shown below are executed without errors. Before turning to the code below, please install the packages by running the code below this paragraph. If you have already installed the packages mentioned below, then you can skip ahead and ignore this section. To install the necessary packages, simply run the following code - it may take some time (between 1 and 5 minutes), so you do not need to worry if it takes a while.

```{r prep1, echo=T, eval = F, message=FALSE, warning=FALSE}
# install libraries
install.packages(c("tidyverse", "pwr", "lme4", "sjPlot", "simr", "effectsize"))
install.packages(c("DT", "knitr", "flextable"))
install.packages("DescTools")
install.packages("pacman")
# install klippy for copy-to-clipboard button in code chunks
install.packages("remotes")
remotes::install_github("rlesur/klippy")
```

Now that we have installed the packages, we can activate them as shown below.

```{r prep2, message=FALSE}
# set options
options(stringsAsFactors = F)          # no automatic conversion of factors
options("scipen" = 100, "digits" = 4)  # suppress math annotation
options(max.print=1000)                # print max 1000 results
# load packages
library(tidyverse)
library(pwr)
library(lme4)
library(sjPlot)
library(simr)
library(effectsize)
library(DT)
library(knitr)
library(flextable)
pacman::p_load_gh("trinker/entity")
# activate klippy for copy-to-clipboard button
klippy::klippy()
```

Once you have installed R and RStudio and initiated the session by executing the code shown above, you are good to go.
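To make the simulation logic described above a little more concrete, the chunk below is a minimal, hand-rolled sketch (the values are assumed purely for illustration): it repeatedly draws two groups of 30 observations whose means differ by half a standard deviation (Cohen's *d* = 0.5), runs a t-test on each draw, and reports the proportion of significant results, i.e. the simulated power.

```{r powsketch, message=F, warning=F}
# minimal simulation-based power estimate (assumed design: 2 groups,
# n = 30 per group, means 100 vs 105, sd = 10, i.e. Cohen's d = 0.5)
set.seed(123)
nsim <- 1000
pvals <- replicate(nsim, {
  g1 <- rnorm(30, mean = 100, sd = 10)
  g2 <- rnorm(30, mean = 105, sd = 10)
  t.test(g1, g2)$p.value
})
# power = proportion of simulations with p < .05
mean(pvals < .05)
```

For this design, the estimate comes out at around 50 percent, well below the 80 percent threshold. The `pwr` and `simr` functions used in the remainder of this tutorial do essentially the same thing, either analytically or via simulation, in a far more convenient way.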
# Basic Power Analysis{-}

Let's start with a simple power analysis to see how power analyses work for simpler or basic statistical tests such as t-tests, $\chi^2$-tests, or linear regressions. The `pwr` package [@pwr] implements power analysis as outlined by @cohen1988statistical and allows you to perform power analyses for the following tests (selection):

* balanced one-way ANOVA (`pwr.anova.test`)
* chi-square test (`pwr.chisq.test`)
* correlation (`pwr.r.test`)
* general linear model (`pwr.f2.test`)
* one-sample, two-sample, and paired t-tests (`pwr.t.test`)
* two-sample t-test with unequal N (`pwr.t2n.test`)

For each of these functions, you enter three of the four parameters (effect size, sample size, significance level, power) and the fourth is calculated. So how does this work? Have a look at the code chunk below.

## One-way ANOVA{-}

Let's check how to calculate the necessary sample size for each group for a one-way ANOVA that compares 5 groups (`k`), has a power of 0.80 (80 percent), assumes a moderate effect size (*f* = 0.25), and uses a significance level of 0.05 (5 percent).

```{r bp01, warning=F, message=F}
# load package
library(pwr)
# calculate minimal sample size
pwr.anova.test(k=5,            # 5 groups are compared
               f=.25,          # moderate effect size
               sig.level=.05,  # alpha/sig. level = .05
               power=.8)       # confint./power = .8
```

In this case, the minimum number of participants in each group would be 40.

Let's check how we could calculate the power if we had already collected data (with 30 participants in each group) and we want to report the power of our analysis (and let us assume that the effect size was medium).

```{r bp02, warning=F, message=F}
# calculate power
pwr.anova.test(k=5,            # 5 groups are compared
               f=.25,          # moderate effect size
               sig.level=.05,  # alpha/sig. level = .05
               n=30)           # n of participants
```

In this case our analysis would only have a power of 66.8 percent. This means that we would detect a medium-sized effect in only 66.8 percent of cases (which is considered insufficient).

## Power Analysis for GLMs{-}

When determining the power of a generalized linear model, we need to provide

* the degrees of freedom for the numerator (`u`)
* the degrees of freedom for the denominator (`v`)
* the effect size (the estimate of the intercept and the slope/estimates of the predictors)
* the level of significance (i.e., the type I error probability)

The values of the parameters in the example below are adapted from the fixed-effects regression example that was used to analyze different teaching styles (see [here](https://slcladal.github.io/regression.html#Example_2:_Teaching_Styles)). The effect size used here is $f^2$, which can be categorized as follows [see @cohen1988statistical]: small $≥$ 0.02, medium $≥$ 0.15, and large $≥$ 0.35.

So in order to determine if the data is sufficient to find a weak effect when comparing 2 groups with 30 participants in each group (df~numerator~: 2-1; df~denominator~: (30-1) + (30-1)) and a significance level of $\alpha$ = .05, we can use the following code.

```{r ex1}
# general linear model
pwrglm <- pwr.f2.test(u = 1,
                      v = 58,
                      f2 = .02,
                      sig.level = 0.05)
# inspect results
pwrglm
```

The results show that the regression analyses used to evaluate the effectiveness of different teaching styles only had a power of `r pwrglm$power`.

## Power Analysis for t-tests{-}

For t-tests (both paired and two-sample t-tests), the effect size measure is Cohen's $d$, which can be categorized as follows [see @cohen1988statistical]: small $≥$ 0.2, medium $≥$ 0.5, and large $≥$ 0.8.
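If you do not want to memorize these conventional benchmarks, the `pwr` package also provides the helper function `cohen.ES`, which returns the conventional effect size value for a given test and category (the call below is just an illustration):

```{r esconv, warning=F, message=F}
# conventional effect size benchmarks after Cohen (1988):
# a small effect in a t-test corresponds to d = 0.2
cohen.ES(test = "t", size = "small")
```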
**Paired t-test**

So in order to determine if the data is sufficient to find a weak effect when comparing the pre- and post-test results of a group with 30 participants, evaluating an undirected hypothesis (thus the *two-tailed* approach) with a significance level of $\alpha$ = .05, we can use the following code.

```{r ex2}
# paired t-test
pwrpt <- pwr.t.test(d=0.2,
                    n=30,
                    sig.level=0.05,
                    type="paired",
                    alternative="two.sided")
# inspect
pwrpt
```

Given the data, a weak effect in this design can only be detected with a probability of `r pwrpt$power`. This means that we would need to substantively increase the sample size to detect a small effect with this design.

**Two-sample (independent) t-test**

The power in a similar scenario but with two different groups (with 25 and 35 subjects) can be determined as follows (in this case we test a directed hypothesis that checks if the intervention leads to an increase in the outcome - hence the *greater* in the `alternative` argument):

```{r ex3}
# independent t-test
pwr2t <- pwr.t2n.test(d=0.2,
                      n1=35,
                      n2 = 25,
                      sig.level=0.05,
                      alternative="greater")
# inspect
pwr2t
```

Given the data, a weak effect in this design can only be detected with a probability of `r pwr2t$power`. This means that we would need to substantively increase the sample size to detect a small effect with this design.

## Power Analysis for $\chi^2$-tests{-}

Let us now check the power of a $\chi^2$-test. For $\chi^2$-tests, the effect size measure used in the power analysis is $w$, which can be categorized as follows [see @cohen1988statistical]: small $≥$ 0.1, medium $≥$ 0.3, and large $≥$ 0.5. Also, keep in mind that for $\chi^2$-tests, at least 80 percent of cells need to have expected values $≥$ 5 and none of the cells should have expected values smaller than 1 [see @bortz109verteilungsfreie].

```{r ex4}
# x2-test
pwrx2 <- pwr.chisq.test(w=0.2,
                        N = 25,          # total number of observations
                        df = 1,
                        sig.level=0.05)
# inspect
pwrx2
```

Given the data, a weak effect in this design can only be detected with a probability of `r pwrx2$power`. This means that we would need to substantively increase the sample size to detect a small effect with this design.
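In practice, you rarely know $w$ in advance; what you typically have is an idea of the cell proportions you expect. The `pwr` package provides `ES.w2`, which converts a two-way table of expected cell proportions into $w$, and the result can then be passed on to `pwr.chisq.test`. The proportions below are made up purely for illustration:

```{r esw2, warning=F, message=F}
# hypothetical table of expected cell proportions (the cells must sum to 1)
probs <- matrix(c(0.20, 0.30,
                  0.15, 0.35), byrow = TRUE, ncol = 2)
# convert the expected proportions into the effect size w
w_est <- ES.w2(probs)
w_est
# number of observations needed to detect this effect with 80 percent power
pwr.chisq.test(w = w_est, df = 1, sig.level = 0.05, power = 0.8)
```

For this particular table, $w$ comes out at roughly 0.1 (a small association), which is why the required sample size is substantial.

***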

EXERCISE TIME!

` 1. For the tests above (ANOVA, GLM, paired and independent t-test, and the $\chi^2$-test), how many participants would you need to achieve a power of 80 percent?
Here are the commands we used to help you:
* ANOVA: `pwr.anova.test(k=5, f=.25, sig.level=.05, n=30) ` * GLM: `pwr.f2.test(u = 1, v = 58, f2 = .02, sig.level = 0.05)` * paired t-test: `pwr.t.test(d=0.2, n=30, sig.level=0.05, type="paired", alternative="two.sided")` * independent t-test: `pwr.t2n.test(d=0.2,n1=35, n2 = 25, sig.level=0.05, alternative="greater")` * $\chi^2$-test: `pwr.chisq.test(w=0.2, N = 25, df = 1, sig.level=0.05)`
Answer

```{r ex1_2, class.source = NULL, eval = T}
pwr.anova.test(k=5, f=.25, sig.level=.05, power=.8)
pwr.f2.test(u = 1, f2 = .02, sig.level = 0.05, power = .8)
pwr.t.test(d=0.2, power = .8, sig.level=0.05, type="paired", alternative="two.sided")
# for the independent t-test we check n values: n1 = n2 = 310 gives a power of about .8
pwr.t2n.test(d=0.2, n1=310, n2 = 310, sig.level=0.05, alternative="greater")
pwr.chisq.test(w=0.2, power = .8, df = 1, sig.level=0.05)
```
`

***

## Excursion: Language is never ever random{-}

Adam @kilgarriff2005language made the point that *Language is never, ever, ever, random*. Here is part of the abstract:

> Language users never choose words randomly, and language is essentially non-random. Statistical hypothesis testing uses a null hypothesis, which posits randomness. Hence, when we look at linguistic phenomena in corpora, the null hypothesis will never be true. Moreover, where there is enough data, we shall (almost) always be able to establish that it is not true. In corpus studies, we frequently do have enough data, so the fact that a relation between two phenomena is demonstrably non-random, does not support the inference that it is not arbitrary. We present experimental evidence of how arbitrary associations between word frequencies and corpora are systematically non-random.

This is a problem because if we are using ever bigger corpora, even the tiniest of differences will become significant. Have a look at the following example.

```{r random01}
# first let us generate some data
freqs1 <- matrix(c(10, 28, 30, 92), byrow = T, ncol = 2)
# inspect data
freqs1
```

Now, we perform a simple $\chi^2$-test and extract the effect size.

```{r random02}
# x2-test
x21 <- chisq.test(freqs1)
# inspect results
x21
# effect size
effectsize::effectsize(x21)
```

The output shows that the difference is not significant and that the effect size is extremely(!) small. Now, let us simply increase the sample size by a factor of 1000, perform a $\chi^2$-test on this extended data set, and again extract the effect size.

```{r random03}
# generate data (all counts increased by a factor of 1000)
freqs2 <- matrix(c(10000, 28000, 30000, 92000), byrow = T, ncol = 2)
# x2-test
x22 <- chisq.test(freqs2)
# inspect results
x22
# effect size
effectsize::effectsize(x22)
```

The output shows that the difference is now reported as significant although the effect size has remained the same (and is still negligible). For this reason, in a response to Kilgarriff, @gries2005kilgariff suggested always reporting effect sizes in addition to significance so that the reader gets an understanding of whether a significant effect is actually meaningful. This is relevant here because we have focused on the power for finding small effects as these can be considered the smallest *meaningful* effects. However, even tiny effects can be meaningful under certain circumstances - as such, focusing on small effects is only a rule of thumb, should be taken with a pinch of salt, and should be re-evaluated in the context of the study at hand.

# Advanced Power Analysis{-}

In this section, we will perform power analyses for mixed-effects models (both linear and generalized linear mixed models). The basis for this section is @green2016simr (which you can find [here](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.12504)). @green2016simr is a highly recommendable and thorough tutorial on performing power analysis in R. Further recommendable literature on this topic includes @arnold2011simulation, @johnson2015power, and [this tutorial](https://www.journalofcognition.org/articles/10.5334/joc.10/).

Before we conduct a study, we should figure out what sample we need to detect a small/medium effect with medium variability so that our model is sufficient to detect this kind of effect. In order to do this, we would generate a data set that mirrors the kind of data that we expect to get (with the properties that we expect to get). We can then fit a model to this data and check if a model would be able to detect the expected effect.
However, because a single model does not tell us that much (it could simply be luck that it happened to find the effect), we run many different models on variations of the data and see how many of them find the effect. As a general rule of thumb, we want a data set that allows a model to find a medium sized effect with at least an accuracy of 80 percent [@field2007making].

In the following, we will go through how to determine what sample size we need for an example analysis.

## Using piloted data and an lmer{-}

For this analysis, we load an existing data set resulting from a pilot study that only contains the predictors (not the response variable).

```{r reg01, message=FALSE, warning=FALSE}
# load data
regdat <- base::readRDS(url("https://slcladal.github.io/data/regdat.rda", "rb"))
# inspect
head(regdat, 10); str(regdat)
```

We inspect the data and check how many levels we have for each predictor and if the levels are distributed correctly (so that we do not have incomplete information).

```{r reg02, message=FALSE, warning=FALSE}
table(regdat$Group, regdat$WordOrder)
table(regdat$Group, regdat$SentenceType)
table(regdat$ID)
table(regdat$Sentence)
```

We could also add a response variable (but we will do this later when we deal with post-hoc power analyses).

### Generating the model{-}

We now generate a model that has pre-defined parameters. We begin by specifying these parameters, i.e. by setting the effect sizes of the fixed effects and the intercept, the variability accounted for by the random effects, and the residual variance.

```{r reg03}
# Intercept + slopes for fixed effects
# (Intercept + Group, SentenceType, WordOrder, and an interaction between Group * SentenceType)
fixed <- c(.52, .52, .52, .52, .52)
# Random intercepts for Sentence and ID
rand <- list(0.5, 0.1)
# res. variance
res <- 2
```

We now generate the model and fit it to our data.

```{r reg04}
m1 <- makeLmer(y ~ (1|Sentence) + (1|ID) + Group * SentenceType + WordOrder,
               fixef=fixed, VarCorr=rand, sigma=res, data=regdat)
# inspect
m1
```

Let us inspect the summary table.

```{r reg05}
sjPlot::tab_model(m1)
```

**The summary table shows that the effects are correct but none of them are reported as being significant!**

### Power analysis{-}

Let us now check if the data is sufficient to detect the main effect of `WordOrder`. In the `test` argument we use `fcompare`, which allows us to compare a model with that effect (our *m1* model) to a model without that effect. Fortunately, we only need to specify the fixed-effects structure.

```{r reg06, message=F, warning=F}
sim_wo <- simr::powerSim(m1, nsim=20, test = fcompare(y ~ Group * SentenceType))
# inspect results
sim_wo
```

*The data is not sufficient: it would detect a weak effect of `WordOrder` in only `r sim_wo$x` out of 20 simulations!*

Before we continue to see how to test the power of data to find interactions, let us briefly think about why I chose 0.52 when we specified the effect sizes.

### Why did I set the estimates to 0.52?{-}

Let us now inspect the effect sizes and see why I used 0.52 as an effect size in the model. We need to determine the odds ratios of the fixed effects and then convert them into Cohen's *d* values, for which we have associations between the traditional denominations (small, medium, and large) and effect size values.
```{r reg08, message=FALSE, warning=FALSE}
# extract fixed effect estimates
estimatesfixedeffects <- fixef(m1)
# convert estimates into odds ratios
exp(estimatesfixedeffects)
```

These odds ratios can be interpreted according to @chen2010big [see also @cohen1988statistical; @perugini2018practical, 2]:

* small effect (Cohen's d = 0.2, OR = 1.68)
* medium effect (Cohen's d = 0.5, OR = 3.47)
* strong effect (Cohen's d = 0.8, OR = 6.71)

### Power analysis for interactions{-}

Let us now check if the data set has enough power to detect a weak effect for the interaction `Group:SentenceType`.

```{r reg10, message=F, warning=F}
sim_gst <- simr::powerSim(m1, nsim=20,
                          # compare model with interaction to model without interaction
                          test = fcompare(y ~ WordOrder + Group + SentenceType))
# inspect
sim_gst
```

**The data is not sufficient: it would detect a weak effect of `Group:SentenceType` in only `r sim_gst$x` out of 20 simulations!**

***

NOTE
Oh no... What can we do?

***

## Extending Data{-}

We will now extend the data to see what sample size is needed to reach the 80 percent accuracy threshold. We begin by increasing the number of sentences to see if this would lead to a sufficient sample size. After increasing the number of sentences, we will extend the data to see how many participants we would need.

### Adding sentences{-}

We extend the data along `Sentence` so that it contains 120 sentences.

```{r ext01, message=F, warning=F}
m1_as <- simr::extend(m1, along="Sentence", n=120)
# inspect model
m1_as
```

Let us now check the power when using 120 sentences.

```{r ext03}
sim_m1_as_gst <- powerSim(m1_as, nsim=20, test = fcompare(y ~ WordOrder + Group + SentenceType))
# inspect
sim_m1_as_gst
```

**The data with 120 sentences is sufficient: it would detect a weak effect of `Group:SentenceType` in `r sim_m1_as_gst$x` out of 20 simulations!**

Let us now plot a power curve to see where we cross the 80 percent threshold.

```{r ext04, results='hide', message = FALSE, warning=F}
pcurve_m1_as_gst <- simr::powerCurve(m1_as,
                                     test=fcompare(y ~ WordOrder + Group + SentenceType),
                                     along="Sentence",
                                     nsim = 20,
                                     breaks=seq(20, 120, 20))
# show plot
plot(pcurve_m1_as_gst)
```

Using more items (or, in this case, sentences) is rather easy, but it can make experiments longer, which may lead to participants becoming tired or annoyed. An alternative is to use more participants. Let us thus check how we can determine how many subjects we would need to reach a power of at least 80 percent.

### Checking participants{-}

What about increasing the number of participants? We increase the number of participants to 120.

```{r ext13, message=F, warning=F}
m1_ap <- simr::extend(m1, along="ID", n=120)
# inspect model
m1_ap
```

Let us check the power when using 120 participants.

```{r ext15}
sim_m1_ap_gst <- powerSim(m1_ap, nsim=20, test = fcompare(y ~ WordOrder + Group + SentenceType))
# inspect
sim_m1_ap_gst
```

**The data with 120 participants is sufficient: it would detect a weak effect of `Group:SentenceType` in `r sim_m1_ap_gst$x` out of 20 simulations!**

```{r ext17, results='hide'}
pcurve_m1_ap_gst <- simr::powerCurve(m1_ap,
                                     test=fcompare(y ~ Group + SentenceType + WordOrder),
                                     along = "ID",
                                     nsim=20)
# show plot
plot(pcurve_m1_ap_gst)
```

## Generating data and a glmer{-}

In order to perform a power analysis, we start by loading the tidyverse package to process the data and by generating a data set that we will use to determine the power of a regression model. This simulated data set has

* 200 data points
* 2 Conditions (Control, Test)
* 10 Subjects
* 10 Items

```{r glmer01, message=F, warning=F}
# generate data
simdat <- data.frame(
  sub <- rep(paste0("Sub", 1:10), each = 20),
  cond <- rep(c(rep("Control", 10), rep("Test", 10)), 10),
  itm <- as.character(rep(1:10, 20))
) %>%
  dplyr::rename(Subject = 1,
                Condition = 2,
                Item = 3) %>%
  dplyr::mutate_if(is.character, factor)
# inspect
head(simdat, 15)
```
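As an aside, the same fully crossed design (every subject responds to every item in both conditions) can also be generated more compactly with `expand.grid`. This is merely an alternative sketch: the row order differs from `simdat` above, but the design is identical.

```{r glmer01b, message=F, warning=F}
# alternative way of generating the fully crossed design:
# all combinations of Subject, Item, and Condition (10 x 10 x 2 = 200 rows)
simdat_alt <- expand.grid(
  Subject   = factor(paste0("Sub", 1:10)),
  Item      = factor(1:10),
  Condition = factor(c("Control", "Test"))
)
# inspect
head(simdat_alt, 15); nrow(simdat_alt)
```

***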

EXERCISE TIME!

` 1. Can you create a data set with 5 Subjects, 5 Items (each) for 4 Conditions?
Answer

```{r datex2, class.source = NULL, eval = T}
# generate data
Items <- rep(rep(paste0("Item", 1:5), 4), 5)
Condition <- rep(rep(paste0("Condition", 1:4), each = 5), 5)
Subjects <- rep(paste0("Subject", 1:5), each = 20)
# combine into data frame
exdat <- data.frame(Subjects, Items, Condition)
# inspect
head(exdat, 20)
```
`

***

### Generating the model{-}

As before with the lmer, we specify the model parameters - but when generating glmers, we only need to specify the estimates of the intercept and the fixed effects and to define the variability of the random effects (we do not need to specify the residual variance).

```{r glmer02, message=F, warning=F}
# Intercept + slope for the fixed effect (Intercept + Condition)
fixed <- c(.52, .52)
# Random intercepts for Subject and Item
rand <- list(0.5, 0.1)
```

We now generate the model and fit it to the data.

```{r glmer03, message=F, warning=F}
m2 <- simr::makeGlmer(y ~ (1|Subject) + (1|Item) + Condition, family = "binomial",
                      fixef=fixed, VarCorr=rand, data=simdat)
# inspect
sjPlot::tab_model(m2)
```

Next, we estimate the power. In this case, we use `fixed` in the `test` argument, which allows us to test a specific predictor.

```{r glmer04, message=F, warning=F}
# set seed for replicability
set.seed(12345)
# perform power analysis for present model
rsim_m2_c <- powerSim(m2, fixed("ConditionTest", "z"), nsim=20)
# inspect
rsim_m2_c
```

**The data is not sufficient: it would detect a weak effect of `ConditionTest` in only `r rsim_m2_c$x` out of 20 simulations!**

## Extending the data{-}

Like before, we can now extend the data to see how many participants or items we would need to reach the 80 percent power threshold.

### Adding Items{-}

We start off by increasing the number of items from 10 to 40. This means that our *new* data/model has the following characteristics.

* 2 Conditions
* 10 Subjects
* 40 Items (from 10)

Keep in mind, though, that when extending the data/model in this way, each combination occurs only once!

```{r glmer06, message=F, warning=F}
m2_ai <- simr::extend(m2, along="Item", n=40)
```

Next, we plot the power curve.

```{r glmer08}
pcurve_m2_ai_c <- powerCurve(m2_ai, fixed("ConditionTest", "z"), along = "Item", nsim = 20, breaks=seq(10, 40, 5))
# show plot
plot(pcurve_m2_ai_c)
```

The power curve shows that we cross the 80 percent threshold at about 35 items.

### Adding subjects{-}

An alternative to adding items is, of course, to use more subjects or participants. We thus continue by increasing the number of participants from 10 to 40. This means that our *new* data/model has the following characteristics.

* 2 Conditions
* 10 Items
* 40 Subjects (from 10)

Again, keep in mind that when extending the data/model in this way, each combination occurs only once!

```{r glmer10, message=F, warning=F}
m2_as <- simr::extend(m2, along="Subject", n=40)
# inspect model
sjPlot::tab_model(m2_as)
```

As before, we plot the power curve.

```{r glmer12}
pcurve_m2_as_c <- powerCurve(m2_as, fixed("ConditionTest", "z"), along = "Subject", nsim = 20, breaks=seq(10, 40, 5))
# show plot
plot(pcurve_m2_as_c)
```

How often does each *combination* of subject and item occur? **Only once!** So, what if we increase the number of combinations (this is particularly important when using a *repeated measures* design)? Below, we increase the number of observations per combination from 1 to 10 so that each item is shown 10 times to the same participant. This means that our *new* data/model has the following characteristics.

* 2 Conditions
* 10 Items
* 10 Subjects

Now each combination of item and subject occurs 10 times!
```{r glmer14}
m2_asi_c <- extend(m2, within="Subject+Item", n=10)
# perform power calculation
pcurve_m2_asi_c <- powerCurve(m2_asi_c, fixed("ConditionTest", "z"), within="Subject+Item", nsim = 20, breaks=seq(2, 10, 2))
# show plot
plot(pcurve_m2_asi_c)
```

If we did this, then even five or so observations per subject-item combination may be enough to reach the 80 percent threshold.

### Adding subjects and items{-}

We can also add subjects and items *simultaneously* to address questions like *How many subjects would I need if I had 30 items?*. Hence, we increase both subjects and items from 10 to 30. This means that our *new* data/model has the following characteristics.

* 2 Conditions
* 30 Items (from 10)
* 30 Subjects (from 10)

In this case, we return to each combination only occurring once.

```{r glmer16, message=F, warning=F}
m2_as <- simr::extend(m2, along="Subject", n=30)
m2_asi <- simr::extend(m2_as, along="Item", n=30)
# inspect model
sjPlot::tab_model(m2_asi)
```

We can now plot a power curve to answer the question *How many subjects do you need if you have 30 items?*.

```{r glmer18}
pcurve_m2_asi_c <- powerCurve(m2_asi, fixed("ConditionTest", "z"), along = "Subject", breaks = seq(5, 30, 5), nsim = 20)
# show plot
plot(pcurve_m2_asi_c)
```

We can see that with 30 items, we would need only about 15 subjects to reach the 80 percent threshold. We can also check the results in tabular form as shown below.

```{r glmer19, message = FALSE, warning=F}
# print results
print(pcurve_m2_asi_c)
```

## Post-Hoc Analyses{-}

***

NOTE
Power analyses have also been used post hoc to test if the sample size of studies was sufficient to detect meaningful effects. However, such post-hoc power calculations, where the target effect size comes from the data, give misleading results [@hoenig2001abuse; @perugini2018practical] and should thus be treated with extreme care!


***

We begin by adding a response variable to our data. In this case, we generate the response variable so that there is a higher likelihood of obtaining gazes into the *area of interest* (AOI) in the test condition.

```{r glmer20}
simdat2 <- simdat %>%
  dplyr::arrange(Condition) %>%
  dplyr::mutate(dep = c(sample(c("yes", "no"), 100, replace = T, prob = c(.5, .5)),
                        sample(c("yes", "no"), 100, replace = T, prob = c(.7, .3)))) %>%
  dplyr::mutate_if(is.character, factor) %>%
  dplyr::rename(AOI = 4)
# inspect
head(simdat2, 20)
```

Now that we have generated some data, we can fit a model to it and perform a power analysis on the observed effects. In a first step, we use the `lme4` package, set a seed (so that the results can be replicated), and then fit an initial mixed-effects model.

```{r pwr77, message=F, warning=F}
# set seed for replicability
set.seed(12345)
# fit model
m3 <- glmer(AOI ~ (1|Subject) + (1|Item) + Condition, family="binomial", data=simdat2)
# inspect results
summary(m3)
```

We now check the effect sizes of the predictors in the model. We can do this by displaying the results of the model using the `tab_model` function from the `sjPlot` package.

```{r pwr78, message=F, warning=F}
# tabulate results
sjPlot::tab_model(m3)
```

Now, we perform a power analysis on an observed effect. This analysis tells us how likely the model is to find the observed effect given the data.

***

NOTE
We use a very low number of simulations (20) and we use the default z-test, which is suboptimal for small samples [@bolker2009generalized]. In a proper study, you should increase the number of simulations (to at least 1000) and you should use bootstrapping rather than a z-test [cf. @halekoh2014kenward].

***

*What is the probability of finding the observed effect given the data?*

```{r pwr79, results='hide', message = FALSE, warning=F}
# set seed for replicability
set.seed(12345)
# perform power analysis for present model
m3_pwr <- powerSim(m3, fixed("ConditionTest", "z"), nsim=20)
# inspect results
m3_pwr
```

The results of the power analysis show that, given the data at hand, the model would have detected the effect of `ConditionTest` in `r m3_pwr$x` out of 20 simulations. However, and as stated above, the results of such post-hoc power calculations (where the target effect size comes from the data) give misleading results [@hoenig2001abuse] and should thus be treated with extreme care!

### Fixing effects{-}

We will now set the effects that we obtained based on our "observed" data to check if, given the size of the data, we have enough power to detect a small effect of *Condition*. In a first step, we check the observed effects.

```{r pwr80, message=F, warning=F}
estimatesfixedeffects <- fixef(m3)
exp(estimatesfixedeffects)
```

We can see that the effect of *Condition* is rather small, which makes it very hard to detect. We will now change the size of the effect of `ConditionTest` so that it represents a truly *small* effect, i.e. an effect on the brink of being noise but just strong enough to be considered small. In other words, we set the effect so that its odds ratio is exactly 1.68.

```{r pwr81}
# set seed for replicability
set.seed(12345)
# set the effect of ConditionTest to a small effect (odds ratio of 1.68)
fixef(m3)["ConditionTest"] <- 0.519
estimatesfixedeffects <- fixef(m3)
exp(estimatesfixedeffects)
```

*What is the probability of finding a weak effect given the data?* We now perform the power analysis.

```{r pwr82, results='hide', message = FALSE, warning=F}
# set seed for replicability
set.seed(12345)
# perform power analysis for present model
m3_pwr_se <- powerSim(m3, fixed("ConditionTest", "z"), nsim=20)
# inspect results
m3_pwr_se
```

**The data is not sufficient: it would detect a weak effect of `Condition` in only `r m3_pwr_se$x` out of 20 simulations!**

### Power Analysis of Extended Data{-}

We will now extend the data to see what sample size is needed to reach the 80 percent accuracy threshold. We begin by increasing the number of items from 10 to 30 to see if this would lead to a sufficient sample size.

```{r pwr83, results='hide', message = FALSE, warning=F}
# increase sample size
m3_ai <- extend(m3, along="Item", n=30)
# perform power simulation
m3_ai_pwr <- powerSim(m3_ai, fixed("ConditionTest", "z"), nsim=20)
# show results
m3_ai_pwr
```

**The data with 30 items is sufficient: it would detect a weak effect of `Condition` in `r m3_ai_pwr$x` out of 20 simulations!**

*At what number of items are the data sufficient to detect an effect?*

```{r glmer36, message=F, warning=F}
pcurve_m3_asi_c <- powerCurve(m3_ai, fixed("ConditionTest", "z"), along = "Item", breaks = seq(5, 30, 5), nsim = 20)
# show plot
plot(pcurve_m3_asi_c)
```

We reach the 80 percent threshold with about 25 items.
# Resources and Links{-}

If you want to know more, please have a look at the following resources:

* [SIMR: an R package for power analysis of generalized linear mixed models by simulation](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.12504)
* [Power analysis in Statistics with R](https://www.r-bloggers.com/2021/05/power-analysis-in-statistics-with-r/)
* [Power Analysis with simr](https://humburg.github.io/Power-Analysis/simr_power_analysis.html)
* [Parallelizing simr::powercurve() in R](https://www.r-bloggers.com/2021/07/parallelizing-simrpowercurve-in-r/)
* [Power Analysis in R](https://slcladal.github.io/pwr.html)

Also check out Jacob Westfall's website for [Power Analysis with Crossed Random Effects](https://jakewestfall.shinyapps.io/crossedpower/) based on @westfall2014power. See @baayen2017shadows for power in generalized additive mixed models and why maximal models can be ill-advised. Also, see @matuscheka2017balancing for a simulation study on the detrimental effects of complex random-effect structures on power.

# Citation & Session Info {-}

Schweinberger, Martin. `r format(Sys.time(), '%Y')`. *Power Analysis in R*. Brisbane: The University of Queensland. url: https://slcladal.github.io/pwr.html (Version `r format(Sys.time(), '%Y.%m.%d')`).

```
@manual{schweinberger`r format(Sys.time(), '%Y')`pwr,
  author = {Schweinberger, Martin},
  title = {Power Analysis in R},
  note = {https://slcladal.github.io/pwr.html},
  year = {`r format(Sys.time(), '%Y')`},
  organization = {The University of Queensland, Australia. School of Languages and Cultures},
  address = {Brisbane},
  edition = {`r format(Sys.time(), '%Y.%m.%d')`}
}
```

```{r fin}
sessionInfo()
```

***

[Back to top](#introduction)

[Back to HOME](https://slcladal.github.io/index.html)

***

# References {-}