Introduction

This tutorial introduces regression modeling using R. The R-markdown document for the tutorial can be downloaded here.

Regression models are among the most widely used methods in data analysis because they

  • can incorporate many predictors in a single model (multivariate: this allows us to test the impact of one predictor while the impact of all other predictors is controlled for)
  • are extremely flexible and can be fitted to different types of predictors and dependent variables
  • provide output that can be easily interpreted
  • are conceptually relatively simple and not overly complex from a mathematical perspective

There are two basic types of regression models: fixed-effects regression models and mixed-effects regression models. The first part of this tutorial focuses on fixed-effects regression models while the second part focuses on mixed-effects regression models (the difference lies in the fact that mixed-effects regression models allow us to model hierarchical or nested data - more on that in the second part of this tutorial).

Fixed-effects regression models are simple additive models, which means that the predicted values represent the intercept value plus the effects of the individual predictors. Mixed-effects models, in contrast, are based on more complex matrix multiplications in which the predicted values represent the random effects multiplied by the intercept values plus the effects of the fixed effects. R offers various ready-made functions with which implementing different types of regression models is very easy.

In the following, we will go over the most relevant and frequently used types of regression models:

  • multiple linear regression

  • multiple binomial logistic regression

  • ordinal regression

  • Poisson regression

  • robust regression

The major difference between these types of models is that they take different types of dependent variables: linear regressions take numeric variables, logistic regressions take nominal variables, ordinal regressions take ordinal variables, and Poisson regressions take dependent variables that reflect counts of (rare) events. Robust regression, in contrast, is a simple multiple linear regression that is able to handle outliers due to a weighting procedure.

Preparation and session set up

This tutorial is based on R. If you have not installed R or are new to it, you will find an introduction to R and more information on how to use it here. For this tutorial, we need to install certain packages from an R library so that the scripts shown below are executed without errors. Before turning to the code below, please install the packages by running the code below this paragraph. If you have already installed the packages mentioned below, then you can skip this section. To install the necessary packages, simply run the following code - it may take some time (between 1 and 5 minutes), so you do not need to worry if it takes a while.

# clean current workspace
rm(list=ls(all=T))
# set options
options(stringsAsFactors = F)         # no automatic data transformation
options("scipen" = 100, "digits" = 4) # suppress math annotation
# install packages
install.packages(c("boot", "car", "caret", "dplyr",  "effects", "foreign", "ggplot2", 
                   "Hmisc", "DT", "knitr", "lme4", "MASS", "mlogit", "msm", 
                   "QuantPsyc", "reshape2", "rms", "sandwich", "sfsmisc", "sjPlot", 
                   "stringr", "vcd", "visreg", "MuMIn"))

Once you have installed R and RStudio and initiated the session by executing the code shown above, you are good to go.

1 Fixed Effects Regression

Before turning to mixed-effects models which are able to represent hierarchical data structures, we will focus on traditional fixed effects regression models and begin with multiple linear regression.

1.1 Simple Linear Regression

This section focuses on a very widely used statistical method called regression. Regressions are used when we try to understand how independent variables correlate with a dependent or outcome variable. So, if you want to investigate how a certain factor affects an outcome, then a regression is the way to go. We will have a look at two simple examples to understand what the concepts underlying a regression mean and how a regression works. The R code that we will use is adapted from Field, Miles, and Field (2012) - which is highly recommended for understanding regression analyses! In addition to Field, Miles, and Field (2012), there are various introductions which also focus on regression (among other types of analyses), for example, Gries (2009), Levshina (2015), and Wilcox (2009) - Baayen (2008) is also very good but probably not the first book one should read about statistics.

Although the basic logic underlying regressions is identical to the conceptual underpinnings of analysis of variance (ANOVA), a related method, sociolinguists have traditionally favoured regression analysis in their studies while ANOVAs have been the method of choice in psycholinguistics. The preference for either method is grounded in historical happenstances and the culture of these subdisciplines rather than in methodological reasoning.

A minor difference between regressions and ANOVA lies in the fact that regressions are based on the \(t\)-distribution while ANOVAs use the \(F\)-distribution (however, the \(F\)-value is simply the \(t\)-value squared: \(F = t^2\)). Both \(t\)- and \(F\)-values report on the ratio between explained and unexplained variance.
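
We can illustrate this relationship with a quick check in R (a minimal sketch using the built-in cars data set, which is not part of this tutorial's data):

# fit a simple linear model on the built-in cars data
m <- lm(dist ~ speed, data = cars)
# t-value of the predictor
tval <- summary(m)$coefficients["speed", "t value"]
# F-value of the model
fval <- summary(m)$fstatistic["value"]
# squaring the t-value yields the F-value (up to rounding)
c(t_squared = tval^2, F = as.numeric(fval))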

The idea behind regression analysis is expressed formally in the equation below where \(f_{(x)}\) is the \(y\)-value we want to predict, \(\alpha\) is the intercept (the point where the regression line crosses the \(y\)-axis), \(\beta\) is the coefficient (the slope of the regression line), and \(\epsilon\) is the error term (the residual variation that the model does not capture).

\(f_{(x)} = \alpha + \beta_{1}x_{i} + \epsilon\)

In other words, to estimate how much someone who is 180cm tall weighs, we would multiply the coefficient (the slope of the line) by 180 (\(x\)) and add the value of the intercept (the point where the line crosses the \(y\)-axis).
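
To make this concrete, here is a tiny worked example with purely hypothetical intercept and slope values (the numbers below are made up for illustration and are not estimated from any data):

# hypothetical coefficients for predicting weight (kg) from height (cm)
intercept <- -90   # alpha: value where the line crosses the y-axis
slope     <- 0.9   # beta: increase in weight per cm of height
height    <- 180
# predicted weight = intercept + slope * height
intercept + slope * height
## [1] 72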

However, the idea behind regressions can best be described graphically: imagine a cloud of points (like the points in the scatterplot below). Regressions aim to find the line which has the minimal summed distance between the points and the line (like the line in the right panel). Technically speaking, the aim of a regression is to find the line with the minimal deviance (or the line with the minimal sum of residuals). Residuals are the distances between the line and the points (the red lines), and the variation they represent is also referred to as unexplained variance.

Thus, regression lines are those lines where the sum of the red lines should be minimal. The slope of the regression line is called coefficient and the point where the regression line crosses the y-axis is called the intercept.

A word about standard errors (SE) is in order here because most commonly used statistics programs provide SE values when reporting regression models. The SE is a measure that tells us how much the coefficients would vary if the same regression were applied to many samples from the same population. A relatively small SE value therefore indicates that the coefficients will remain very stable if the same regression model is fitted to many different samples with identical parameters. In contrast, a large SE tells you that the model is volatile and not very stable or reliable, as the coefficients vary substantially if the model is applied to many samples.
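
What the SE means can be illustrated by resampling: if we repeatedly draw bootstrap samples and refit the model, the standard deviation of the resulting coefficients approximates the reported SE. The sketch below again uses the built-in cars data set (not the tutorial data):

# refit the model on 1,000 bootstrap samples of the cars data
set.seed(42)
slopes <- replicate(1000, {
  smp <- cars[sample(nrow(cars), replace = TRUE), ]
  coef(lm(dist ~ speed, data = smp))["speed"]
})
# the sd of the resampled slopes approximates the SE of the slope
sd(slopes)
summary(lm(dist ~ speed, data = cars))$coefficients["speed", "Std. Error"]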

Example 1: Preposition Use across Real Time

We will now turn to our first example. In this example, we will investigate whether the frequency of prepositions has changed from Middle English to Late Modern English. The reasoning behind this example is that Old English was highly synthetic compared with Present-Day English, which is comparatively analytic. In other words, while Old English speakers used case to indicate syntactic relations, speakers of Present-Day English use word order and prepositions to indicate syntactic relationships. This means that the loss of case had to be compensated by different strategies, and maybe these strategies continued to develop and increase in frequency even after the change from synthetic to analytic had been mostly accomplished. And this prolonged change in compensatory strategies is what this example will focus on.

The analysis is based on data extracted from the Penn Corpora of Historical English (see http://www.ling.upenn.edu/hist-corpora/), which consist of 603 texts written between 1125 and 1900. In preparation of this example, all elements that were part-of-speech tagged as prepositions were extracted from the Penn Corpora.

Then, the relative frequencies (per 1,000 words) of prepositions per text were calculated. This frequency of prepositions per 1,000 words represents our dependent variable. In a next step, the date when each text had been written was extracted. The resulting two vectors were combined into a table which thus contained, for each text, the year in which it was written (independent variable) and its relative frequency of prepositions (dependent or outcome variable).

A regression analysis will follow the steps described below:

  1. Extraction and processing of the data
  2. Data visualization
  3. Applying the regression analysis to the data
  4. Diagnosing the regression model and checking whether or not basic model assumptions have been violated

In a first step, we load the libraries and functions.

# load libraries
library(car)
library(dplyr)
library(ggplot2)
library(knitr)         
library(QuantPsyc)  
library(DT)
# load functions
source("https://slcladal.github.io/rscripts/multiplot.r")
source("https://slcladal.github.io/rscripts/slrsummary.r")

After preparing our session, we can now load and inspect the data to get a first impression of its properties.

# load data
slrdata <- read.delim("https://slcladal.github.io/data/lmmdata.txt", header = TRUE)
# inspect data
head(slrdata)                                   
##   Date         Genre    Text Prepositions Region
## 1 1736       Science   albin       166.01  North
## 2 1711     Education    anon       139.86  North
## 3 1808 PrivateLetter  austen       130.78  North
## 4 1878     Education    bain       151.29  North
## 5 1743     Education barclay       145.72  North
## 6 1908     Education  benson       120.77  North

Inspecting the data is very important because it can happen that a data set does not load completely or that variables which should be numeric have been converted to character variables. If unchecked, such issues could go unnoticed and cause trouble.
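
One quick way to catch such issues is to check the structure of the data and confirm that each column has the expected class (a small check using base R):

# check number of observations, variables, and column classes
str(slrdata)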

We will now plot the data to get a better understanding of what the data looks like.

ggplot(slrdata, aes(Date, Prepositions)) +
  geom_point() +
  theme_bw() +
  labs(x = "Year") +
  labs(y = "Prepositions per 1,000 words") +
  geom_smooth()

ggplot(slrdata, aes(Date, Prepositions)) +
  geom_point() +
  theme_bw() +
  labs(x = "Year") +
  labs(y = "Prepositions per 1,000 words") +
  geom_smooth(method = "lm") # with linear model smoothing!

Before beginning with the regression analysis, we will scale the year. We scale by subtracting the mean of year from each value. This can be useful when dealing with numeric variables because, if we did not scale year, we would get estimates for year 0 (a year when English did not even exist yet). If a variable is scaled, the estimates of the regression model refer to the mean of that numeric variable. In other words, scaling can be very helpful, especially with respect to the interpretation of the results that regression models report.

# scale date
slrdata$Date <- slrdata$Date - mean(slrdata$Date) 

We will now begin the regression analysis by generating a first regression model and inspecting its results.

# create initial model
m1.lm <- lm(Prepositions ~ Date, data = slrdata)
# inspect results
summary(m1.lm)
## 
## Call:
## lm(formula = Prepositions ~ Date, data = slrdata)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -69.101 -13.855   0.578  13.321  62.858 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 1.322e+02  8.386e-01 157.625   <2e-16 ***
## Date        1.732e-02  7.267e-03   2.383   0.0175 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 19.43 on 535 degrees of freedom
## Multiple R-squared:  0.01051,    Adjusted R-squared:  0.008657 
## F-statistic: 5.681 on 1 and 535 DF,  p-value: 0.0175

The summary output starts by repeating the regression equation. Then, the model provides the distribution of the residuals. The residuals should be distributed normally, with the Min and Max as well as the 1Q (first quartile) and 3Q (third quartile) being similar or ideally identical. In our case, the values are very similar, which suggests that the residuals are distributed evenly and follow a normal distribution. The next part of the report is the coefficients table. The Estimate for the intercept is the value where the regression line crosses the y-axis. The estimate for Date represents the slope of the regression line and tells us that with each year, the predicted frequency of prepositions increases by .01732 prepositions (per 1,000 words). The t-value is the Estimate divided by the standard error (Std. Error). Based on the t-value, the p-value can be calculated manually as shown below.

# use pt function (which uses t-values and the degrees of freedom)
2*pt(-2.383, nrow(slrdata)-1)
## [1] 0.01751964
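
In the same vein, we can recover the t-value reported in the summary by dividing the estimate for Date by its standard error (a quick check based on the coefficient table above):

# t-value = Estimate / Std. Error
coefs <- summary(m1.lm)$coefficients
coefs["Date", "Estimate"] / coefs["Date", "Std. Error"]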

The R2-values tell us how much variance is explained by our model compared to the overall variance (0.0105 means that our model explains only 1.05 percent of the variance - which is a tiny amount). The adjusted R2-value tells us how much variance the model would explain if we applied it to new data of the same nature (i.e., data points taken from the same population). Or, to be more precise, the adjusted R2 takes the number of predictors into account: if a model contains predictors that do not explain much variance, then the adjusted R2 will be lower than the multiple R2, as the former penalizes models for having superfluous predictors. If there is a big difference between the two R2-values, then the model is overfitted, which is not good. The F-statistic and the associated p-value tell us that the model, despite explaining almost no variance, is still significantly better than an intercept-only base-line model (or using the overall mean to predict the frequency of prepositions per text).
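
We can also recompute the multiple R2 manually as the proportion of the total variance that the model accounts for (a quick check; RSS = residual sum of squares, TSS = total sum of squares):

# multiple R-squared = 1 - RSS / TSS
1 - sum(resid(m1.lm)^2) /
  sum((slrdata$Prepositions - mean(slrdata$Prepositions))^2)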

We can check this and also see where the F-value comes from by comparing our model to an intercept-only base-line model.

# create intercept-only base-line model
m0.lm <- lm(Prepositions ~ 1, data = slrdata)
# compare the base-line and the more saturated model
anova(m1.lm, m0.lm, test = "F")
## Analysis of Variance Table
## 
## Model 1: Prepositions ~ Date
## Model 2: Prepositions ~ 1
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)  
## 1    535 202058                             
## 2    536 204204 -1   -2145.6 5.6809 0.0175 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The F- and p-values are exactly those reported by the summary, which shows where the F-value comes from and what it means: namely, it denotes the difference between the base-line and the more saturated model.

The degrees of freedom associated with the residual standard error are the number of cases in the model minus the number of predictors (including the intercept). The residual standard error is the square root of the sum of the squared residuals of the model divided by the degrees of freedom. Have a look at the following to clear this up:

# DF = N - number of predictors (including intercept)
DegreesOfFreedom <- nrow(slrdata)-length(coef(m1.lm))
# sum of the squared residuals
SumSquaredResiduals <- sum(resid(m1.lm)^2)
# Residual Standard Error
sqrt(SumSquaredResiduals/DegreesOfFreedom); DegreesOfFreedom
## [1] 19.43396
## [1] 535

We will now check if mathematical assumptions have been violated (homogeneity of variance) or whether the data contains outliers. We check this using diagnostic plots.

# plot model: 3 plots per row in one window
par(mfrow = c(1, 3))
plot(resid(m1.lm))
plot(rstandard(m1.lm))
plot(rstudent(m1.lm)); par(mfrow = c(1, 1)) # restore default parameters

The left graph shows the residuals of the model (i.e., the differences between the observed values and the values predicted by the regression model). The problem with this plot is that the residuals are not standardized and therefore cannot be compared to the residuals of other models. To remedy this deficiency, the residuals are normalized by dividing them by their standard deviation. Then, the normalized residuals can be plotted against the observed values (centre panel). In this way, not only are standardized residuals obtained, but the values of the residuals are also transformed into z-values, and one can use the z-distribution to find problematic data points. There are three rules of thumb regarding finding problematic data points through standardized residuals (Field, Miles, and Field 2012, 268–69) - we will check them for our model in the code right after the list:

  • Points with values higher than 3.29 should be removed from the data.

  • If more than 1% of the data points have values higher than 2.58, then the error rate of our model is too high.

  • If more than 5% of the data points have values greater than 1.96, then the error rate of our model is too high.
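
These rules of thumb can be checked directly for our model (a minimal sketch using the standardized residuals of m1.lm):

# check the three rules of thumb for standardized residuals
zres <- rstandard(m1.lm)
sum(abs(zres) > 3.29)        # points that should be removed (optimally 0)
mean(abs(zres) > 2.58) * 100 # percentage; should be below 1
mean(abs(zres) > 1.96) * 100 # percentage; should be below 5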

The right panel shows the studentized residuals (adjusted predicted values: each data point is divided by the standard error of the residuals). In this way, it is possible to use Student’s t-distribution to diagnose our model.

Adjusted predicted values are predictions of a special kind: the model is calculated without a given data point and is then used to predict this data point. The difference between the observed data point and the value predicted in this way is called the deleted residual, and dividing it by its standard error yields the studentized residual. In summary, studentized residuals are very useful because they allow us to identify influential data points.

The plots show that there are two potentially problematic data points (the top-most and bottom-most point). These two points are clearly different from the other data points and may therefore be outliers. We will test later if these points need to be removed.

We will now generate more diagnostic plots.

par(mfrow = c(2, 2)) # plot window: 2 plots/row, 2 plots/column
plot(m1.lm); par(mfrow = c(1, 1)) # restore normal plot window

The diagnostic plots are very positive, and we will go through why this is so for each panel. The graph in the upper left panel is useful for finding outliers or for determining the correlation between residuals and predicted values: when a trend becomes visible in the line or points (e.g., a rising trend or a zigzag line), this would indicate that the model is problematic (in such cases, it can help to remove data points that are too influential (outliers)).

The graphic in the upper right panel indicates whether the residuals are normally distributed (which is desirable) or whether the residuals do not follow a normal distribution. If the points lie on the line, the residuals follow a normal distribution. For example, if the points are not on the line at the top and bottom, it shows that the model does not predict small and large values well and that it therefore does not have a good fit.

The graphic in the lower left panel provides information about homoscedasticity. Homoscedasticity means that the variance of the residuals remains constant and does not correlate with any independent variable. In unproblematic cases, the graphic shows a flat line. If there is a trend in the line, we are dealing with heteroscedasticity, that is, a correlation between independent variables and the residuals, which is very problematic for regressions.
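
In addition to this visual check, the car package (loaded above) offers a formal test of non-constant error variance; this is a complementary check and not part of the original workflow:

# score test of the hypothesis of constant error variance
ncvTest(m1.lm)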

The graph in the lower right panel shows influential data points that disproportionately affect the regression (which would be problematic). If such influential data points are present, they should either be weighted (one could generate a robust rather than a simple linear regression) or they must be removed. The graph displays Cook’s distance, which shows how the regression changes when a model without a given data point is calculated. Cook’s distance thus shows the influence a data point has on the regression as a whole. Data points that have a Cook’s distance value greater than 1 are problematic (Field, Miles, and Field 2012, 269).
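
We can list the data points that exceed this threshold directly (a quick check; ideally, nothing is returned):

# data points with a Cook's distance greater than 1
which(cooks.distance(m1.lm) > 1)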

The so-called leverage is also a measure that indicates how strongly a data point affects the accuracy of the regression. Leverage values range between 0 (no influence) and 1 (strong influence: suboptimal!). To test whether a specific data point has a high leverage value, we calculate a cut-off point that indicates whether the leverage is too strong or still acceptable. The following two formulas are used for this (\(k\) = number of predictors, \(n\) = number of cases in the model):

\[\begin{equation} \frac{3(k + 1)}{n} \end{equation}\]

or

\[\begin{equation} \frac{2(k + 1)}{n} \end{equation}\]
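
For our simple linear regression, these cut-offs can be computed directly and compared against the hat values (leverages) of the model (a small sketch):

# leverage cut-offs (k = number of predictors, n = number of cases)
k <- length(coef(m1.lm)) - 1  # predictors excluding the intercept
n <- nrow(slrdata)            # cases in the model
3 * (k + 1) / n               # stricter cut-off
2 * (k + 1) / n               # more liberal cut-off
# data points whose leverage exceeds the stricter cut-off
which(hatvalues(m1.lm) > 3 * (k + 1) / n)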

We will look more closely at leverage in the context of multiple linear regression and will therefore end the current analysis by summarizing the results of the regression analysis in a table.

# create summary table
slrresults <- slrsummary(m1.lm)  
# show summary table
slrresults
Results of a simple linear regression analysis.

             Estimate  Pearson's r  Std. Error  t value  Pr(>|t|)  P-value sig.
(Intercept)  132.19                 0.84        157.62   0         p < .001***
Date         0.02      0.1          0.01        2.38     0.0175    p < .05*

Model statistics                   Value
Number of cases in model           537
Residual standard error on 535 DF  19.43
Multiple R-squared                 0.0105
Adjusted R-squared                 0.0087
F-statistic (1, 535)               5.68
Model p-value                      0.0175

Typically, the results of regression analyses are presented in such tables as they include all important measures of model quality and significance, as well as the magnitude of the effects.

In addition, the results of simple linear regressions should be summarized in writing. An example of how the results of a regression analysis can be written up is provided below.

A simple linear regression has been fitted to the data. A visual assessment of the model diagnostic graphics did not indicate any problematic or disproportionately influential data points (outliers). The model performed significantly better than an intercept-only base-line model but explained only .87 percent of the variance (Adjusted R2: .0087, F-statistic (1, 535): 5.68, p-value: 0.0175*). The final minimal adequate linear regression model is based on 537 data points and confirms a significant and positive correlation between the year in which a text was written and the relative frequency of prepositions (coefficient estimate: .02, SE: 0.01, t-value: 2.38, p-value: .0175*).

Example 2: Teaching Styles

In the previous example, we dealt with two numeric variables, while the following example deals with a categorical independent variable and a numeric dependent variable. The ability of regressions to handle very different types of variables makes them a widely used and robust method of analysis.

In this example, we are dealing with two groups of students that have been randomly assigned to be exposed to different teaching methods. Both groups undergo a language learning test after the lesson with a maximum score of 20 points.

The question that we will try to answer is whether the students in group A have performed significantly better than those in group B which would indicate that the teaching method to which group A was exposed works better than the teaching method to which group B was exposed.

Let’s move on to implementing the regression in R. In a first step, we load the data set and inspect its structure.

# load data
slrdata2 <- read.delim("https://slcladal.github.io/data/slrdata2.txt", sep = "\t", header = T)
# inspect data
head(slrdata2)
##   Group Score
## 1     A    15
## 2     A    12
## 3     A    11
## 4     A    18
## 5     A    15
## 6     A    15

Now, we graphically display the data. In this case, a boxplot represents a good way to visualize the data.

# extract means
means <- slrdata2 %>%
  dplyr::group_by(Group) %>%
  dplyr::summarise(Mean = round(mean(Score), 1), SD = round(sd(Score), 1))
## `summarise()` ungrouping output (override with `.groups` argument)
# start plot
ggplot(slrdata2, aes(Group, Score)) + 
  geom_boxplot(fill=c("orange", "darkgray")) +
  geom_text(data = means, aes(label = paste("M = ", Mean, sep = ""), y = 1)) +
  geom_text(data = means, aes(label = paste("SD = ", SD, sep = ""), y = 0)) +
  theme_bw(base_size = 15) +
  labs(x = "Group") +                      
  labs(y = "Test score (Points)", cex = .75) +   
  coord_cartesian(ylim = c(0, 20)) +  
  guides(fill = FALSE)                
Language test data

The data suggest that group A did better than group B. We will test this impression by generating a regression model and extracting the model summary.

# generate regression model
m2.lm <- lm(Score ~ Group, data = slrdata2) 
# inspect results
summary(m2.lm)                             
## 
## Call:
## lm(formula = Score ~ Group, data = slrdata2)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -6.767 -1.933  0.150  2.067  6.233 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  14.9333     0.5346  27.935  < 2e-16 ***
## GroupB       -3.1667     0.7560  -4.189 9.67e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.928 on 58 degrees of freedom
## Multiple R-squared:  0.2322, Adjusted R-squared:  0.219 
## F-statistic: 17.55 on 1 and 58 DF,  p-value: 9.669e-05

The model summary reports that group B performed significantly worse than group A (the estimate is negative, and Pr(>|t|) is smaller than .001, as indicated by the three * after the p-value). We will now create the diagnostic graphics.

par(mfrow = c(1, 3))        # plot window: 1 row, 3 columns
plot(resid(m2.lm))     # generate diagnostic plot
plot(rstandard(m2.lm)) # generate diagnostic plot
plot(rstudent(m2.lm)); par(mfrow = c(1, 1))  # restore normal plot window

The graphics do not indicate outliers or other issues, so we can continue with more diagnostic graphics.

par(mfrow = c(2, 2)) # generate a plot window with 2x2 panels
plot(m2.lm); par(mfrow = c(1, 1)) # restore normal plot window

These graphics also show no problems. In this case, the data can be summarized in the next step.

# tabulate results
slrresults2 <- slrsummary(m2.lm)
slrresults2
Results of the regression model.

             Estimate  Pearson's r  Std. Error  t value  Pr(>|t|)  P-value sig.
(Intercept)  14.93                  0.53        27.94    0         p < .001***
GroupB       -3.17     0.48         0.76        -4.19    1e-04     p < .001***

Model statistics                  Value
Number of cases in model          60
Residual standard error on 58 DF  2.93
Multiple R-squared                0.2322
Adjusted R-squared                0.219
F-statistic (1, 58)               17.55
Model p-value                     1e-04

The results of this second simple linear regression can be summarized as follows:

A simple linear regression was fitted to the data. A visual assessment of the model diagnostics did not indicate any problematic or disproportionately influential data points (outliers). The final linear regression model is based on 60 data points and performed significantly better than an intercept-only base-line model (F (1, 58): 17.55, p-value < .001***), explaining 21.9 percent of the variance, which indicates a good model fit. According to this final model, group A scored significantly better on the language learning test than group B (coefficient: -3.17, SE: 0.76, t-value: -4.19, p-value < .001***).

1.2 Multiple Linear Regression

In contrast to simple linear regression, which estimates the effect of a single predictor, multiple linear regression estimates the effects of various predictors (see the equation below). A multiple linear regression can thus test the effects of various predictors simultaneously.

\[\begin{equation} f_{(x)} = \alpha + \beta_{1}x_{1} + \beta_{2}x_{2} + \dots + \beta_{n}x_{n} + \epsilon \end{equation}\]

There exists a wealth of literature focusing on multiple linear regressions and the concepts it is based on. For instance, there are Achen (1982), Bortz (2006), Crawley (2005), Faraway (2002), Field, Miles, and Field (2012) (my personal favourite), Gries (2009), Levshina (2015), and Wilcox (2009) to name just a few. Introductions to regression modeling in R are Baayen (2008), Crawley (2012), Gries (2009), or Levshina (2015).

The model diagnostics we are dealing with here are partly identical to the diagnostic methods discussed in the section on simple linear regression. Because of this overlap, diagnostics will only be described in more detail if they have not been described in the section on simple linear regression.

A brief note on the minimum necessary sample or data set size appears necessary here. Although there appears to be a general assumption that 25 data points per group are sufficient, this is not necessarily correct (it is merely a general rule of thumb that is actually often incorrect). Such rules of thumb are inadequate because the required sample size depends on the number of variables in a given model, the size of the effect, and the variance of the effect. If a model contains many variables, then this requires a larger sample size than a model which only uses very few predictors. Also, to detect an effect with a very minor effect size, one needs a substantially larger sample compared to cases where the effect is very strong. In fact, when dealing with small effects, models require a minimum of 600 cases to reliably detect these effects. Finally, effects that are very robust and do not vary much require a much smaller sample size compared with effects that are spurious and vary substantially. Since the sample size depends on the effect size and variance as well as the number of variables, there is no one-size-fits-all answer to what the best sample size is.

Another, slightly better but still incorrect, rule of thumb is that the more data, the better. This is not correct because models based on too many cases are prone to overfitting and thus report correlations as significant that are not. However, given that there are procedures that can correct for overfitting, larger data sets are still preferable to data sets that are simply too small to warrant reliable results. In conclusion, it remains true that the sample size depends on the effect under investigation.

Despite there being no ultimate rule of thumb, Field, Miles, and Field (2012, 273-275), based on Green (1991), provide data-driven suggestions for the minimal size of data required for regression models that aim to find medium-sized effects (k = number of predictors; categorical variables with more than two levels should be transformed into dummy variables):

  • If one is merely interested in the overall model fit (something I have not encountered), then the sample size should be at least 50 + 8k (k = number of predictors in model).
  • If one is only interested in the effect of specific variables, then the sample size should be at least 104 + k (k = number of predictors in model).
  • If one is interested in both model fit and the effect of specific variables, then the sample size should be the higher value of 50 + 8k or 104 + k (k = number of predictors in model) - see the sketch below the list.
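
These minimum sample sizes are easy to compute directly; the sketch below uses a hypothetical model with k = 3 predictors (the value of k is merely a placeholder for illustration):

# minimal sample sizes for medium-sized effects (Green 1991)
k <- 3                    # hypothetical number of predictors
50 + 8 * k                # interest in the overall model fit
104 + k                   # interest in the effects of specific variables
max(50 + 8 * k, 104 + k)  # interest in both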

You will see in the R code below that there is already a function that tests whether the sample size is sufficient.

Example: Gifts and Availability

The example we will go through here is taken from Field, Miles, and Field (2012). In this example, the research question is whether the money that men spend on presents for women depends on the women’s attractiveness and their relationship status. To answer this research question, we will implement a multiple linear regression and start by preparing the R session (activating necessary packages and loading functions).

# load libraries
library(boot)
library(Boruta)
library(car)
library(caret)
library(DT)
library(effects)
library(foreign)
library(ggplot2)
library(ggeffects)
library(gridExtra)
library(Hmisc)
library(knitr)
library(lme4)
library(MASS)
library(mlogit)
library(msm)
library(MuMIn)
library(nlme)
library(plyr)
library(QuantPsyc)
library(reshape2)
library(rms)
library(sandwich)
library(sfsmisc)
library(sjPlot)
library(stringr)
library(vcd)
library(visreg)
# load functions
source("https://slcladal.github.io/rscripts/blrsummary.r")
source("https://slcladal.github.io/rscripts/multiplot.r")
source("https://slcladal.github.io/rscripts/mlinrsummary.r")
source("https://slcladal.github.io/rscripts/SampleSizeMLR.r")
source("https://slcladal.github.io/rscripts/ExpR.r")

After preparing the session, we can now load the data and inspect its structure and properties.

# load data
mlrdata <- read.delim("https://slcladal.github.io/data/mlrdata.txt", header = TRUE)
# inspect data
datatable(mlrdata, rownames = FALSE, options = list(pageLength = 5, scrollX=T), filter = "none")

The data set consists of three variables stored in three columns. The first column contains the relationship status of the women, the second whether the man is interested in the woman, and the third column represents the money spent on the present. The data set represents 100 cases, and the mean amount of money spent on a present is 88.38 dollars.
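
We can verify the number of cases and the mean amount of money spent with a quick check:

# number of cases and mean amount of money spent on presents
nrow(mlrdata)
mean(mlrdata$money)

In a next step, we visualize the data to get a more detailed impression of the relationships between variables.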

# create plots
p1 <- ggplot(mlrdata, aes(status, money)) +   # data + x/y-axes
  geom_boxplot(fill=c("grey30", "grey70")) + # def. col.
  theme_bw(base_size = 8)+   # black and white theme
  labs(x = "") +                        # x-axis label
  labs(y = "Money spent on present (AUD)", cex = .75) +   # y-axis label
  coord_cartesian(ylim = c(0, 250)) +   # y-axis range
  guides(fill = FALSE) +                # no legend
  ggtitle("Status")                     # title
# plot 2
p2 <- ggplot(mlrdata, aes(attraction, money)) +
  geom_boxplot(fill=c("grey30", "grey70")) +
  theme_bw(base_size = 8) +
  labs(x = "") +                              # x-axis label
  labs(y = "Money spent on present (AUD)") +  # y-axis label
  coord_cartesian(ylim = c(0, 250)) +
  guides(fill = FALSE) +
  ggtitle("Attraction")
# plot 3
p3 <- ggplot(mlrdata, aes(x = money)) +
  geom_histogram(aes(y=..density..),    # add density statistic
                 binwidth = 10,         # def. bin width
                 colour = "black",      # def. bar edge colour
                 fill = "white") +      # def. bar col.
    theme_bw() +                        # black-white theme
  geom_density(alpha=.2, fill = "gray50") + # def. col. of overlay
    labs(x = "Money spent on present (AUD)") +
  labs(y = "Density of frequency")
# plot 4
p4 <- ggplot(mlrdata, aes(status, money)) +
  geom_boxplot(notch = F, aes(fill = factor(status))) + # create boxplot
  scale_fill_manual(values = c("grey30", "grey70")) +   # def. col. palette
  facet_wrap(~ attraction, nrow = 1) +  # separate panels for attraction
  theme_set(theme_bw(base_size = 8)) +
  labs(x = "") +
  labs(y = "Money spent on present (AUD)") +
  coord_cartesian(ylim = c(0, 250)) +
  guides(fill = FALSE)
# show plots
grid.arrange(grobs = list(p1, p2, p3, p4), widths = c(1, 1), layout_matrix = rbind(c(1, 2), c(3, 4)))

The upper left figure consists of a boxplot which shows how much money was spent based on the relationship status of the woman. The figure suggests that men spend more on women who are not in a relationship. The next figure shows the relationship between the money spent on presents and whether or not the men were interested in the women.

The boxplot in the upper right panel suggests that men spend substantially more on women if the men are interested in them. The next figure depicts the distribution of the amounts of money spent on women. In addition, the figure indicates the existence of two outliers (dots in the boxplot).

The histogram in the lower left panel shows that, although the mean amount of money spent on presents is 88.38 dollars, the distribution peaks around 50 dollars, indicating that, on average, men spend about 50 dollars on presents. Finally, we will plot the amount of money spent on presents against relationship status by attraction in order to check whether the money spent on presents is affected by an interaction between attraction and relationship status.

The boxplot in the lower right panel confirms the existence of an interaction (a non-additive term) as men only spend more money on single women if the men are interested in the women. If men are not interested in the women, then the relationship has no effect as they spend an equal amount of money on the women regardless of whether they are in a relationship or not.

We will now start to implement the regression model. In a first step, we create two saturated base-line models that contain all possible predictors (main effects and interactions). The two models are identical but one is generated with the “lm” and the other with the “glm” function as these functions offer different model parameters in their output.

m1.mlr = lm(                      # generate lm regression object
  money ~ 1 + attraction*status,  # def. regression formula (1 = intercept)
  data = mlrdata)                 # def. data
m1.glm = glm(                     # generate glm regression object
  money ~ 1 + attraction*status,  # def. regression formula (1 = intercept)
  family = gaussian,              # def. family and link function
  data = mlrdata)                 # def. data

After generating the saturated base-line models, we can now start with the model fitting. Model fitting refers to a process that aims at finding the model that explains a maximum of variance with a minimum of predictors (see Field, Miles, and Field 2012, 318). Model fitting is therefore based on the principle of parsimony, which is related to Occam’s razor, according to which explanations that require fewer assumptions are more likely to be true.

Automatic Model Fitting and Why You Should Not Use It

In this section, we will use a step-wise step-down procedure that uses decreases in AIC (Akaike information criterion) as the criterion to minimize the model in a step-wise manner. This procedure aims at finding the model with the lowest AIC values by evaluating - step-by-step - whether the removal of a predictor (term) leads to a lower AIC value.

We use this method here just so that you know it exists and how to implement it, but you should rather avoid using automated model fitting. The reason for avoiding automated model fitting is that the algorithm only checks whether the AIC has decreased but not whether the model is stable or reliable. Thus, automated model fitting has the problem that you can never be sure that the path that led you to the final model is reliable and that all models were indeed stable. Imagine you want to climb down from a rooftop and you have a ladder, but you do not know whether and how many of its rungs are broken. This is similar to using automated model fitting. In other sections, we will explore better methods to fit models (manual step-wise step-up and step-down procedures, for example).

The AIC is calculated using the equation below. The lower the AIC value, the better the balance between explained variance and the number of predictors. AIC values can and should only be compared for models that are fit on the same dataset with the same (number of) cases (\(LL\) stands for LogLikelihood and \(k\) represents the number of predictors in the model).

\[\begin{equation} -2LL + 2k \label{eq:aic} \end{equation}\]
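
We can verify this formula in R (a quick check on the saturated model m1.mlr; logLik extracts the log-likelihood, and its df attribute gives the number of estimated parameters, including the residual variance). Note that the step function below reports AIC values based on extractAIC, which omits an additive constant, so its values differ from those of AIC() by a fixed offset.

# manual AIC computation for the saturated model
ll <- logLik(m1.mlr)         # log-likelihood of the model
k  <- attr(ll, "df")         # number of estimated parameters
-2 * as.numeric(ll) + 2 * k  # same value as returned by AIC()
AIC(m1.mlr)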

Interactions are evaluated first, and only if all interactions have been removed would the procedure start removing main effects. Other model fitting procedures (forced entry, step-wise step-up, hierarchical) are discussed during the implementation of other regression models. We cannot discuss all procedures here as model fitting is rather complex, and a discussion of even the most common procedures would be too lengthy and time-consuming at this point. It is important to note, though, that there is no perfect model fitting procedure, and automated approaches should be handled with care as they are likely to ignore violations of model parameters that can be detected during manual - but time-consuming - model fitting procedures. As a general rule of thumb, it is advisable to fit models as carefully and deliberately as possible. We will now begin to fit the model.

# automated AIC based model fitting
step(m1.mlr, direction = "both")
## Start:  AIC=592.52
## money ~ 1 + attraction * status
## 
##                     Df Sum of Sq   RSS    AIC
## <none>                           34558 592.52
## - attraction:status  1     24947 59505 644.86
## 
## Call:
## lm(formula = money ~ 1 + attraction * status, data = mlrdata)
## 
## Coefficients:
##                          (Intercept)               attractionNotInterested  
##                                99.15                                -47.66  
##                         statusSingle  attractionNotInterested:statusSingle  
##                                57.69                                -63.18

The automated model fitting procedure informs us that removing predictors has not caused a decrease in the AIC. The saturated model is thus also the final minimal adequate model. We will now inspect the final minimal model and go over the model report.

m2.mlr = lm(                       # generate lm regression object
  money ~ (status + attraction)^2, # def. regression formula
  data = mlrdata)                  # def. data
m2.glm = glm(                      # generate glm regression object
  money ~ (status + attraction)^2, # def. regression formula
  family = gaussian,               # def. family and link function
  data = mlrdata)                  # def. data
# inspect final minimal model
summary(m2.mlr)
## 
## Call:
## lm(formula = money ~ (status + attraction)^2, data = mlrdata)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -45.08 -14.26   0.46  11.93  44.14 
## 
## Coefficients:
##                                      Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                            99.155      3.795  26.131  < 2e-16 ***
## statusSingle                           57.693      5.366  10.751  < 2e-16 ***
## attractionNotInterested               -47.663      5.366  -8.882 3.75e-14 ***
## statusSingle:attractionNotInterested  -63.179      7.589  -8.325 5.81e-13 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 18.97 on 96 degrees of freedom
## Multiple R-squared:  0.852,  Adjusted R-squared:  0.8474 
## F-statistic: 184.3 on 3 and 96 DF,  p-value: < 2.2e-16

The first element of the report is called Call and it reports the regression formula of the model. Then, the report provides the residual distribution (the range, median and quartiles of the residuals) which allows drawing inferences about the distribution of differences between observed and expected values. If the residuals are distributed unevenly, then this is a strong indicator that the model is unstable and unreliable because mathematical assumptions on which the model is based are violated.

Next, the model summary reports the most important part: a table with model statistics of the fixed-effects structure of the model. The table contains the estimates (coefficients of the predictors), standard errors, t-values, and the p-values which show whether a predictor significantly correlates with the dependent variable that the model investigates.

All main effects (status and attraction) as well as the interaction between status and attraction are reported as being significantly correlated with the dependent variable (money). An interaction occurs if the correlation between the dependent variable and a predictor is affected by another predictor.

The topmost term is called the intercept and has a value of 99.15, which represents the base estimate to which all other estimates refer. To exemplify what this means, let us consider what the model would predict a man would spend on a present for a woman who is single and in whom he is not interested: the amount he would spend (based on the model) would be 99.15 dollars (the intercept) plus 57.69 dollars (because she is single) minus 47.66 dollars (because he is not interested in her) minus 63.18 dollars (because of the interaction between status and attraction).

#intercept  Single  NotInterested  Single:NotInterested
99.15     + 57.69  + 0           + 0     # 156.8 single + interested
## [1] 156.84
99.15     + 57.69  - 47.66       - 63.18 # 46.00 single + not interested
## [1] 46
99.15     - 0      + 0           - 0     # 99.15 relationship + interested
## [1] 99.15
99.15     - 0      - 47.66       - 0     # 51.49 relationship + not interested
## [1] 51.49

Interestingly, the model predicts that a man would invest even less money in a woman he is not interested in if she were single compared to her being in a relationship! We can derive the same results more easily using the “predict” function.

# make prediction based on the model for original data
prediction <- predict(m2.mlr, newdata = mlrdata)
# inspect predictions
table(round(prediction,2))
## 
##  46.01  51.49  99.15 156.85 
##     25     25     25     25

Below the table of coefficients, the regression summary reports model statistics that provide information about how well the model performs. The difference between these values and the values in the coefficients table is that the model statistics refer to the model as a whole rather than to individual predictors.

The multiple R2-value is a measure of how much variance the model explains. A multiple R2-value of 0 would inform us that the model does not explain any variance, while a value of .852 means that the model explains 85.2 percent of the variance. A value of 1 would inform us that the model explains 100 percent of the variance and that the predictions of the model match the observed values perfectly. Multiplying the multiple R2-value by 100 thus provides the percentage of explained variance. Models that have a multiple R2-value equal to or higher than .05 are deemed substantially significant (see Szmrecsanyi 2006, 55). It has been claimed that models should explain a minimum of 5 percent of variance, but this is problematic as it is not uncommon for models to have very low explanatory power while still performing significantly and systematically better than chance. In addition, the total amount of variance is negligible in cases where one is interested in very weak but significant effects. It is much more important for a model to perform significantly better than minimal base-line models because, if this is not the case, then the model does not have any predictive and therefore no explanatory power.

The adjusted R2-value considers the amount of explained variance in light of the number of predictors in the model (it is thus somewhat similar to the AIC and BIC) and informs us about how well the model would perform if it were applied to the population that the sample is drawn from. Ideally, the difference between the multiple and adjusted R2-values should be very small, as this means that the model is not overfitted. If, however, the difference between the multiple and adjusted R2-values is substantial, then this would strongly suggest that the model is unstable and overfitted to the data while being inadequate for drawing inferences about the population. Differences between the multiple and adjusted R2-values indicate that the data contains outliers that cause the distribution of the data on which the model is based to differ from the distribution that the model mathematically requires to provide reliable estimates. The difference between the multiple and adjusted R2-values in our model is very small (85.2 − 84.7 = .5 percentage points) and should not cause concern.
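
The adjusted R2 can also be recovered manually from the multiple R2 (a quick check; n = number of cases, k = number of predictors in the model):

# adjusted R-squared = 1 - (1 - R2) * (n - 1) / (n - k - 1)
r2 <- summary(m2.mlr)$r.squared
n  <- nrow(mlrdata)
k  <- length(coef(m2.mlr)) - 1
1 - (1 - r2) * (n - 1) / (n - k - 1)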

Before continuing, we will calculate the confidence intervals of the coefficients.

# extract confidence intervals of the coefficients
confint(m2.mlr)
##                                          2.5 %    97.5 %
## (Intercept)                           91.62258 106.68702
## statusSingle                          47.04063  68.34497
## attractionNotInterested              -58.31497 -37.01063
## statusSingle:attractionNotInterested -78.24324 -48.11436
# create and compare baseline- and minimal adequate model
m0.mlr <- lm(money ~1, data = mlrdata)
anova(m0.mlr, m2.mlr)
## Analysis of Variance Table
## 
## Model 1: money ~ 1
## Model 2: money ~ (status + attraction)^2
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1     99 233562                                  
## 2     96  34558  3    199005 184.28 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Now, we compare the final minimal adequate model to the base-line model to test whether the final model significantly outperforms the base-line model.

# compare baseline- and minimal adequate model
Anova(m0.mlr, m2.mlr, type = "III")
## Anova Table (Type III tests)
## 
## Response: money
##             Sum Sq Df F value    Pr(>F)    
## (Intercept) 781016  1  2169.6 < 2.2e-16 ***
## Residuals    34558 96                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The comparison between the two models confirms that the minimal adequate model performs significantly better (makes significantly more accurate estimates of the outcome variable) compared with the base-line model.

Outlier Detection

After implementing the multiple regression, we now need to look for outliers and perform the model diagnostics by testing whether removing data points disproportionately decreases model fit. To begin with, we generate diagnostic plots.

# start plotting
par(mfrow = c(2, 2)) # display plots in 2 rows/2 columns
plot(m2.mlr); par(mfrow = c(1, 1)) # create plots and restore original settings
## hat values (leverages) are all = 0.04
##  and there are no factor predictors; no plot no. 5

The plots do not show severe problems such as funnel-shaped patterns or drastic deviations from the diagonal line in the Normal Q-Q plot (have a look at the explanation of what to look for and how to interpret these diagnostic plots in the section on simple linear regression), but data points 52, 64, and 83 are repeatedly indicated as potential outliers.

# determine a cutoff for data points that have D-values higher than 4/(n-k-1)
cutoff <- 4/((nrow(mlrdata)-length(m2.mlr$coefficients)-2))
# start plotting
par(mfrow = c(1, 2))           # display plots in 1 row/2 columns
qqPlot(m2.mlr, main="QQ Plot") # create qq-plot
## [1] 52 83
plot(m2.mlr, which=4, cook.levels = cutoff); par(mfrow = c(1, 1))

The graphs indicate that data points 52, 64, and 83 may be problematic. We will therefore statistically evaluate whether these data points need to be removed. In order to find out which data points require removal, we extract the influence measure statistics and add them to our data set.

# extract influence statistics
infl <- influence.measures(m2.mlr)
# add infl. statistics to data
mlrdata <- data.frame(mlrdata, infl[[1]], infl[[2]])
# annotate too influential data points
remove <- apply(infl$is.inf, 1, function(x) {
  ifelse(x == TRUE, return("remove"), return("keep")) } )
# add annotation to data
mlrdata <- data.frame(mlrdata, remove)
# number of rows before removing outliers
nrow(mlrdata)
## [1] 100
# remove outliers
mlrdata <- mlrdata[mlrdata$remove == "keep", ]
# number of rows after removing outliers
nrow(mlrdata)
## [1] 98

The difference in the number of rows before and after removing data points indicates that two data points which represented outliers have been removed.

Rerun Regression

As we have a different data set now, we need to rerun the regression analysis. As the steps are identical to the regression analysis performed above, the steps will not be described in greater detail.

# recreate regression models on new data
m0.mlr = lm(money ~ 1, data = mlrdata)
m0.glm = glm(money ~ 1, family = gaussian, data = mlrdata)
m1.mlr = lm(money ~ (status + attraction)^2, data = mlrdata)
m1.glm = glm(money ~ status * attraction, family = gaussian,
             data = mlrdata)
# automated AIC based model fitting
step(m1.mlr, direction = "both")
## Start:  AIC=570.29
## money ~ (status + attraction)^2
## 
##                     Df Sum of Sq   RSS    AIC
## <none>                           30411 570.29
## - status:attraction  1     21647 52058 620.96
## 
## Call:
## lm(formula = money ~ (status + attraction)^2, data = mlrdata)
## 
## Coefficients:
##                          (Intercept)                          statusSingle  
##                                99.15                                 55.85  
##              attractionNotInterested  statusSingle:attractionNotInterested  
##                               -47.66                                -59.46
# create new final models
m2.mlr = lm(money ~ (status + attraction)^2, data = mlrdata)
m2.glm = glm(money ~ status * attraction, family = gaussian,
             data = mlrdata)
# inspect final minimal model
summary(m2.mlr)
## 
## Call:
## lm(formula = money ~ (status + attraction)^2, data = mlrdata)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -35.764 -13.505  -0.989  10.599  38.772 
## 
## Coefficients:
##                                      Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                            99.155      3.597  27.563  < 2e-16 ***
## statusSingle                           55.854      5.140  10.866  < 2e-16 ***
## attractionNotInterested               -47.663      5.087  -9.369 4.04e-15 ***
## statusSingle:attractionNotInterested  -59.461      7.269  -8.180 1.34e-12 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 17.99 on 94 degrees of freedom
## Multiple R-squared:  0.8574, Adjusted R-squared:  0.8528 
## F-statistic: 188.4 on 3 and 94 DF,  p-value: < 2.2e-16
# extract confidence intervals of the coefficients
confint(m2.mlr)
##                                          2.5 %    97.5 %
## (Intercept)                           92.01216 106.29744
## statusSingle                          45.64764  66.05943
## attractionNotInterested              -57.76402 -37.56158
## statusSingle:attractionNotInterested -73.89468 -45.02805
# compare baseline with final model
anova(m0.mlr, m2.mlr)
## Analysis of Variance Table
## 
## Model 1: money ~ 1
## Model 2: money ~ (status + attraction)^2
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1     97 213227                                  
## 2     94  30411  3    182816 188.36 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# compare baseline with final model
Anova(m0.mlr, m2.mlr, type = "III")
## Anova Table (Type III tests)
## 
## Response: money
##             Sum Sq Df F value    Pr(>F)    
## (Intercept) 760953  1  2352.1 < 2.2e-16 ***
## Residuals    30411 94                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Additional Model Diagnostics

After rerunning the regression analysis on the updated data set, we again create diagnostic plots in order to check whether there are potentially problematic data points.

# start plotting
par(mfrow = c(2, 2)) # display plots in 2 rows/2 columns
plot(m2.mlr)         # plot fitted values

par(mfrow = c(1, 1)) # restore original settings
# determine a cutoff for data points that have
# D-values higher than 4/(n-k-1)
cutoff <- 4/((nrow(mlrdata)-length(m2.mlr$coefficients)-2))
# start plotting
par(mfrow = c(1, 2))           # display plots in 1 row/2 columns
qqPlot(m2.mlr, main="QQ Plot") # create qq-plot
## 84 88 
## 82 86
plot(m2.mlr, which=4, cook.levels = cutoff) # plot cook*s distance

par(mfrow = c(1, 1))           # restore original settings

The diagnostic plots indicate that additional data points may be problematic, but these data points deviate substantially less from the trend than was the case with the data points that have already been removed. To make sure that retaining the data points that are deemed potentially problematic by the diagnostic plots is acceptable, we extract diagnostic statistics and add them to the data.

# add model diagnostics to the data
mlrdata$residuals <- resid(m2.mlr)
mlrdata$standardized.residuals <- rstandard(m2.mlr)
mlrdata$studentized.residuals <- rstudent(m2.mlr)
mlrdata$cooks.distance <- cooks.distance(m2.mlr)
mlrdata$dffit <- dffits(m2.mlr)
mlrdata$leverage <- hatvalues(m2.mlr)
mlrdata$covariance.ratios <- covratio(m2.mlr)
mlrdata$fitted <- m2.mlr$fitted.values

We can now use these diagnostic statistics to create more precise diagnostic plots.

# plot 5: histogram of studentized residuals with normal curve overlay
p5 <- ggplot(mlrdata,
             aes(studentized.residuals)) +
  theme(legend.position = "none") +
  theme_set(theme_bw(base_size = 8))+
  geom_histogram(aes(y=..density..),
                 binwidth = 1,
                 colour="black",
                 fill="white") +
  labs(x = "Studentized Residual", y = "Density") +
  stat_function(fun = dnorm,
                args = list(mean = mean(mlrdata$studentized.residuals, na.rm = TRUE),
                            sd = sd(mlrdata$studentized.residuals, na.rm = TRUE)),
                colour = "red", size = 1)
# plot 6: studentized residuals against fitted values
p6 <- ggplot(mlrdata, aes(fitted, studentized.residuals)) +
  geom_point() +
  geom_smooth(method = "lm", colour = "Red")+
  theme_bw(base_size = 8)+
  labs(x = "Fitted Values",
       y = "Studentized Residual")
# plot 7: qq-plot of studentized residuals
p7 <- qplot(sample = mlrdata$studentized.residuals, stat="qq") +
  theme_bw(base_size = 8) +
  labs(x = "Theoretical Values",
       y = "Observed Values")
grid.arrange(p5, p6, p7, nrow = 1)
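
Note that the qplot(..., stat = "qq") syntax used for plot 7 has been deprecated in more recent versions of ggplot2. Assuming a current ggplot2 version, an equivalent qq-plot can be created with stat_qq:

# equivalent qq-plot for newer versions of ggplot2
p7 <- ggplot(mlrdata, aes(sample = studentized.residuals)) +
  stat_qq() +
  theme_bw(base_size = 8) +
  labs(x = "Theoretical Values",
       y = "Observed Values")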

The new diagnostic plots do not indicate outliers that require removal. When deciding whether data points should be removed, the following guidelines should be considered:

  1. Data points with standardized residuals > 3.29 should be removed (Field, Miles, and Field 2012, 269)

  2. If more than 1 percent of data points have standardized residuals > 2.58, then the error rate of the model is unacceptable (Field, Miles, and Field 2012, 269).

  3. If more than 5 percent of data points have standardized residuals > 1.96, then the error rate of the model is unacceptable (Field, Miles, and Field 2012, 269)

  4. In addition, data points with Cook’s D-values > 1 should be removed (Field, Miles, and Field 2012, 269)

  5. Also, data points with leverage values > \(3(k + 1)/n\) (k = number of predictors, n = number of cases in the model) should be removed (Field, Miles, and Field 2012, 270)

  6. There should not be any autocorrelation among predictors, i.e. the observations of an independent variable must not be correlated with each other (for instance, because data points come from the same subject). If there is autocorrelation in the data, then a repeated-measures design or a (hierarchical) mixed-effects model should be implemented instead.

  7. Predictors must not substantially correlate with each other (multicollinearity). If a model contains predictors with variance inflation factors (VIF) > 10, the model is unreliable (Myers 1990), and the predictors causing such VIFs should be removed. Indeed, even VIFs of 2.5 can be problematic (Szmrecsanyi 2006, 215), and Zuur, Ieno, and Elphick (2010) propose that variables with VIFs exceeding 3 should be removed.

  8. Predictors with tolerance values (1/VIF) \(<\) .1 must be removed; values below .2 are considered problematic (Menard 1995).

  9. The mean value of VIFs should be \(<\) 1 (Bowerman and O’Connell 1990).

# 1: optimal = 0
# (listed data points should be removed)
which(abs(mlrdata$standardized.residuals) > 3.29)
## integer(0)
# 2: the percentage of standardized residuals > 2.58
# should not exceed 1 percent
stdres_258 <- ifelse(abs(mlrdata$standardized.residuals) > 2.58, 1, 0)
(sum(stdres_258) / length(stdres_258)) * 100
## [1] 0
# 3: the percentage of standardized residuals > 1.96
# should not exceed 5 percent
stdres_196 <- ifelse(abs(mlrdata$standardized.residuals) > 1.96, 1, 0)
(sum(stdres_196) / length(stdres_196)) * 100
## [1] 6.122449
# 4: optimal = 0
# (listed data points should be removed)
which(mlrdata$cooks.distance > 1)
## integer(0)
# 5: optimal = 0
# (data points with leverage values greater than
# 3 times the mean leverage, i.e. 3(k+1)/n, should be removed)
which(mlrdata$leverage >= (3*mean(mlrdata$leverage)))
## integer(0)
# 6: checking autocorrelation:
# Durbin-Watson test (optimal: large p-value)
dwt(m2.mlr)
##  lag Autocorrelation D-W Statistic p-value
##    1     -0.01433247      1.968042   0.666
##  Alternative hypothesis: rho != 0
# 7: test multicollinearity: variance inflation factors (VIF)
vif(m2.mlr)
##                         statusSingle              attractionNotInterested 
##                                 2.00                                 1.96 
## statusSingle:attractionNotInterested 
##                                 2.96
# 8: test multicollinearity: tolerance (1/VIF)
1/vif(m2.mlr)
##                         statusSingle              attractionNotInterested 
##                            0.5000000                            0.5102041 
## statusSingle:attractionNotInterested 
##                            0.3378378
# 9: mean vif should not exceed 1
mean(vif(m2.mlr))
## [1] 2.306667

Except for the mean VIF value (2.307), which should not exceed 1, and the percentage of standardized residuals above 1.96 (6.12 percent, which slightly exceeds the recommended 5-percent threshold), the diagnostics are acceptable. We will now test whether the sample size is sufficient for our model. With respect to the minimal sample size and based on Green (1991), Field, Miles, and Field (2012, 273–74) offer the following rules of thumb (k = number of predictors; categorical predictors with more than two levels should be recoded as dummy variables):

  - if one is interested in the overall fit of the model, the sample should contain at least 50 + 8k data points

  - if one is interested in the effects of individual predictors, the sample should contain at least 104 + k data points
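
The short calculation below (added here for illustration) shows what these rules of thumb imply for our model with its three predictor terms:

# minimal sample sizes according to Green (1991)
k <- 3        # predictor terms in m2.mlr (status, attraction, their interaction)
50 + 8 * k    # minimal N for testing the overall model: 74
104 + k       # minimal N for testing individual predictors: 107

The second value (107) corresponds to the shortfall of 9 data points reported by the smplesz function below (98 + 9 = 107).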

Evaluation of Sample Size

After performing the diagnostics, we will now test whether the sample size is adequate and what value of R would be expected by chance alone in order to estimate how likely a \(\beta\)-error is given the present sample size (see Field, Miles, and Field 2012, 274). Beta errors (or \(\beta\)-errors) refer to the erroneous conclusion that a predictor is not significant (based on the analysis and given the sample) although it does have an effect in the population. In other words, committing a \(\beta\)-error means overlooking a real effect because of weaknesses of the analysis. The relevant test statistic ranges between 0 and 1, where lower values are better. If the value approximates 1, there is serious cause for concern because the model is not reliable given the sample size. In such cases, unfortunately, the best option is to increase the sample size.

# check if sample size is sufficient
smplesz(m2.mlr)
## [1] "Sample too small: please increase your sample by  9  data points"
# check beta-error likelihood
expR(m2.mlr)
## [1] "Based on the sample size expect a false positive correlation of 0.0309 between the predictors and the predicted"

The function “smplesz” reports that the sample size is insufficient by 9 data points according to Green (1991). The likelihood of \(\beta\)-errors, however, is very small (0.0309). As a last step, we summarize the results of the regression analysis.

tab_model(m0.glm, m2.glm)
| Predictors | Estimates (m0.glm) | CI (m0.glm) | p (m0.glm) | Estimates (m2.glm) | CI (m2.glm) | p (m2.glm) |
|---|---|---|---|---|---|---|
| (Intercept) | 88.12 | 78.72 – 97.52 | <0.001 | 99.15 | 92.10 – 106.21 | <0.001 |
| status [Single] | | | | 55.85 | 45.78 – 65.93 | <0.001 |
| attraction [NotInterested] | | | | -47.66 | -57.63 – -37.69 | <0.001 |
| status [Single] * attraction [NotInterested] | | | | -59.46 | -73.71 – -45.21 | <0.001 |
| Observations | 98 | | | 98 | | |
| R2 Nagelkerke | 0.000 | | | 1.000 | | |

Note: The R2 values in this report are incorrect! Because tab_model was applied to the glm versions of the models, Nagelkerke's R2 is reported. As we have seen above, the correct values are: Multiple R2 0.8574, Adjusted R2 0.8528:

summary(m2.mlr)
## 
## Call:
## lm(formula = money ~ (status + attraction)^2, data = mlrdata)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -35.764 -13.505  -0.989  10.599  38.772 
## 
## Coefficients:
##                                      Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                            99.155      3.597  27.563  < 2e-16 ***
## statusSingle                           55.854      5.140  10.866  < 2e-16 ***
## attractionNotInterested               -47.663      5.087  -9.369 4.04e-15 ***
## statusSingle:attractionNotInterested  -59.461      7.269  -8.180 1.34e-12 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 17.99 on 94 degrees of freedom
## Multiple R-squared:  0.8574, Adjusted R-squared:  0.8528 
## F-statistic: 188.4 on 3 and 94 DF,  p-value: < 2.2e-16

Although Field, Miles, and Field (2012) suggest that the main effects of the predictors involved in the interaction should not be interpreted, they are interpreted here to illustrate how the results of a multiple linear regression can be reported. Accordingly, the results of the regression analysis performed above can be summarized as follows:

A multiple linear regression was fitted to the data using an automated, step-wise, AIC-based (Akaike's Information Criterion) procedure. The model fitting arrived at a final minimal model. During the model diagnostics, two outliers were detected and removed. Further diagnostics did not reveal any other problematic data points after their removal.

The final minimal adequate regression model is based on 98 data points and performs highly significantly better than a minimal baseline model (Multiple R2: .857, Adjusted R2: .853, F-statistic (3, 94): 188.4, AIC: 850.4, BIC: 863.32, p<.001\(***\)). The final minimal adequate regression model reports attraction and status as significant main effects. The relationship status of the women correlates highly significantly and positively with the amount of money spent on the women's presents (SE: 5.14, t-value: 10.87, p<.001\(***\)): men spend 155.01 dollars on presents if the women are single, while they spend 99.15 dollars if the women are in a relationship. Whether men are interested in the women also correlates highly significantly with the money they spend on them (SE: 5.09, t-value: -9.37, p<.001\(***\)): if men are not interested in the women, they spend 47.66 dollars less on a present compared with women they are interested in.

Furthermore, the final minimal adequate regression model reports a highly significant interaction between relationship status and attraction (SE: 7.27, t-value: -8.18, p<.001\(***\)): if the women are single but the men are not interested in them, the men spend 59.46 dollars less on presents than the main effects of status and attraction alone would predict.
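
To double-check the group estimates reported above, the predicted means for each combination of status and attraction can be derived directly from the model. The sketch below assumes that the factor levels are named as in the coefficient labels (Relationship/Single and Interested/NotInterested):

# predicted money spent for each status/attraction combination
newd <- expand.grid(status = c("Relationship", "Single"),
                    attraction = c("Interested", "NotInterested"))
newd$predicted <- predict(m2.mlr, newdata = newd)
newd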

1.3 Multiple Binomial Logistic Regression

Logistic regression is a multivariate analysis technique that builds on, and is very similar in its implementation to, linear regression, but logistic regressions take dependent variables that represent nominal rather than numeric scaling (Harrell Jr 2015). This difference requires that the linear regression be modified in certain ways to avoid producing nonsensical outcomes. The most fundamental difference between logistic and linear regression is that logistic regression works on the probabilities of an outcome (the likelihood) rather than on the outcome itself. In addition, the probabilities on which logistic regression works are transformed into logged odds (logits) so that the model cannot produce predictions below 0 (instance does not occur) or above 1 (instance occurs).
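
As a purely illustrative example of this transformation, the snippet below converts a probability into odds and then into logged odds (logits):

# converting a probability into odds and log odds (logits)
p <- 0.8              # probability of an outcome
odds <- p / (1 - p)   # odds: 4 (the outcome is 4 times as likely as its absence)
log(odds)             # log odds (logit): approximately 1.386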

To understand what this means, we will use a very simple example in which we want to see whether the height of men affects their likelihood of being in a relationship. The data set consists of two variables: height and relationship.

The left panel of the figure above shows that a linear model would predict values for the relationship status, which represents a factor (0 = Not in Relationship, 1 = In Relationship), that are nonsensical: a predicted value of 1.1 does not make sense if the only options are 0 or 1. The logistic function shown in the right panel solves this problem by working on the logged probabilities of an outcome rather than on the outcome itself.
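
To illustrate why the logistic function cannot leave the interval between 0 and 1, the sketch below plots the inverse logit transformation, which maps any real-valued score onto a probability:

# plot the logistic (inverse logit) function
logits <- seq(-6, 6, 0.1)
probs <- 1 / (1 + exp(-logits))   # equivalent to plogis(logits)
plot(logits, probs, type = "l",
     xlab = "log odds (logits)",
     ylab = "predicted probability")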

Example 1: EH in Kiwi English

To exemplify how to implement a logistic regression in R (see Agresti 1996 and Agresti 2011 for very good and thorough introductions to this topic), we will analyse the use of the discourse particle eh in New Zealand English and test which factors correlate with its occurrence. The data set represents speech units in a corpus that were coded for the speaker who uttered a given speech unit, the gender, ethnicity, and age of that speaker, and whether or not the speech unit contained an eh. To begin with, we clean the current workspace, set options, install and activate the relevant packages, load customized functions, and load the example data set.

# load data
blrdata <- read.table("https://slcladal.github.io/data/blrdata.txt",
                      comment.char = "",  # data does not contain comments
                      quote = "",         # data does not contain quotes
sep = "\t",         # data is tab separated
                      header = T)         # variables have headers
# inspect data
datatable(blrdata, rownames = FALSE, options = list(pageLength = 5, scrollX=T), filter = "none")