Introduction

This tutorial contains the materials for the Introduction to R for (absolute) beginners workshop offered by the School of Languages and Cultures at the University of Queensland and it specifically focuses on R for analyzing language data. The R markdown document for this tutorial can be downloaded here. If you already have experience with R, both Wickham and Grolemund (2016) (see here) and Gillespie and Lovelace (2016) (see here) are excellent and highly recommended resources for improving your coding abilities and workflows in R.

Goals of this workshop

The goals of this workshop are to:

  • Get started with R
  • Orient yourself to R and R Studio
  • Create and work in R projects
  • Know where to look for help and learn more about R
  • Understand the basics of working with data: loading data, saving data, working with tables, and creating a simple plot
  • Learn some best practices for using R scripts, working with data, and managing projects
  • Understand the basics of objects, functions, and indexing

Audience

The intended audience is beginner-level, with no previous experience using R. Thus, no prior knowledge of R is required.

If you want to know more, would like to get some more practice, or would like to have another approach to R, please check out the workshops and resources on R provided by the UQ library. In addition, there are various online resources available to learn R (you can check out a very recommendable introduction here).

Installing R and R Studio

  • You have NOT yet installed R on your computer?

    • You have a Windows computer? Then click here for downloading and installing R

    • You have a Mac? Then click here for downloading and installing R

  • You have NOT yet installed R Studio on your computer?

    • Click here for downloading and installing R Studio.
  • You have NOT yet downloaded the materials for this workshop?

    • Click here to download the data for this session

    • Click here to download the Rmd-file of this workshop

You can find a more elaborate explanation of how to download and install R and R Studio here that was created by the UQ library.

How to use the workshop materials

You can follow this workshop in different ways based on your preferences as well as your prior experience and knowledge of R (the suggestions listed below are ordered from less engaged/easy/no knowledge required to more engaged/more complex/more knowledge required).

  • You can simply sit back and follow the workshop

  • You can load the Rmd-file in R Studio and execute the code snippets in this Rmd-file as we go (we will talk about what Rmd-files are, how they work, and how to work in R Studio below)

    • If you decide on doing this, then I suggest that you use a section of your screen for Zoom (to see what I do) and another section of your screen to work within your own R project (we will see what an R project is below)
  • You can load the Rmd-file in R Studio, create a new Rmd-file (or Notebook) and then copy and paste the code snippets in this new Rmd-file and execute them as we go.

    • This option requires some knowledge of R and R Studio

    • If you decide on doing this, then I suggest that you use a section of your screen for Zoom (to see what I do) and another section of your screen to work within your own R project (as with the previous option)

Future workshops will be interactive and allow you to write your own code into code boxes on the website - unfortunately, I was not able to integrate that for this workshop.

Preparation

Before you actually open R or R Studio, there are things to consider that make working in R much easier and give your workflow a better structure.

Imagine it like this: when you want to write a book, you could simply take pen and paper and start writing or you could think about what you want to write about, what different chapters your book would consist of, which chapters to write first, what these chapters will deal with, etc. The same is true for R: you could simply open R and start writing code or you can prepare your session and structure what you will be doing.

Folder structure and R projects

Before actually starting with writing code, you should prepare the session by going through the following steps:

1. Create a folder for your project

In that folder, create the following sub-folders (you can, of course, adapt this folder template to match your needs)

  • data (you do not create this folder for the present workshop as you can simply use the data folder that you downloaded for this workshop instead)
  • images
  • tables
  • docs

The folder for your project could look like the one shown below.

Once you have created your project folder, you can go ahead with R Studio.

2. Open R Studio

This is what R Studio looks like when you first open it:

In R Studio, click on File

You can use the drop-down menu to create a R project

3. R Projects

In R Studio, click on New Project

Next, select Existing Directory and confirm by clicking OK.

Then, navigate to where you have just created the project folder for this workshop.

Once you click on Open, you have created a new R project

4. R Notebooks

In this project, click on File

Click on New File and then on R Notebook as shown below.

This R Notebook will be the file in which you do all your work.

5. Getting started with R Notebooks

You can now start writing in this R Notebook. For instance, you could start by changing the title of the R Notebook and describe what you are doing (what this Notebook contains).

Below is a picture of what this document looked like when I started writing it.

When you write in the R Notebook, you use what is called R Markdown which is explained below.

R Markdown

The Notebook is an R Markdown document. An Rmd (R Markdown) file is more than a flat text document: it’s a program that you can run in R and which allows you to combine prose and code, so readers can see the technical aspects of your work while reading about their interpretive significance.

You can get a nice and short overview of the formatting options in R Markdown (Rmd) files here.

R Markdown allows you to make your research fully transparent and reproducible! If a couple of years down the line another researcher or a journal editor asked you how you have done your analysis, you can simply send them the Notebook or even the entire R-project folder.

As such, Rmd files are a type of document that allows you to

  • include snippets of code (and any outputs such as tables or graphs) in plain text while

  • encoding the structure of your document by using simple typographical symbols to encode formatting (rather than HTML tags or format types such as Main header or Header level 1 in Word).
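For example, a few simple typographical symbols are enough to format text in an Rmd file (a minimal sketch):

```
# A level-1 header

## A level-2 header

Some *italic* text, some **bold** text, and

- a bullet point
- another bullet point
```

When the document is rendered, these symbols are converted into the corresponding formatting.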

Markdown is really quite simple to learn.

R (Studio) Basics

R Studio is a so-called IDE - Integrated Development Environment. The interface provides easy access to R. The advantage of this application is that R programs and files as well as a project directory can be managed easily. The environment is capable of editing and running program code, viewing outputs and rendering graphics. Furthermore, it is possible to view variables and data objects of an R-script directly in the interface.

R Studio: Panes

The GUI (Graphical User Interface) that R Studio provides divides the screen into four areas that are called panes:

  1. File editor
  2. Environment variables
  3. R console
  4. Management panes (File browser, plots, help display and R packages).

The two most important are the R console (bottom left) and the File editor (or Script editor, top left). The Environment variables and Management panes are on the right of the screen and they contain:

  • Environment (top): Lists all currently defined objects and data sets
  • History (top): Lists all commands recently used or associated with a project
  • Plots (bottom): Graphical output goes here
  • Help (bottom): Find help for R packages and functions. Don’t forget you can type ? before a function name in the console to get info in the Help section.
  • Files (bottom): Shows the files available to you in your working directory

These R Studio panes are shown below.

R Console (bottom left pane)

The console pane allows you to quickly and immediately execute R code. You can experiment with functions here, or quickly print data for viewing.

Type a command next to the > prompt and press Enter to execute it.

Exercise

You can use R like a calculator. Try typing 2+8 into the R console.

2+8
## [1] 10

Here, the plus sign is the operator. Operators are symbols that represent some sort of action. However, R is, of course, much more than a simple calculator. To use R more fully, we need to understand objects, functions, and indexing - which we will learn about as we go.

For now, think of objects as nouns and functions as verbs.
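To make that metaphor concrete, here is a minimal sketch (the object name numbers is made up for illustration): an object stores data, and a function acts on it.

```r
# an object (a "noun") that stores two numbers
numbers <- c(2, 8)
# a function (a "verb") that does something to the object
sum(numbers)
## [1] 10
```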

Running commands from a script

To run code from a script, insert your cursor on a line with a command, and press CTRL/CMD+Enter.

Or highlight some code to only run certain sections of the command, then press CTRL/CMD+Enter to run.

Alternatively, use the Run button at the top of the pane to execute the current line or selection (see below).

Script Editor (top left pane)

In contrast to the R console, which quickly runs code, the Script Editor (in the top left) does not automatically execute code. The Script Editor allows you to save the code essential to your analysis. You can re-use that code in the moment, refer back to it later, or publish it for replication.

Now, that we have explored R Studio, we are ready to get started with R!

Getting started with R

This section introduces some basic concepts and procedures that help optimize your workflow in R.

Setting up an R session

At the beginning of a session, it is common practice to define some basic parameters. This is not strictly necessary, but it may help further down the line.

One of the things you can do to prepare your session is to clear the current workspace so that we do not erroneously rely on objects that are no longer there (carry-over effects). Also, this session preparation may include specifying options. In the present case, we

  • do not want R to automatically convert character strings into factors

  • want R to show numbers as plain numbers rather than in scientific notation (in scientific notation, 0.007 would be represented as 7e-03)

  • want R to show maximally 100 results (otherwise, it can happen that R prints out pages-after-pages of some numbers).

Again, the session preparation is not required or necessary but it can help avoid errors.

# clean current workspace
rm(list=ls(all=T))                                      
# set options
options(stringsAsFactors = F)                           
options(scipen = 100) 
options(max.print=100) 

In the script editor pane of R Studio, this would look like this:

Packages

When using R, most functions are not loaded or even installed automatically. Instead, most functions are contained in what are called packages.

R comes with about 30 packages (“base R”). There are over 10,000 user-contributed packages; you can discover these packages online. A prevalent collection of packages is the Tidyverse, which includes ggplot2, a package for making graphics.

Before being able to use a package, we need to install the package (using the install.packages function) and load the package (using the library function). However, a package only needs to be installed once(!) and can then simply be loaded. When you install a package, this will likely install several other packages it depends on. You should have already installed tidyverse before the workshop.

You must load the package in any new R session where you want to use that package. Below I show what you need to type when you want to install the tidyverse, the tidytext, the quanteda, the readxl, the tm, and the tokenizers packages (which are the packages that we will need in this workshop).

install.packages("tidyverse")
install.packages("tidytext")
install.packages("quanteda")
install.packages("readxl")
install.packages("tm")
install.packages("tokenizers")

To load these packages, use the library function which takes the package name as its main argument.

library(tidyverse)
library(tidytext)
library(quanteda)
library(readxl)
library(tm)
library(tokenizers)

The session preparation section of your Rmd file will thus also state which packages a script relies on.

In the script editor pane of R Studio, the code blocks that install and activate packages would look like this:

Getting help

When working with R, you will encounter issues and face challenges. A very good thing about R is that it provides various ways to get help or find information about the issues you face.

Finding help within R

To get help regarding what functions a package contains, which arguments a function takes, or how to use a function, you can use the help function or the apropos function, or you can simply type a ? before the function name (or ?? if a single ? does not give you any answers).

help(tidyverse) 
apropos("tidyverse")
?require

There are also other “official” help resources from R/R Studio.

Finding help online

One great thing about R is that you can very often find an answer to your question online.

Working with tables

We will now start working with data in R. As most of the data that we work with comes in tables, we will focus on this first before moving on to working with text data.

Loading data from the web

To show how data can be downloaded from the web, we will download a tab-separated txt-file. Translated to prose, the code below means Create an object called ICE_Ire_bio and in that object, store the result of the read.delim function.

read.delim stands for read delimited file and it takes the URL from which to load the data (or the path to the data on your computer) as its first argument. The sep argument stands for separator and the value \t means tab-separated; this is the second argument that the read.delim function takes. The third argument, header, can take either T(RUE) or F(ALSE) and it tells R if the data has column names (headers) or not.

Functions and Objects

In R, functions always have the following form: function(argument1, argument2, ..., argumentN). Typically a function does something to an object (e.g. a table), so that the first argument typically specifies the data to which the function is applied. Other arguments then allow you to add some information. Just as a side note, functions are also objects that do not contain data but instructions.

To assign content to an object, we use <- or = so that we provide a name for an object and then assign some content to it. For example, MyObject <- 1:3 means Create an object called MyObject that contains the numbers 1 to 3.
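Running this assignment example directly, a minimal sketch:

```r
# create an object called MyObject that contains the numbers 1 to 3
MyObject <- 1:3
# typing the name of the object prints its content
MyObject
## [1] 1 2 3
```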

# load data
ICE_Ire_bio <- read.delim("https://slcladal.github.io/data/BiodataIceIreland.txt", 
                      sep = "\t", header = T)

Inspecting data

There are many ways to inspect data. We will briefly go over the most common ways to inspect data.

The head function takes the data-object as its first argument and automatically shows the first 6 elements of an object (or rows if the data-object has a table format).

head(ICE_Ire_bio)
##   id file.speaker.id text.id spk.ref             zone      date    sex   age
## 1  1     <S1A-001$A> S1A-001       A northern ireland 1990-1994   male 34-41
## 2  2     <S1A-001$B> S1A-001       B northern ireland 1990-1994 female 34-41
## 3  3     <S1A-002$?> S1A-002       ?             <NA>      <NA>   <NA>  <NA>
## 4  4     <S1A-002$A> S1A-002       A northern ireland 2002-2005 female 26-33
## 5  5     <S1A-002$B> S1A-002       B northern ireland 2002-2005 female 19-25
## 6  6     <S1A-002$C> S1A-002       C northern ireland 2002-2005   male   50+
##   word.count
## 1        765
## 2       1298
## 3         23
## 4        391
## 5         47
## 6        200

We can also use the head function to inspect more or fewer elements: we can specify the number of elements (or rows) that we want to inspect as a second argument. In the example below, the 4 tells R that we only want to see the first 4 rows of the data.

head(ICE_Ire_bio, 4)
##   id file.speaker.id text.id spk.ref             zone      date    sex   age
## 1  1     <S1A-001$A> S1A-001       A northern ireland 1990-1994   male 34-41
## 2  2     <S1A-001$B> S1A-001       B northern ireland 1990-1994 female 34-41
## 3  3     <S1A-002$?> S1A-002       ?             <NA>      <NA>   <NA>  <NA>
## 4  4     <S1A-002$A> S1A-002       A northern ireland 2002-2005 female 26-33
##   word.count
## 1        765
## 2       1298
## 3         23
## 4        391

Exercise Time!

Download and inspect the first 7 rows of the data set that you can find under this URL: https://slcladal.github.io/data/lmemdata.txt. Can you guess what the data is about?

Accessing individual cells in a table

If you want to access specific cells in a table, you can do so by typing the name of the object and then specifying the rows and columns in square brackets (i.e. data[row, column]). For example, ICE_Ire_bio[2, 4] would show the value of the cell in the second row and fourth column of the object ICE_Ire_bio. We can also use the colon to define a range (as shown below, where 1:5 means from 1 to 5 and 1:3 means from 1 to 3). The command ICE_Ire_bio[1:5, 1:3] thus means:

Show me the first 5 rows and the first 3 columns of the data-object that is called ICE_Ire_bio.

ICE_Ire_bio[1:5, 1:3]
##   id file.speaker.id text.id
## 1  1     <S1A-001$A> S1A-001
## 2  2     <S1A-001$B> S1A-001
## 3  3     <S1A-002$?> S1A-002
## 4  4     <S1A-002$A> S1A-002
## 5  5     <S1A-002$B> S1A-002

Exercise

How would you inspect the content of the cells in 4th column, rows 3 to 5 of the ICE_Ire_bio data set?

Inspecting the structure of data

You can use the str function to inspect the structure of a data set. This means that this function will show the number of observations (rows) and variables (columns) and tell you what type of variables the data consists of:

  • int = integer
  • chr = character string
  • num = numeric
  • fct = factor

str(ICE_Ire_bio)
## 'data.frame':    1332 obs. of  9 variables:
##  $ id             : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ file.speaker.id: chr  "<S1A-001$A>" "<S1A-001$B>" "<S1A-002$?>" "<S1A-002$A>" ...
##  $ text.id        : chr  "S1A-001" "S1A-001" "S1A-002" "S1A-002" ...
##  $ spk.ref        : chr  "A" "B" "?" "A" ...
##  $ zone           : chr  "northern ireland" "northern ireland" NA "northern ireland" ...
##  $ date           : chr  "1990-1994" "1990-1994" NA "2002-2005" ...
##  $ sex            : chr  "male" "female" NA "female" ...
##  $ age            : chr  "34-41" "34-41" NA "26-33" ...
##  $ word.count     : int  765 1298 23 391 47 200 464 639 308 78 ...

The summary function summarizes the data.

summary(ICE_Ire_bio)
##        id         file.speaker.id      text.id            spk.ref         
##  Min.   :   1.0   Length:1332        Length:1332        Length:1332       
##  1st Qu.: 333.8   Class :character   Class :character   Class :character  
##  Median : 666.5   Mode  :character   Mode  :character   Mode  :character  
##  Mean   : 666.5                                                           
##  3rd Qu.: 999.2                                                           
##  Max.   :1332.0                                                           
##      zone               date               sex                age           
##  Length:1332        Length:1332        Length:1332        Length:1332       
##  Class :character   Class :character   Class :character   Class :character  
##  Mode  :character   Mode  :character   Mode  :character   Mode  :character  
##                                                                             
##                                                                             
##                                                                             
##    word.count    
##  Min.   :   0.0  
##  1st Qu.:  66.0  
##  Median : 240.5  
##  Mean   : 449.9  
##  3rd Qu.: 638.2  
##  Max.   :2565.0

Tabulating data

We can use the table function to create basic tables that extract raw frequency information. The following command tells us how many instances there are of each level of the variable date in the ICE_Ire_bio data.

Note: in order to access specific columns of a data frame, you can first type the name of the data set followed by a $ symbol and then the name of the column (or variable).

table(ICE_Ire_bio$date) 
## 
## 1990-1994 1995-2001 2002-2005 
##       905        67       270

Alternatively, you could, of course, index the column by using its position in the data set like this: ICE_Ire_bio[, 6] - the results of table(ICE_Ire_bio[, 6]) and table(ICE_Ire_bio$date) are the same! Also note that here we leave out the index for rows to tell R that we want all rows.
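You can check this equivalence on any small data frame; a minimal sketch with made-up data (the object toy is invented for this illustration):

```r
# a small made-up data frame
toy <- data.frame(id = 1:3,
                  date = c("1990-1994", "1990-1994", "2002-2005"))
# accessing the second column by name and by position gives the same result
identical(toy$date, toy[, 2])
## [1] TRUE
```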

When you want to cross-tabulate columns, it is often better to use the ftable function (ftable stands for frequency table).

ftable(ICE_Ire_bio$age, ICE_Ire_bio$sex)
##        female male
##                   
## 0-18        5    7
## 19-25     163   65
## 26-33      83   36
## 34-41      35   58
## 42-49      35   97
## 50+        63  138

Exercise

  1. Using the table function, how many women are in the data collected between 2002 and 2005?

  2. Using the ftable function, how many men are from Northern Ireland in the data collected between 1990 and 1994?

Saving data to your computer

To save tabular data on your computer, you can use the write.table function. This function requires the data that you want to save as its first argument, the location where you want to save the data as the second argument and the type of delimiter as the third argument.

write.table(ICE_Ire_bio, "data/ICE_Ire_bio.txt", sep = "\t") 

Loading data from your computer

To load tabular data from within your project folder (if it is in a tab-separated txt-file) you can also use the read.delim function. The only difference to loading from the web is that you use a path instead of a URL. If the txt-file is in the folder called data in your project folder, you would load the data as shown below.

ICE_Ire_bio <- read.delim("data/ICE_Ire_bio.txt", sep = "\t", header = T)

However, you can always just use the full path (and you must do this if the data is not in your project folder).

NOTE: you may have to change the path to the data!

ICE_Ire_bio <- read.delim("D:\\Uni\\UQ\\SLC\\LADAL\\SLCLADAL.github.io\\data/ICE_Ire_bio.txt", 
                      sep = "\t", header = T)

To check if this has worked, we will use the head function to see the first 6 rows of the data.

head(ICE_Ire_bio)
##   id file.speaker.id text.id spk.ref             zone      date    sex   age
## 1  1     <S1A-001$A> S1A-001       A northern ireland 1990-1994   male 34-41
## 2  2     <S1A-001$B> S1A-001       B northern ireland 1990-1994 female 34-41
## 3  3     <S1A-002$?> S1A-002       ?             <NA>      <NA>   <NA>  <NA>
## 4  4     <S1A-002$A> S1A-002       A northern ireland 2002-2005 female 26-33
## 5  5     <S1A-002$B> S1A-002       B northern ireland 2002-2005 female 19-25
## 6  6     <S1A-002$C> S1A-002       C northern ireland 2002-2005   male   50+
##   word.count
## 1        765
## 2       1298
## 3         23
## 4        391
## 5         47
## 6        200

Loading Excel data

To load Excel spreadsheets, you can use the read_excel function from the readxl package as shown below. However, it may be necessary to install and activate the readxl package first.

ICE_Ire_bio <- read_excel("data/ICEdata.xlsx")

We now briefly check column names to see if the loading of the data has worked.

colnames(ICE_Ire_bio)
## [1] "id"              "file.speaker.id" "text.id"         "spk.ref"        
## [5] "zone"            "date"            "sex"             "age"            
## [9] "word.count"

Renaming, Piping, and Filtering

To rename existing columns in a table, you can use the rename command which takes the table as the first argument, the new name as the second argument, then an equal sign (=), and finally, the old name as the third argument. For example, renaming a column OldName as NewName in a table called MyTable would look like this: rename(MyTable, NewName = OldName).

Piping is done using the %>% sequence and it can be translated as and then. In the example below, we create a new object (ICE_Ire_bio_edit) from the existing object (ICE_Ire_bio) and then we rename the columns in the new object. When we use piping, we do not need to name the data we are using as this is provided by the previous step.

ICE_Ire_bio_edit <- ICE_Ire_bio %>%
  rename(Id = id,
         FileSpeakerId = file.speaker.id,
         File = colnames(ICE_Ire_bio)[3],
         Speaker = colnames(ICE_Ire_bio)[4])
# inspect data
ICE_Ire_bio_edit[1:5, 1:6]
## # A tibble: 5 x 6
##      Id FileSpeakerId File    Speaker zone             date     
##   <dbl> <chr>         <chr>   <chr>   <chr>            <chr>    
## 1     1 <S1A-001$A>   S1A-001 A       northern ireland 1990-1994
## 2     2 <S1A-001$B>   S1A-001 B       northern ireland 1990-1994
## 3     3 <S1A-002$?>   S1A-002 ?       NA               NA       
## 4     4 <S1A-002$A>   S1A-002 A       northern ireland 2002-2005
## 5     5 <S1A-002$B>   S1A-002 B       northern ireland 2002-2005

A very handy way to rename many columns simultaneously is to use the str_to_title function, which capitalizes the first letter of each word. In the example below, we capitalize all first letters of the column names of our current data.

colnames(ICE_Ire_bio_edit) <- str_to_title(colnames(ICE_Ire_bio_edit))
# inspect data
ICE_Ire_bio_edit[1:5, 1:6]
## # A tibble: 5 x 6
##      Id Filespeakerid File    Speaker Zone             Date     
##   <dbl> <chr>         <chr>   <chr>   <chr>            <chr>    
## 1     1 <S1A-001$A>   S1A-001 A       northern ireland 1990-1994
## 2     2 <S1A-001$B>   S1A-001 B       northern ireland 1990-1994
## 3     3 <S1A-002$?>   S1A-002 ?       NA               NA       
## 4     4 <S1A-002$A>   S1A-002 A       northern ireland 2002-2005
## 5     5 <S1A-002$B>   S1A-002 B       northern ireland 2002-2005

To remove rows based on values in columns you can use the filter function.

ICE_Ire_bio_edit2 <- ICE_Ire_bio_edit %>%
  filter(Speaker != "?",
         !is.na(Zone),
         Date == "2002-2005",
         Word.count > 5)
# inspect data
head(ICE_Ire_bio_edit2)
## # A tibble: 6 x 9
##      Id Filespeakerid File   Speaker Zone          Date   Sex   Age   Word.count
##   <dbl> <chr>         <chr>  <chr>   <chr>         <chr>  <chr> <chr>      <dbl>
## 1     4 <S1A-002$A>   S1A-0~ A       northern ire~ 2002-~ fema~ 26-33        391
## 2     5 <S1A-002$B>   S1A-0~ B       northern ire~ 2002-~ fema~ 19-25         47
## 3     6 <S1A-002$C>   S1A-0~ C       northern ire~ 2002-~ male  50+          200
## 4     7 <S1A-002$D>   S1A-0~ D       northern ire~ 2002-~ fema~ 50+          464
## 5     8 <S1A-002$E>   S1A-0~ E       mixed betwee~ 2002-~ male  34-41        639
## 6     9 <S1A-002$F>   S1A-0~ F       northern ire~ 2002-~ fema~ 26-33        308

To select specific columns you can use the select function.

ICE_Ire_bio_selection <- ICE_Ire_bio_edit2 %>%
  select(File, Speaker, Word.count)
# inspect data
head(ICE_Ire_bio_selection)
## # A tibble: 6 x 3
##   File    Speaker Word.count
##   <chr>   <chr>        <dbl>
## 1 S1A-002 A              391
## 2 S1A-002 B               47
## 3 S1A-002 C              200
## 4 S1A-002 D              464
## 5 S1A-002 E              639
## 6 S1A-002 F              308

You can also use the select function to remove specific columns.

ICE_Ire_bio_selection2 <- ICE_Ire_bio_edit2 %>%
  select(-Id, -File, -Speaker, -Date, -Zone, -Age)
# inspect data
head(ICE_Ire_bio_selection2)
## # A tibble: 6 x 3
##   Filespeakerid Sex    Word.count
##   <chr>         <chr>       <dbl>
## 1 <S1A-002$A>   female        391
## 2 <S1A-002$B>   female         47
## 3 <S1A-002$C>   male          200
## 4 <S1A-002$D>   female        464
## 5 <S1A-002$E>   male          639
## 6 <S1A-002$F>   female        308

Ordering data

To order data, for instance, in ascending order according to a specific column you can use the arrange function.

ICE_Ire_bio_ordered_asc <- ICE_Ire_bio_selection2 %>%
  arrange(Word.count)
# inspect data
head(ICE_Ire_bio_ordered_asc)
## # A tibble: 6 x 3
##   Filespeakerid Sex    Word.count
##   <chr>         <chr>       <dbl>
## 1 <S1B-009$D>   female          6
## 2 <S1B-005$C>   female          7
## 3 <S1B-009$C>   male            7
## 4 <S1B-020$F>   male            7
## 5 <S1B-006$G>   female          9
## 6 <S2A-050$B>   male            9

To order data in descending order you can also use the arrange function and simply add a - before the column according to which you want to order the data.

ICE_Ire_bio_ordered_desc <- ICE_Ire_bio_selection2 %>%
  arrange(-Word.count)
# inspect data
head(ICE_Ire_bio_ordered_desc)
## # A tibble: 6 x 3
##   Filespeakerid Sex    Word.count
##   <chr>         <chr>       <dbl>
## 1 <S2A-055$A>   female       2355
## 2 <S2A-047$A>   male         2340
## 3 <S2A-035$A>   female       2244
## 4 <S2A-048$A>   male         2200
## 5 <S2A-015$A>   male         2172
## 6 <S2A-054$A>   female       2113

The output shows that the female speaker in file S2A-055 with the speaker identity A has the highest word count with 2,355 words.

Exercise

Using the data called ICE_Ire_bio, create a new data set called ICE_Ire_ordered and arrange the data in descending order by the number of words that each speaker has uttered. Who is the speaker with the highest word count?

Creating and changing variables

New columns are created, and existing columns can be changed, by using the mutate function. The mutate function takes two arguments (if the data does not have to be specified): the first argument is the (new) name of the column that you want to create and the second is what you want to store in that column. The = tells R that the new column will contain the result of the second argument.

In the example below, we create a new column called Texttype.

This new column should contain

  • the value PrivateDialoge if Filespeakerid contains the sequence S1A,

  • the value PublicDialogue if Filespeakerid contains the sequence S1B,

  • the value UnscriptedMonologue if Filespeakerid contains the sequence S2A,

  • the value ScriptedMonologue if Filespeakerid contains the sequence S2B,

  • the value of Filespeakerid if Filespeakerid contains neither S1A, S1B, S2A, nor S2B.

ICE_Ire_bio_texttype <- ICE_Ire_bio_selection2 %>%
  mutate(Texttype = ifelse(str_detect(Filespeakerid ,"S1A"), "PrivateDialoge",
                    ifelse(str_detect(Filespeakerid ,"S1B"), "PublicDialogue",
                    ifelse(str_detect(Filespeakerid ,"S2A"), "UnscriptedMonologue",
                    ifelse(str_detect(Filespeakerid ,"S2B"), "ScriptedMonologue",
                           Filespeakerid)))))
# inspect data
head(ICE_Ire_bio_texttype)
## # A tibble: 6 x 4
##   Filespeakerid Sex    Word.count Texttype      
##   <chr>         <chr>       <dbl> <chr>         
## 1 <S1A-002$A>   female        391 PrivateDialoge
## 2 <S1A-002$B>   female         47 PrivateDialoge
## 3 <S1A-002$C>   male          200 PrivateDialoge
## 4 <S1A-002$D>   female        464 PrivateDialoge
## 5 <S1A-002$E>   male          639 PrivateDialoge
## 6 <S1A-002$F>   female        308 PrivateDialoge

If-statements

We should briefly talk about if-statements (or ifelse in the present case). The ifelse function is both very powerful and extremely helpful as it allows you to assign values based on a test. As such, ifelse-statements can be read as:

If X is the case, then do A and if X is not the case do B! (If -> Then -> Else)

The nice thing about ifelse-statements is that they can be used in succession as we have done above. This can then be read as:

If X is the case, then do A, if Y is the case, then do B, else do Z
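Such a cascade can be sketched in a few lines of base R, using grepl (a base-R stand-in for str_detect) and some made-up file codes:

```r
# made-up file codes for illustration
ids <- c("S1A-002", "S1B-011", "S2A-005", "XYZ-001")
# nested ifelse: each test that fails falls through to the next one;
# the final argument (ids) supplies the default value
type <- ifelse(grepl("S1A", ids), "PrivateDialogue",
        ifelse(grepl("S1B", ids), "PublicDialogue",
        ifelse(grepl("S2A", ids), "UnscriptedMonologue", ids)))
type
## [1] "PrivateDialogue" "PublicDialogue" "UnscriptedMonologue" "XYZ-001"
```

The last element does not match any of the tests and therefore simply keeps its original value.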

Exercise

Using the data called ICE_Ire_bio, create a new data set called ICE_Ire_AgeGroup that contains a column called AgeGroup in which all speakers younger than 42 have the value young and all speakers aged 42 and over have the value old.

Tip: use if-statements to assign the old and young values.

Summarizing data

Summarizing data, for instance calculating totals or means across rows, is really helpful and can be done using the summarise function.

ICE_Ire_bio_summary1 <- ICE_Ire_bio_texttype %>%
  summarise(Words = sum(Word.count))
# inspect data
head(ICE_Ire_bio_summary1)
## # A tibble: 1 x 1
##    Words
##    <dbl>
## 1 141876

To get summaries of sub-groups or by variable level, we can use the group_by function and then use the summarise function.

ICE_Ire_bio_summary2 <- ICE_Ire_bio_texttype %>%
  group_by(Texttype, Sex) %>%
  summarise(Speakers = n(),
            Words = sum(Word.count))
# inspect data
head(ICE_Ire_bio_summary2)
## # A tibble: 6 x 4
## # Groups:   Texttype [3]
##   Texttype            Sex    Speakers Words
##   <chr>               <chr>     <int> <dbl>
## 1 PrivateDialoge      female      105 60024
## 2 PrivateDialoge      male         18  9628
## 3 PublicDialogue      female       63 24647
## 4 PublicDialogue      male         41 16783
## 5 UnscriptedMonologue female        3  6712
## 6 UnscriptedMonologue male         16 24082

Exercise

  1. Use the ICE_Ire_bio data and determine the number of words uttered by female speakers from Northern Ireland over the age of 50.

  2. Load the file exercisedata.txt and determine the mean scores of groups A and B.

Tip: to extract the mean, combine the summary function with the mean function.

Gathering and spreading data

The tidyr package has two very useful functions for gathering and spreading data that can be used to transform data to long and wide formats (you will see what this means below). The functions are called gather and spread.

We will use the data set called ICE_Ire_bio_summary2, which we created above, to demonstrate how this works.

We will first check out the spread-function to create different columns for women and men that show how many of them are represented in the different text types.

ICE_Ire_bio_summary_wide <- ICE_Ire_bio_summary2 %>%
  select(-Words) %>%
  spread(Sex, Speakers)
# inspect
ICE_Ire_bio_summary_wide
## # A tibble: 3 x 3
## # Groups:   Texttype [3]
##   Texttype            female  male
##   <chr>                <int> <int>
## 1 PrivateDialoge         105    18
## 2 PublicDialogue          63    41
## 3 UnscriptedMonologue      3    16

The data is now in what is called a wide-format as values are distributed across columns.

To reformat this back to a long-format where each column represents exactly one variable, we use the gather-function:

ICE_Ire_bio_summary_long <- ICE_Ire_bio_summary_wide %>%
  gather(Sex, Speakers, female:male)
# inspect
ICE_Ire_bio_summary_long
## # A tibble: 6 x 3
## # Groups:   Texttype [3]
##   Texttype            Sex    Speakers
##   <chr>               <chr>     <int>
## 1 PrivateDialoge      female      105
## 2 PublicDialogue      female       63
## 3 UnscriptedMonologue female        3
## 4 PrivateDialoge      male         18
## 5 PublicDialogue      male         41
## 6 UnscriptedMonologue male         16

Working with text

We have now worked through how to load, save, and edit tabulated data. However, R is also perfectly equipped for working with textual data, which is what we are going to concentrate on now.

Loading text data

To load text data from the web, we can use the read_file function, which takes the URL of the text as its first argument. In this case, we will load the 2016 rally speeches of Donald Trump.

Trump <- read_file("https://slcladal.github.io/data/Trump.txt")
# inspect data
str(Trump)
##  chr "?SPEECH 1\n...Thank you so much.  That's so nice.  Isn't he a great guy.  He doesn't get a fair press; he doesn"| __truncated__

It is very easy to extract frequency information and to create frequency lists. We can do this by first using the unnest_tokens function, which splits texts into individual words, and then using the count function to get the raw frequencies of all word types in a text.

tibble(text = Trump) %>%
  unnest_tokens(word, text) %>%
  count(word, sort=T)
## # A tibble: 5,981 x 2
##    word      n
##    <chr> <int>
##  1 i      6156
##  2 the    5924
##  3 to     5460
##  4 and    5438
##  5 we     3774
##  6 a      3592
##  7 it     3550
##  8 you    3476
##  9 they   3007
## 10 of     2953
## # ... with 5,971 more rows

Extracting n-grams is also very easy, as the unnest_tokens function takes an argument called token in which we can specify that we want to extract n-grams. If we do this, then we need to specify the n as a separate argument. Below we specify that we want the frequencies of all 4-grams.

tibble(text = Trump) %>%
  unnest_tokens(word, text, token="ngrams", n=4) %>%
  count(word, sort=T)
## # A tibble: 145,727 x 2
##    word                 n
##    <chr>            <int>
##  1 we re going to     552
##  2 i m going to       122
##  3 i don t know       110
##  4 they re going to   105
##  5 it s going to      103
##  6 you re going to    102
##  7 and we re going     94
##  8 s going to be       90
##  9 re going to have    88
## 10 i don t want        87
## # ... with 145,717 more rows

Splitting-up texts

We can use the str_split function to split texts. However, there are two issues when using this (very useful) function:

  • the pattern that we want to split on disappears

  • the output is a list (a special type of data format)

To remedy these issues, we

  • combine the str_split function with the unlist function

  • add something right at the beginning of the pattern that we use to split the text. To add something to the beginning of the pattern that we want to split the text by, we use the str_replace_all function. The str_replace_all function takes three arguments, 1. the text, 2. the pattern that should be replaced, 3. the replacement. In the example below, we add ~~~ to the sequence SPEECH and then split on the ~~~ rather than on the sequence “SPEECH” (in other words, we replace SPEECH with ~~~SPEECH and then split on ~~~).

Trump_split <- unlist(str_split(
  str_replace_all(Trump, "SPEECH", "~~~SPEECH"),
  pattern = "~~~"))
# inspect data
nchar(Trump_split)#; str(Trump_split)
##  [1]      1  21181  26671   2709   5956   1313  32494  28246 421128   1397
## [11] 308390  43361

Cleaning texts

When working with texts, we usually need to clean the data. Below, we do some very basic cleaning using a pipeline.

Trump_split_clean <- Trump_split %>%
  # replace elements
  str_replace_all(fixed("\n"), " ") %>%
  # remove strange symbols
  str_replace_all("[^[:alnum:][:punct:]]+", " ") %>%
  # combine contractions
  str_replace_all(" re ", "'re ") %>%
  str_replace_all(" ll ", "'ll ") %>%
  str_replace_all(" d ", "'d ") %>%
  str_replace_all(" m ", "'m ") %>%
  str_replace_all(" s ", "'s ") %>%
  str_replace_all("n t ", "n't ") %>%
  # remove \"
  str_remove_all("\"") %>%
  # remove superfluous white spaces
  str_squish()
# remove very short elements
Trump_split_clean <- Trump_split_clean[nchar(Trump_split_clean) > 5]
# inspect data
nchar(Trump_split_clean)
##  [1]  20845  26625   2680   5944   1312  32253  28097 418178   1384 306084
## [11]  43281

Inspect text

Trump_split_clean[5]

Concordancing and KWICs

Creating concordances or key-word-in-context displays is one of the most common practices when dealing with text data. Fortunately, there exist ready-made functions that make this a very easy task in R. We will use the kwic function from the quanteda package to create kwics here.

kwic_multiple <- as.data.frame(
  kwic(Trump_split_clean, 
       pattern = phrase("great again"),
       window = 3, 
       valuetype = "regex"))
# inspect data
head(kwic_multiple)
##   docname from   to               pre     keyword            post     pattern
## 1   text1 3041 3042  make our country great again       . We have great again
## 2   text1 4464 4465   to make America great again        . We can great again
## 3   text1 4472 4473 make this country great again . The potential great again
## 4   text2 5242 5243 will make America great again        . And if great again
## 5   text4  620  621   to make America great again       , folks , great again
## 6   text4  635  636   to make America great again   . And another great again

We can now also select concordances based on specific features. For example, we may only want those instances of “great again” where the preceding word is “america”. Note that str_detect is case-sensitive: the lowercase pattern below does not match the capitalized “America” in the concordances, which is why the result is empty.

kwic_multiple_select <- kwic_multiple %>%
  # last element before search term is "america"
  filter(str_detect(pre, "america$"))
# inspect data
head(kwic_multiple_select)
## [1] docname from    to      pre     keyword post    pattern
## <0 rows> (or 0-length row.names)

Again, we can use the write.table function to save our kwics to disc.

write.table(kwic_multiple_select, "data/kwic_multiple_select.txt", sep = "\t")

As most of the data that we use is on our computers (rather than somewhere on the web), we now load files with text from your computer. It is important to note that you need to use \\ when specifying file paths on a Windows PC (rather than a single \).

To load many files, we first create a list of all files in the directory that we want to load data from and then use the sapply function (which works like a loop). The sapply function takes a vector of elements and performs a sequence of steps on each of these elements. In the example below, we feed the file locations to the sapply function, scan each text (i.e. read it into R), and then paste all the content of each file together.
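Before applying this to real files, here is a minimal sketch of how sapply loops over a vector, using two made-up strings in place of file contents:

```r
# stand-ins for the contents of two files
texts <- c("a short text", "another somewhat longer text")
# apply the same steps to each element: split on spaces and count the pieces
nwords <- sapply(texts, function(x) {
  length(unlist(strsplit(x, " ")))
})
unname(nwords)
## [1] 3 4
```

Just as in the real example below, sapply returns one result per input element (and names the results after the inputs, which is why we use unname here for display).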

NOTE: you may have to change the path to the data!

files <- list.files("data\\ICEIrelandSample",
                    pattern = ".txt", full.names = T)
ICE_Ire_sample <- sapply(files, function(x) {
  x <- scan(x, what = "char")
  x <- paste(x, sep = " ", collapse = " ")
  })
# inspect data
str(ICE_Ire_sample)
##  Named chr [1:20] "<S1A-001 Riding> <I> <S1A-001$A> <#> Well how did the riding go tonight <S1A-001$B> <#> It was good so it was <"| __truncated__ ...
##  - attr(*, "names")= chr [1:20] "data\\ICEIrelandSample/S1A-001.txt" "data\\ICEIrelandSample/S1A-002.txt" "data\\ICEIrelandSample/S1A-003.txt" "data\\ICEIrelandSample/S1A-004.txt" ...

The texts are stored as a named vector. We can clean these names by removing everything up to and including the / as well as the .txt ending.

names(ICE_Ire_sample) <- names(ICE_Ire_sample) %>%
  str_remove_all(".*/") %>%
  str_remove_all(".txt")
# inspect
names(ICE_Ire_sample)
##  [1] "S1A-001" "S1A-002" "S1A-003" "S1A-004" "S1A-005" "S1A-006" "S1A-007"
##  [8] "S1A-008" "S1A-009" "S1A-010" "S1A-011" "S1A-012" "S1A-013" "S1A-014"
## [15] "S1A-015" "S1A-016" "S1A-017" "S1A-018" "S1A-019" "S1A-020"

Further splitting of texts

To split the texts into speech units where each speech unit begins with the speaker that has uttered it, we again use the sapply function.

ICE_Ire_split <- as.vector(unlist(sapply(ICE_Ire_sample, function(x){
 x <- as.vector(str_split(str_replace_all(x, "(<S1A-)", "~~~\\1"), "~~~"))  
})))
# inspect
head(ICE_Ire_split)
## [1] ""                                                                                                                                                                                    
## [2] "<S1A-001 Riding> <I> "                                                                                                                                                               
## [3] "<S1A-001$A> <#> Well how did the riding go tonight "                                                                                                                                 
## [4] "<S1A-001$B> <#> It was good so it was <#> Just I I couldn't believe that she was going to let me jump <,> that was only the fourth time you know <#> It was great <&> laughter </&> "
## [5] "<S1A-001$A> <#> What did you call your horse "                                                                                                                                       
## [6] "<S1A-001$B> <#> I can't remember <#> Oh Mary s Town <,> oh\n"

Basics of regular expressions

Next, we extract the File and the Speaker and combine Text, File, and Speaker in a table.

We use this to show the power of regular expressions (to learn more about regular expressions, have a look at this very recommendable tutorial). Regular expressions are symbols or sequences of symbols that stand for

  • symbols or patterns (e.g. [a-z] stands for any lowercase character)
  • the frequency of symbols or patterns (e.g. {1,3} stands for between 1 and 3 occurrences of the preceding pattern)
  • classes of symbols (e.g. [:punct:] stands for any punctuation symbol)
  • structural properties (e.g. [^[:blank:]] stands for any non-space character, \t stands for tab-stop and \n stands for a line break)

We cannot go into any detail here and only touch upon the power of regular expressions.

The symbol . is one of the most powerful and most universal regular expressions as it represents (literally) any symbol or character and it thus stands for a pattern. The * is a regular expression that refers to the frequency of a pattern and it stands for 0 to an infinite number of instances. Thus, .* stands for 0 to an infinite number of any character. You can find an overview of the regular expressions that you can use in R here.

Also, if you put patterns in round brackets, R will remember the sequence within brackets and you can paste it back into a string from memory when you replace something.

When referring to symbols that are used as regular expressions, such as $ or ., you need to inform R that you actually mean the literal symbol and not the regular expression, and you do that by typing \\ before the symbol in question. Have a look at the example below and try to see what the regular expressions (.*(S1A-[0-9]{3,3}).*, \n, and .*\\$([A-Z]{1,2}\\?{0,1})>.*) stand for.
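As a minimal illustration of capture groups and escaping before the full example, consider the following sketch, which uses base R's sub (a single-replacement counterpart of str_replace_all) on a made-up speaker id:

```r
x <- "<S1A-001$A> some text"
# capture the file id (S1A- plus three digits) and paste it back with \\1
sub(".*(S1A-[0-9]{3}).*", "\\1", x)
## [1] "S1A-001"
# capture the speaker label that follows the literal $ (escaped as \\$)
sub(".*\\$([A-Z]{1,2}).*", "\\1", x)
## [1] "A"
```

The round brackets store what they matched, and \\1 pastes that stored sequence back in as the replacement, discarding everything the surrounding .* consumed.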

ICE_Ire_split_tb <- ICE_Ire_split %>%
  as.data.frame()
# add columnn names
colnames(ICE_Ire_split_tb)[1] <- "Text"
# add file and speaker
ICE_Ire_split_tb <- ICE_Ire_split_tb %>%
  filter(!str_detect(Text, "<I>"),
         Text != "") %>%
  mutate(File = str_replace_all(Text, ".*(S1A-[0-9]{3,3}).*", "\\1"),
         File = str_remove_all(File, "\\\n"),
         Speaker = str_replace_all(Text, ".*\\$([A-Z]{1,2}\\?{0,1})>.*", "\\1"),
         Speaker = str_remove_all(Speaker, "\\\n"))
# inspect
head(ICE_Ire_split_tb)
##                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       Text
## 1                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      <S1A-001$A> <#> Well how did the riding go tonight 
## 2                                                                                                                                                                                                                                                                                                                                                                                                                                     <S1A-001$B> <#> It was good so it was <#> Just I I couldn't believe that she was going to let me jump <,> that was only the fourth time you know <#> It was great <&> laughter </&> 
## 3                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            <S1A-001$A> <#> What did you call your horse 
## 4                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             <S1A-001$B> <#> I can't remember <#> Oh Mary s Town <,> oh\n
## 5                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   <S1A-001$A> <#> And how did Mabel do\n
## 6 <S1A-001$B> <#> Did you not see her whenever she was going over the jumps <#> There was one time her horse refused and it refused three times <#> And then <,> she got it round and she just lined it up straight and she just kicked it and she hit it with the whip <,> and over it went the last time you know <#> And Stephanie told her she was very determined and very well-ridden <&> laughter </&> because it had refused the other times you know <#> But Stephanie wouldn t let her give up on it <#> She made her keep coming back and keep coming back <,> until <,> it jumped it you know <#> It was good 
##      File Speaker
## 1 S1A-001       A
## 2 S1A-001       B
## 3 S1A-001       A
## 4 S1A-001       B
## 5 S1A-001       A
## 6 S1A-001       B

Combining tables

We often want to combine different tables. This is very easy in R and we will show how it can be done by combining our bio data about speakers that are represented in the ICE Ireland corpus with the texts themselves so that we get a table which holds both the text as well as the speaker information.

Thus, we now join the text data with the bio data by using the left_join function, matching rows based on the contents of the File and the Speaker columns. In contrast to right_join and full_join, left_join retains all rows of the left table and drops all rows from the right table that have no match in the left table (and vice versa for right_join). full_join, in contrast, retains all rows from both the left and the right table.
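The difference between the join types can be sketched with base R's merge function and two made-up toy tables (all.x = TRUE corresponds to left_join, all = TRUE to full_join):

```r
# toy tables with made-up values
texts <- data.frame(File = c("S1A-001", "S1A-002"),
                    Text = c("hello", "goodbye"))
bio   <- data.frame(File = c("S1A-001", "S1A-099"),
                    Sex  = c("female", "male"))
# left join: keeps both rows of texts; the unmatched bio row (S1A-099)
# is dropped and S1A-002 gets NA for Sex
merge(texts, bio, by = "File", all.x = TRUE)
# full join: keeps all three files, filling in NA where a table has no match
merge(texts, bio, by = "File", all = TRUE)
```

Note that this is only an illustration of the logic; in the tutorial itself we stay with the dplyr join functions.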

ICE_Ire <- left_join(ICE_Ire_split_tb, ICE_Ire_bio_edit, by = c("File", "Speaker"))
# inspect
head(ICE_Ire)
##                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       Text
## 1                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      <S1A-001$A> <#> Well how did the riding go tonight 
## 2                                                                                                                                                                                                                                                                                                                                                                                                                                     <S1A-001$B> <#> It was good so it was <#> Just I I couldn't believe that she was going to let me jump <,> that was only the fourth time you know <#> It was great <&> laughter </&> 
## 3                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            <S1A-001$A> <#> What did you call your horse 
## 4                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             <S1A-001$B> <#> I can't remember <#> Oh Mary s Town <,> oh\n
## 5                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   <S1A-001$A> <#> And how did Mabel do\n
## 6 <S1A-001$B> <#> Did you not see her whenever she was going over the jumps <#> There was one time her horse refused and it refused three times <#> And then <,> she got it round and she just lined it up straight and she just kicked it and she hit it with the whip <,> and over it went the last time you know <#> And Stephanie told her she was very determined and very well-ridden <&> laughter </&> because it had refused the other times you know <#> But Stephanie wouldn t let her give up on it <#> She made her keep coming back and keep coming back <,> until <,> it jumped it you know <#> It was good 
##      File Speaker Id Filespeakerid             Zone      Date    Sex   Age
## 1 S1A-001       A  1   <S1A-001$A> northern ireland 1990-1994   male 34-41
## 2 S1A-001       B  2   <S1A-001$B> northern ireland 1990-1994 female 34-41
## 3 S1A-001       A  1   <S1A-001$A> northern ireland 1990-1994   male 34-41
## 4 S1A-001       B  2   <S1A-001$B> northern ireland 1990-1994 female 34-41
## 5 S1A-001       A  1   <S1A-001$A> northern ireland 1990-1994   male 34-41
## 6 S1A-001       B  2   <S1A-001$B> northern ireland 1990-1994 female 34-41
##   Word.count
## 1        765
## 2       1298
## 3        765
## 4       1298
## 5        765
## 6       1298

You can then perform concordancing on the Text column in the table.

kwic_iceire <- as.data.frame(
  kwic(ICE_Ire$Text, 
       pattern = phrase("Irish"),
       window = 5, 
       valuetype = "regex"))
# inspect data
head(kwic_iceire)
##    docname from to                     pre keyword                       post
## 1 text1430   37 37 Ireland you know it was   Irish    bacon and it was lovely
## 2 text1760   62 62        / & > being good   Irish Catholics we always had to
## 3 text1784   13 13      > We should do the   Irish                < . > ver <
## 4 text1784   23 23             < / . > the   Irish    version of the Matrix <
##   pattern
## 1   Irish
## 2   Irish
## 3   Irish
## 4   Irish

Tokenization and counting words

We will now use the tokenize_words function from the tokenizers package to find out how many words are in each file. Before we count the words, however, we will clean the data by removing everything between pointy brackets (e.g. <#>) as well as all punctuation.

words <- as.vector(sapply(Trump_split_clean, function(x){
  x <- removeNumbers(x)
  x <- removePunctuation(x)
  x <- unlist(tokenize_words(x))
  x <- length(x)}))
words
##  [1]  3847  4659   522  1125   241  5931  5345 77964   273 57756  7849

The nice thing about the tokenizers package is that it also allows us to split texts into sentences. To show this, we return to the rally speeches by Donald Trump and split one of them (speech 6) into sentences.

Sentences <- unlist(tokenize_sentences(Trump_split_clean[6]))
# inspect
head(Sentences)
## [1] "SPEECH 6 Thank you."                                                                            
## [2] "It's true, and these are the best and the finest."                                              
## [3] "When Mexico sends its people, they're not sending their best."                                  
## [4] "They're not sending you."                                                                       
## [5] "They're not sending you."                                                                       
## [6] "They're sending people that have lots of problems, and they're bringing those problems with us."

We now want to find associations between words. To do this, we convert all characters to lower case, remove (some) non-lexical words (also called stop words), remove punctuation and superfluous white spaces, and then create a document-term matrix (DTM), which shows how often each word occurs in each of the sentences (in this case, the sentences are treated as documents).

Once we have a DTM, we can then use the findAssocs function to see which words associate most strongly with target words that we want to investigate. The corlimit argument sets the minimum correlation that a term must have with the target word in order to be reported.

# clean sentences
Sentences <- Sentences %>%
  # convert to lowercase
  tolower() %>%
  # remove stop words
  removeWords(stopwords("english")) %>%
  # remove punctuation
  removePunctuation() %>%
  # remove numbers
  removeNumbers() %>%
  # remove superfluous white spaces
  str_squish()
# create DTM
DTM <- DocumentTermMatrix(VCorpus(VectorSource(Sentences)))
findAssocs(DTM, c("problems", "america"), corlimit = c(.5, .5))
## $problems
## russia 
##   0.63 
## 
## $america
## americas   avenue     bank 
##     0.71     0.57     0.57

We now turn to data visualization basics.

Working with figures

There are numerous functions in R that we can use to visualize data. We will use the ggplot function from the ggplot2 package here to visualize the data.

The ggplot2 package was developed by Hadley Wickham in 2005 and it implements the graphics scheme described in the book The Grammar of Graphics by Leland Wilkinson.

The idea behind the Grammar of Graphics can be boiled down to 5 bullet points (see Wickham 2016: 4):

  • a statistical graphic is a mapping from data to aesthetic attributes (location, color, shape, size) of geometric objects (points, lines, bars).

  • the geometric objects are drawn in a specific coordinate system.

  • scales control the mapping from data to aesthetics and provide tools to read the plot (i.e., axes and legends).

  • the plot may also contain statistical transformations of the data (means, medians, bins of data, trend lines).

  • faceting can be used to generate the same plot for different subsets of the data.

Basics of ggplot2 syntax

Specify data, aesthetics and geometric shapes

ggplot(data, aes(x=, y=, color=, shape=, size=)) +
geom_point(), or geom_histogram(), or geom_boxplot(), etc.

  • This combination is very effective for exploratory graphs.

  • The data must be a data frame.

  • The aes() function maps columns of the data frame to aesthetic properties of geometric shapes to be plotted.

  • ggplot() defines the plot; the geoms show the data; each component is added with +

  • Some examples should make this clear

Practical examples

We will now create some basic visualizations or plots.

Before we start plotting, we will create data that we want to plot. In this case, we will extract the mean word counts by gender and age.

plotdata <- ICE_Ire %>%
  # only private dialogue
  filter(str_detect(File, "S1A"),
         # without speaker younger than 19
         Age != "0-18",
         Age != "NA") %>%
  group_by(Sex, Age) %>%
  summarise(Words = mean(Word.count))
## `summarise()` regrouping output by 'Sex' (override with `.groups` argument)
# inspect
head(plotdata)
## # A tibble: 6 x 3
## # Groups:   Sex [2]
##   Sex    Age   Words
##   <chr>  <chr> <dbl>
## 1 female 19-25  608.
## 2 female 26-33  461.
## 3 female 34-41  691.
## 4 female 50+    648.
## 5 male   19-25  628.
## 6 male   26-33 1075

In the example below, we specify that we want to visualize the plotdata and that the x-axis should represent Age and the y-axis Words (the mean frequency of words). We also tell R that we want to group the data by Sex (i.e. that we want to distinguish between men and women). Then, we add geom_line, which tells R that we want a line graph. The result is shown below.

ggplot(plotdata, aes(x = Age, y = Words, color = Sex, group = Sex)) +
  geom_line()

Once you have a basic plot like the one above, you can prettify the plot. For example, you can

  • change the width of the lines (size = 1.25)

  • change the y-axis limits (coord_cartesian(ylim = c(0, 1000)))

  • use a different theme (theme_bw() means black and white theme)

  • move the legend to the top

  • change the default colors to colors you like (scale_color_manual ...)

  • change the linetype (scale_linetype_manual ...)

ggplot(plotdata, aes(x = Age, y = Words,
                     color = Sex, 
                     group = Sex, 
                     linetype = Sex)) +
  geom_line(size = 1.25) +
  coord_cartesian(ylim = c(0, 1500)) +
  theme_bw() + 
  theme(legend.position = "top") + 
  scale_color_manual(breaks = c("female", "male"),
                     values = c("gray20", "gray50")) +
  scale_linetype_manual(breaks = c("female", "male"),
                        values = c("solid", "dotted"))

An additional and very handy feature of this way of producing graphs is that you

  • can integrate them into pipes

  • can easily combine plots.

ICE_Ire %>%
  filter(Sex != "NA",   # remove literal "NA" strings
         Age != "NA",
         !is.na(Sex),   # remove genuine missing values
         !is.na(Age)) %>%
  mutate(Age = factor(Age),
         Sex = factor(Sex)) %>%
  ggplot(aes(x = Age, 
             y = Word.count,
             color = Sex,
             linetype = Sex)) +
  geom_point() +
  stat_summary(fun = mean, geom = "line", aes(group = Sex)) +
  coord_cartesian(ylim = c(0, 2000)) +
  theme_bw() + 
  theme(legend.position = "top") + 
  scale_color_manual(breaks = c("female", "male"),
                     values = c("indianred", "darkblue")) +
  scale_linetype_manual(breaks = c("female", "male"),
                        values = c("solid", "dotted"))
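Because ggplot returns ordinary R objects, you can store plots in variables and then arrange them side by side. A minimal sketch of combining plots, assuming the gridExtra package is installed (it is not among the packages loaded in the session shown here):

```r
# install.packages("gridExtra")  # if not yet installed
library(gridExtra)
library(ggplot2)

# store two plots of the summarized data in objects
p1 <- ggplot(plotdata, aes(x = Age, y = Words, color = Sex, group = Sex)) +
  geom_line()
p2 <- ggplot(plotdata, aes(x = Age, y = Words, fill = Sex)) +
  geom_col(position = "dodge")

# display both plots next to each other in one row
grid.arrange(p1, p2, ncol = 2)
```

Alternatives such as the patchwork package (which lets you write p1 + p2) work in much the same way.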

You can also create different types of graphs very easily and split them into different facets.

ICE_Ire %>%
  drop_na() %>%
  filter(Age != "NA") %>%
  mutate(Date = factor(Date)) %>%
  ggplot(aes(x = Age, 
             y = Word.count, 
             fill = Sex)) +
  facet_grid(vars(Date)) +
  geom_boxplot() +
  coord_cartesian(ylim = c(0, 2000)) +
  theme_bw() + 
  theme(legend.position = "top") + 
  scale_fill_manual(breaks = c("female", "male"),
                     values = c("#E69F00", "#56B4E9"))

Exercise

  1. Create a box plot with Date on the x-axis and the number of words uttered by speakers on the y-axis, grouped by Sex.

  2. Create a scatter plot with Date on the x-axis and the number of words uttered by speakers on the y-axis, with separate facets for Sex.

Advanced

  1. Create a bar plot showing the number of men and women by Date.

Ending R sessions

At the end of each session, you can extract information about the session itself (e.g. which R version and which package versions you used). This can help others (or your future self) reproduce your analysis.

Extracting session information

You can extract the session information by calling the sessionInfo function (without any arguments).

sessionInfo()
## R version 4.0.2 (2020-06-22)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 18362)
## 
## Matrix products: default
## 
## locale:
## [1] LC_COLLATE=German_Germany.1252  LC_CTYPE=German_Germany.1252   
## [3] LC_MONETARY=German_Germany.1252 LC_NUMERIC=C                   
## [5] LC_TIME=German_Germany.1252    
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] tokenizers_0.2.1 tm_0.7-7         NLP_0.2-0        readxl_1.3.1    
##  [5] quanteda_2.1.1   tidytext_0.2.6   forcats_0.5.0    stringr_1.4.0   
##  [9] dplyr_1.0.2      purrr_0.3.4      readr_1.3.1      tidyr_1.1.2     
## [13] tibble_3.0.3     ggplot2_3.3.2    tidyverse_1.3.0 
## 
## loaded via a namespace (and not attached):
##  [1] Rcpp_1.0.5         lubridate_1.7.9    lattice_0.20-41    utf8_1.1.4        
##  [5] assertthat_0.2.1   digest_0.6.25      slam_0.1-47        R6_2.4.1          
##  [9] cellranger_1.1.0   backports_1.1.10   reprex_0.3.0       evaluate_0.14     
## [13] httr_1.4.2         pillar_1.4.6       rlang_0.4.7        curl_4.3          
## [17] rstudioapi_0.11    data.table_1.13.0  blob_1.2.1         Matrix_1.2-18     
## [21] rmarkdown_2.3      labeling_0.3       munsell_0.5.0      broom_0.7.0       
## [25] compiler_4.0.2     janeaustenr_0.1.5  modelr_0.1.8       xfun_0.16         
## [29] pkgconfig_2.0.3    htmltools_0.5.0    tidyselect_1.1.0   fansi_0.4.1       
## [33] crayon_1.3.4       dbplyr_1.4.4       withr_2.3.0        SnowballC_0.7.0   
## [37] grid_4.0.2         jsonlite_1.7.1     gtable_0.3.0       lifecycle_0.2.0   
## [41] DBI_1.1.0          magrittr_1.5       scales_1.1.1       RcppParallel_5.0.2
## [45] cli_2.0.2          stringi_1.5.3      farver_2.0.3       fs_1.5.0          
## [49] xml2_1.3.2         ellipsis_0.3.1     stopwords_2.0      generics_0.0.2    
## [53] vctrs_0.3.4        fastmatch_1.1-0    tools_4.0.2        glue_1.4.2        
## [57] hms_0.5.3          parallel_4.0.2     yaml_2.2.1         colorspace_1.4-1  
## [61] rvest_0.3.6        knitr_1.30         haven_2.3.1        usethis_1.6.3

Going further

If you want to know more, would like to get some more practice, or would like to have another approach to R, please check out the workshops and resources on R provided by the UQ library. In addition, there are various online resources available to learn R (you can check out a very recommendable introduction here).


Citation & Session Info

Schweinberger, Martin. 2020. Getting started with R - for (absolute) beginners. Brisbane: The University of Queensland. url: https://slcladal.github.io/introquant.html.

sessionInfo()
## R version 4.0.2 (2020-06-22)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 18362)
## 
## Matrix products: default
## 
## locale:
## [1] LC_COLLATE=German_Germany.1252  LC_CTYPE=German_Germany.1252   
## [3] LC_MONETARY=German_Germany.1252 LC_NUMERIC=C                   
## [5] LC_TIME=German_Germany.1252    
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] tokenizers_0.2.1 tm_0.7-7         NLP_0.2-0        readxl_1.3.1    
##  [5] quanteda_2.1.1   tidytext_0.2.6   forcats_0.5.0    stringr_1.4.0   
##  [9] dplyr_1.0.2      purrr_0.3.4      readr_1.3.1      tidyr_1.1.2     
## [13] tibble_3.0.3     ggplot2_3.3.2    tidyverse_1.3.0 
## 
## loaded via a namespace (and not attached):
##  [1] Rcpp_1.0.5         lubridate_1.7.9    lattice_0.20-41    utf8_1.1.4        
##  [5] assertthat_0.2.1   digest_0.6.25      slam_0.1-47        R6_2.4.1          
##  [9] cellranger_1.1.0   backports_1.1.10   reprex_0.3.0       evaluate_0.14     
## [13] httr_1.4.2         pillar_1.4.6       rlang_0.4.7        curl_4.3          
## [17] rstudioapi_0.11    data.table_1.13.0  blob_1.2.1         Matrix_1.2-18     
## [21] rmarkdown_2.3      labeling_0.3       munsell_0.5.0      broom_0.7.0       
## [25] compiler_4.0.2     janeaustenr_0.1.5  modelr_0.1.8       xfun_0.16         
## [29] pkgconfig_2.0.3    htmltools_0.5.0    tidyselect_1.1.0   fansi_0.4.1       
## [33] crayon_1.3.4       dbplyr_1.4.4       withr_2.3.0        SnowballC_0.7.0   
## [37] grid_4.0.2         jsonlite_1.7.1     gtable_0.3.0       lifecycle_0.2.0   
## [41] DBI_1.1.0          magrittr_1.5       scales_1.1.1       RcppParallel_5.0.2
## [45] cli_2.0.2          stringi_1.5.3      farver_2.0.3       fs_1.5.0          
## [49] xml2_1.3.2         ellipsis_0.3.1     stopwords_2.0      generics_0.0.2    
## [53] vctrs_0.3.4        fastmatch_1.1-0    tools_4.0.2        glue_1.4.2        
## [57] hms_0.5.3          parallel_4.0.2     yaml_2.2.1         colorspace_1.4-1  
## [61] rvest_0.3.6        knitr_1.30         haven_2.3.1        usethis_1.6.3



References

Gillespie, Colin, and Robin Lovelace. 2016. Efficient R Programming: A Practical Guide to Smarter Programming. O’Reilly Media.

Wickham, Hadley, and Garrett Grolemund. 2016. R for Data Science: Import, Tidy, Transform, Visualize, and Model Data. O’Reilly Media.