Introduction

This tutorial shows how to summarize texts automatically with R by extracting the most prototypical sentences of a text. The R Notebook for this tutorial can be downloaded here. If you want to render the R Markdown notebook on your machine, i.e. knit the document to an HTML or PDF file, make sure that you have R installed, and download the bibliography file and store it in the same folder as the Rmd file.

Preparation and session set up

This tutorial is based on R. If you have not installed R or are new to it, you will find an introduction to R and more information on how to use it here. For this tutorial, we need to install certain packages so that the scripts shown below run without errors. Before turning to the code, please install the packages by running the code below this paragraph. If you have already installed the packages mentioned below, you can skip ahead and ignore this section. To install the necessary packages, simply run the following code - it may take some time (between 1 and 5 minutes to install all of the libraries, so do not worry if it takes a while).

# set options
options(stringsAsFactors = F)          # no automatic data transformation
options("scipen" = 100, "digits" = 12) # suppress scientific notation
# install packages
install.packages("xml2")
install.packages("rvest")
install.packages("lexRankr")
install.packages("textmineR")
install.packages("tidyverse")
install.packages("quanteda")
install.packages("igraph")
install.packages("here")
# install klippy for copy-to-clipboard button in code chunks
install.packages("remotes")
remotes::install_github("rlesur/klippy")

Next we activate the packages.

# activate packages
library(xml2)
library(rvest)
library(lexRankr)
library(textmineR)
library(tidyverse)
library(quanteda)
library(igraph)
library(here)
# activate klippy for copy-to-clipboard button
klippy::klippy()

Once you have installed R and RStudio and initiated the session by executing the code shown above, you are good to go.

Basic Text summarization

This section shows an easy-to-use text-summarization method that extracts the most prototypical sentences from a text. This summarizer does not generate new sentences from prototypical words; instead, it evaluates how prototypical (or central) each sentence is and ranks the sentences in a text according to their prototypicality (or centrality).
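The idea behind this kind of centrality ranking (LexRank) can be illustrated in a few lines of base R: sentences become nodes in a graph whose edge weights are the cosine similarities between their bag-of-words vectors, and a PageRank-style power iteration determines how central each sentence is. The sketch below is a simplification for illustration only - the toy sentences are made up, and the lexRankr package we use later implements a more elaborate version of this procedure (e.g. with tf-idf weighting).

```r
# toy sentences: the first two are similar, the third is unrelated
sentences <- c("the cat sat on the mat",
               "the cat lay on the rug",
               "stock prices fell sharply today")
# bag-of-words term counts per sentence
tokens <- strsplit(tolower(sentences), "\\s+")
vocab  <- unique(unlist(tokens))
tm     <- sapply(tokens, function(x) table(factor(x, levels = vocab)))
# cosine similarity between sentence vectors
norms <- sqrt(colSums(tm^2))
sim   <- crossprod(tm) / outer(norms, norms)
diag(sim) <- 0
# row-normalise similarities to get a transition matrix
P <- sim / pmax(rowSums(sim), 1e-12)
# PageRank-style power iteration
d <- 0.85                       # damping factor
r <- rep(1 / nrow(P), nrow(P))  # uniform initial scores
for (i in 1:100) r <- (1 - d) / nrow(P) + d * as.vector(t(P) %*% r)
# the sentence with the highest score is the most central one
sentences[which.max(r)]
```

Sentences that are similar to many other sentences accumulate high scores, while the unrelated third sentence ends up with the lowest score.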

For this example, we will download the text of a Guardian article about a meeting between Angela Merkel and Donald Trump at the G20 summit in 2017. In a first step, we define the URL of the webpage hosting the article.

# url to scrape
url = "https://www.theguardian.com/world/2017/jun/26/angela-merkel-and-donald-trump-head-for-clash-at-g20-summit"

Next, we extract the text of the article using the `xml2` and `rvest` packages.

# read page html
page = xml2::read_html(url)
# extract text from page html using selector
page %>%
  # extract paragraphs
  rvest::html_nodes("p") %>%
  # extract text
  rvest::html_text() %>%
  # remove empty elements
  .[. != ""] -> text
# inspect data
head(text)
## [1] "German chancellor plans to make climate change, free trade and mass migration key themes in Hamburg, putting her on collision course with US"                                                                                                        
## [2] "Last modified on Wed 25 Aug 2021 23.54 AEST"                                                                                                                                                                                                         
## [3] "A clash between Angela Merkel and Donald Trump appears unavoidable after Germany signalled that it will make climate change, free trade and the management of forced mass global migration the key themes of the G20 summit in Hamburg next week."   
## [4] "The G20 summit brings together the world’s biggest economies, representing 85% of global gross domestic product (GDP), and Merkel’s chosen agenda looks likely to maximise American isolation while attempting to minimise disunity amongst others. "
## [5] "The meeting, which is set to be the scene of large-scale street protests, will also mark the first meeting between Trump and the Russian president, Vladimir Putin, as world leaders."                                                               
## [6] "Trump has already rowed with Europe once over climate change and refugees at the G7 summit in Italy, and now looks set to repeat the experience in Hamburg but on a bigger stage, as India and China join in the criticism of Washington. "
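Scraping a live webpage can fail if the article moves, the site blocks the request, or the page structure changes. A defensive wrapper like the sketch below (`safe_scrape` is a hypothetical helper written for this tutorial, not part of rvest) returns an empty character vector instead of aborting the script:

```r
# defensive scraping: return character(0) with a warning instead of an
# error if the page cannot be read (hypothetical helper; adapt the CSS
# selector to the page you are scraping)
safe_scrape <- function(url, selector = "p") {
  tryCatch({
    page <- xml2::read_html(url)
    txt  <- rvest::html_text(rvest::html_nodes(page, selector))
    # remove empty elements
    txt[txt != ""]
  },
  error = function(e) {
    warning("Could not scrape ", url, ": ", conditionMessage(e))
    character(0)
  })
}
```

With such a wrapper, you can check `length(text) > 0` before passing the result on to the summarizer.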

Now that we have the text, we apply the lexRank function from the lexRankr package to determine the prototypicality (or centrality) and extract the three most central sentences.

# perform lexrank for top 3 sentences
top3sentences = lexRankr::lexRank(text,
                          # only 1 article; repeat same docid for all of input vector
                          docId = rep(1, length(text)),
                          # return 3 sentences
                          n = 3,
                          continuous = TRUE)
## Parsing text into sentences and tokens...DONE
## Calculating pairwise sentence similarities...DONE
## Applying LexRank...DONE
## Formatting Output...DONE
# inspect
top3sentences

Next, we extract and display the sentences from the table.

top3sentences$sentence
## [1] "A clash between Angela Merkel and Donald Trump appears unavoidable after Germany signalled that it will make climate change, free trade and the management of forced mass global migration the key themes of the G20 summit in Hamburg next week."            
## [2] "Trump has already rowed with Europe once over climate change and refugees at the G7 summit in Italy, and now looks set to repeat the experience in Hamburg but on a bigger stage, as India and China join in the criticism of Washington."                    
## [3] "But the G7, and Trump’s subsequent decision to shun the Paris climate change treaty, clearly left a permanent mark on her, leading to her famous declaration of independence four days later at a Christian Social Union (CSU) rally in a Bavarian beer tent."

The output shows the three most prototypical (or central) sentences of the article. The sentences happen to be in chronological order already - if they were not, we could order them by sentenceId before displaying them, using dplyr and stringr package functions as shown below (in our case the order does not change, as the ranking by prototypicality coincides with the chronological order).

top3sentences %>%
  dplyr::mutate(sentenceId = as.numeric(stringr::str_remove_all(sentenceId, ".*_"))) %>%
  dplyr::arrange(sentenceId) %>%
  dplyr::pull(sentence)
## [1] "A clash between Angela Merkel and Donald Trump appears unavoidable after Germany signalled that it will make climate change, free trade and the management of forced mass global migration the key themes of the G20 summit in Hamburg next week."            
## [2] "Trump has already rowed with Europe once over climate change and refugees at the G7 summit in Italy, and now looks set to repeat the experience in Hamburg but on a bigger stage, as India and China join in the criticism of Washington."                    
## [3] "But the G7, and Trump’s subsequent decision to shun the Paris climate change treaty, clearly left a permanent mark on her, leading to her famous declaration of independence four days later at a Christian Social Union (CSU) rally in a Bavarian beer tent."

EXERCISE TIME!


  1. Extract the top 10 sentences from every chapter of Charles Darwin’s On the Origin of Species. You can download the text using this command: darwin <- base::readRDS(url("https://slcladal.github.io/data/origindarwin.rda", "rb")). You will then have to paste the whole text together, split it into chapters, create a list of sentences in each chapter, and then apply text summarization to each element in the list.

Answer

darwin <- base::readRDS(url("https://slcladal.github.io/data/origindarwin.rda", "rb")) %>%
  # collapse into a single document
  paste0(collapse = " ") %>%
  # split into chapters
  stringr::str_split("CHAPTER")

# split chapters into sentences
chapters <- sapply(darwin, function(x){
  stringi::stri_split_boundaries(x, type = "sentence")
})

# remove chapter headings
chapters_clean <- lapply(chapters, function(x){
  stringr::str_remove_all(x, "[A-Z]{2,} {0,1}[0-9]{0,}")
})

# extract the top sentences from each chapter
top3s <- lapply(chapters_clean, function(x){
  lexRankr::lexRank(x,
                    # return 3 sentences
                    n = 3,
                    continuous = TRUE) %>%
    dplyr::pull(sentence) %>%
    # remove special characters
    stringr::str_remove_all("[^[:alnum:] ]") %>%
    # remove superfluous white spaces
    stringr::str_squish()
})

# inspect the top sentences of the first 5 chapters
top3s[1:5]



You can go ahead and play with the text summarization and see if it is useful for you or if you can trust the results based on your data.

Citation & Session Info

Schweinberger, Martin. 2021. Automated text summarization with R. Brisbane: The University of Queensland. url: https://slcladal.github.io/txtsum.html (Version 2021.12.17).

@manual{schweinberger2021txtsum,
  author = {Schweinberger, Martin},
  title = {Automated Text Summarization with R},
  note = {https://slcladal.github.io/txtsum.html},
  year = {2021},
  organization = {The University of Queensland, Australia. School of Languages and Cultures},
  address = {Brisbane},
  edition = {2021.12.17}
}
sessionInfo()
## R version 4.1.1 (2021-08-10)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 19043)
## 
## Matrix products: default
## 
## locale:
## [1] LC_COLLATE=German_Germany.1252  LC_CTYPE=German_Germany.1252   
## [3] LC_MONETARY=German_Germany.1252 LC_NUMERIC=C                   
## [5] LC_TIME=German_Germany.1252    
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] here_1.0.1      igraph_1.2.7    quanteda_3.1.0  forcats_0.5.1  
##  [5] stringr_1.4.0   dplyr_1.0.7     purrr_0.3.4     readr_2.1.1    
##  [9] tidyr_1.1.4     tibble_3.1.5    ggplot2_3.3.5   tidyverse_1.3.1
## [13] textmineR_3.0.5 Matrix_1.3-4    lexRankr_0.5.2  rvest_1.0.2    
## [17] xml2_1.3.2     
## 
## loaded via a namespace (and not attached):
##  [1] Rcpp_1.0.7         lubridate_1.8.0    lattice_0.20-44    rprojroot_2.0.2   
##  [5] assertthat_0.2.1   digest_0.6.28      utf8_1.2.2         R6_2.5.1          
##  [9] cellranger_1.1.0   backports_1.3.0    reprex_2.0.1.9000  evaluate_0.14     
## [13] httr_1.4.2         highr_0.9          pillar_1.6.4       rlang_0.4.12      
## [17] curl_4.3.2         readxl_1.3.1       rstudioapi_0.13    klippy_0.0.0.9500 
## [21] rmarkdown_2.5      selectr_0.4-2      RcppProgress_0.4.2 munsell_0.5.0     
## [25] broom_0.7.10       compiler_4.1.1     modelr_0.1.8       xfun_0.26         
## [29] pkgconfig_2.0.3    htmltools_0.5.2    tidyselect_1.1.1   fansi_0.5.0       
## [33] crayon_1.4.2       tzdb_0.2.0         dbplyr_2.1.1       withr_2.4.3       
## [37] SnowballC_0.7.0    grid_4.1.1         jsonlite_1.7.2     gtable_0.3.0      
## [41] lifecycle_1.0.1    DBI_1.1.1          magrittr_2.0.1     scales_1.1.1      
## [45] RcppParallel_5.1.4 cli_3.1.0          stringi_1.7.5      fs_1.5.0          
## [49] ellipsis_0.3.2     stopwords_2.3      generics_0.1.1     vctrs_0.3.8       
## [53] fastmatch_1.1-3    tools_4.1.1        glue_1.4.2         hms_1.1.1         
## [57] fastmap_1.1.0      yaml_2.2.1         colorspace_2.0-2   knitr_1.36        
## [61] haven_2.4.3
