--- title: "Web Crawling and Scraping using R" author: "Martin Schweinberger" date: "2022.11.15" output: bookdown::html_document2 bibliography: bibliography.bib link-citations: yes --- ```{r uq1, echo=F, fig.cap="", message=FALSE, warning=FALSE, out.width='100%'} knitr::include_graphics("https://slcladal.github.io/images/uq1.jpg") ``` # Introduction{-} This tutorial introduces web crawling and web scraping with R. Web crawling and web scraping are important and common procedures for collecting text data from social media sites, web pages, or other documents for later analysis. Regarding terminology, the automated download of HTML pages is called *crawling* while the extraction of the textual data and/or metadata (for example, article date, headlines, author names, article text) from the HTML source code (or the DOM document object model of the website) is called *scraping* [see @olston2010web]. ```{r diff, echo=FALSE, out.width= "15%", out.extra='style="float:right; padding:10px"'} knitr::include_graphics("https://slcladal.github.io/images/yr_chili.jpg") ``` This tutorial is aimed at intermediate and advanced users of R with the aim of showcasing how to crawl and scrape web data using R. The aim is not to provide a fully-fledged analysis but rather to show and exemplify selected useful methods associated with crawling and scraping web data.

The entire R Notebook for the tutorial can be downloaded [**here**](https://slcladal.github.io/content/webcrawling.Rmd). If you want to render the R Notebook on your machine, i.e. knit the document to HTML or PDF, you need to make sure that you have R and RStudio installed, and you also need to download the [**bibliography file**](https://slcladal.github.io/content/bibliography.bib) and store it in the same folder where you store the Rmd file.
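If you are unsure how to knit the document, one minimal way to render it from the R console is sketched below; this assumes that the Rmd file and the bibliography file are stored in your current working directory and that the `rmarkdown` package is installed.

```{r rendernb, eval = F, message=FALSE, warning=FALSE}
# minimal sketch: render the downloaded notebook to html
# (assumes webcrawling.Rmd and bibliography.bib are in the working directory)
rmarkdown::render("webcrawling.Rmd")
```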


This tutorial builds heavily on and uses materials from [this tutorial](https://tm4ss.github.io/docs/Tutorial_2_Web_crawling.html) on web crawling and scraping using R by Andreas Niekler and Gregor Wiedemann [see @WN17]. [The tutorial](https://tm4ss.github.io/docs/index.html) by Andreas Niekler and Gregor Wiedemann is more thorough, goes into more detail than this tutorial, and covers many more very useful text mining methods. An alternative approach for web crawling and scraping would be to use the `RCrawler` package [@khalil2017rcrawler], which is not introduced here (inspecting the `RCrawler` package and its functions is, however, also highly recommended). For a more in-depth introduction to web crawling and scraping, @miner2012practical is very useful.

## Preparation and session set up{-}

This tutorial is based on R. If you have not installed R or are new to it, you will find an introduction to R and more information on how to use it [here](https://slcladal.github.io/intror.html). For this tutorial, we need to install certain *packages* so that the scripts shown below are executed without errors. Before turning to the code below, please install the packages by running the code below this paragraph. If you have already installed the packages mentioned below, then you can skip ahead and ignore this section. To install the necessary packages, simply run the following code - it may take some time (between 1 and 5 minutes) to install all of the packages, so you do not need to worry if it takes a while.

```{r prep1, eval = F, message=FALSE, warning=FALSE}
# install packages
install.packages("rvest")
install.packages("readtext")
install.packages("webdriver")
install.packages("tidyverse")
install.packages("flextable")
webdriver::install_phantomjs()
# install klippy for copy-to-clipboard button in code chunks
install.packages("remotes")
remotes::install_github("rlesur/klippy")
```

If not done yet, please install the [phantomJS](https://phantomjs.org) headless browser. This needs to be done only once.

Now that we have installed the packages (and the [phantomJS](https://phantomjs.org) headless browser), we can activate them as shown below.

```{r prep2, message=FALSE, warning=F}
# load packages
library(tidyverse)
library(rvest)
library(readtext)
library(flextable)
library(webdriver)
# activate klippy for copy-to-clipboard button
klippy::klippy()
```

Once you have installed R and RStudio and once you have initiated the session by executing the code shown above, you are good to go.

# Scraping a single website{-}

For web crawling and scraping, we use the `rvest` package, and to extract text data from various formats such as PDF, DOC, DOCX, and TXT files, we use the `readtext` package. In a first exercise, we will download a single web page from *The Guardian* and extract text together with relevant metadata such as the article date. Let's define the URL of the article of interest and load the content using the `read_html` function from the `rvest` package, which provides very useful functions for web crawling and scraping.

```{r webc, warning=FALSE, message=FALSE}
# define url
url <- "https://www.theguardian.com/world/2017/jun/26/angela-merkel-and-donald-trump-head-for-clash-at-g20-summit"
# download content
webc <- rvest::read_html(url)
# inspect
webc
```

We download and parse the webpage using the `read_html` function, which accepts a URL as a parameter.
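Occasionally, downloading a page can fail, for instance when the machine is offline or the URL has changed. If you want your script to continue in such cases rather than stop with an error, you can wrap the download in `tryCatch`; the chunk below is a minimal sketch of this and is not required for the rest of the tutorial.

```{r webcsafe, eval = F, message=FALSE, warning=FALSE}
# minimal sketch: a failed download returns NULL instead of stopping the script
webc <- tryCatch(
  rvest::read_html(url),
  error = function(e) {
    message("Download failed: ", conditionMessage(e))
    NULL
  }
)
```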
The `read_html` function downloads the page and interprets the HTML source code as an HTML/XML object. However, the output contains a lot of information that we do not really need. Thus, we process the data to extract only the text from the webpage.

```{r webtxt, warning=FALSE, message=FALSE}
webc %>%
  # extract paragraphs
  rvest::html_nodes("p") %>%
  # extract text
  rvest::html_text() -> webtxt
# inspect
head(webtxt)
```

The output shows the first 6 text elements of the website, which means that we were successful in scraping the text content of the web page.

We can also extract the headline of the article by running the code shown below.

```{r header, warning=FALSE, message=FALSE}
webc %>%
  # extract the headline element
  rvest::html_nodes("h1") %>%
  # extract text
  rvest::html_text() -> header
# inspect
head(header)
```

# Following links{-}

Modern websites often do not contain the full content displayed in the browser in their corresponding source files served by the web server. Instead, the browser loads additional content dynamically via JavaScript code contained in the original source file. To be able to scrape such content, we rely on the headless browser `PhantomJS`, which renders a site for a given URL for us before we start the actual scraping, i.e. the extraction of certain identifiable elements from the rendered site.

***

NOTE
In case the website does not fetch or alter the to-be-scraped content dynamically, you can omit the PhantomJS webdriver and just download the static HTML source code to retrieve the information from there. In this case, replace the following block of code with a simple call of `html_document <- read_html(url)`, where the `read_html()` function downloads the unrendered page source code directly.
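In other words, the static alternative boils down to the following one-liner (shown here for completeness; it only works if the page content is contained in the unrendered source):

```{r staticalt, eval = F, message=FALSE, warning=FALSE}
# static alternative without PhantomJS: download and parse the raw page source
html_document <- read_html(url)
```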

***

Now we can start an instance of `PhantomJS` and create a new browser session that awaits URLs to load and render the corresponding websites.

```{r startPJS, message=FALSE, warning=FALSE}
pjs_instance <- run_phantomjs()
pjs_session <- Session$new(port = pjs_instance$port)
```

To make sure that we get the dynamically rendered HTML content of the website, we pass the original source code downloaded from the URL to our `PhantomJS` session first and then use the rendered source.

Usually, we do not want to download just a single document but a series of documents. In our second exercise, we want to download all Guardian articles tagged with *Angela Merkel*. Instead of a tag page, we could also be interested in downloading the results of a site-search engine or any other link collection. The task is always two-fold. First, we download and parse the tag overview page to extract all links to articles of interest:

```{r singlepage, message=FALSE, warning=FALSE}
url <- "https://www.theguardian.com/world/angela-merkel"
# go to URL
pjs_session$go(url)
# render page
rendered_source <- pjs_session$getSource()
# download text and parse the source code into an XML object
html_document <- read_html(rendered_source)
```

Second, we download and scrape each individual article page. For this, we extract all `href` attributes from `a` elements fitting a certain CSS class. To select the right contents via XPath selectors, you need to investigate the HTML structure of your specific page. Modern browsers such as Firefox and Chrome support you in that task with a function called "Inspect Element" (or similar), available through a right-click on the page element.

```{r extractlinks, message=FALSE, warning=FALSE}
links <- html_document %>%
  html_nodes(xpath = "//div[contains(@class, 'fc-item__container')]/a") %>%
  html_attr(name = "href")
# inspect
links
```

Now, `links` contains a list of `r length(links)` hyperlinks to single articles tagged with *Angela Merkel*.

But stop! There is not only one page of links to tagged articles. If you have a look at the page in your browser, the tag overview page has more than 60 sub-pages, accessible via a paging navigator at the bottom. By clicking on the second page, we see a different URL structure which now contains a specific paging number. We can use that format to create links to all sub-pages by combining the base URL with the page numbers.

```{r searchresults, message=FALSE, warning=FALSE}
page_numbers <- 1:3
base_url <- "https://www.theguardian.com/world/angela-merkel?page="
paging_urls <- paste0(base_url, page_numbers)
# inspect
paging_urls
```
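Before requesting a larger number of pages in a loop, it can be worthwhile to check whether the website permits automated access to the paths you are interested in. The sketch below uses the `robotstxt` package to query the site's robots.txt file; this package is not installed above, so treat this as an optional extra.

```{r robots, eval = F, message=FALSE, warning=FALSE}
# optional sketch: check whether crawling the tag pages is permitted
# (assumes the robotstxt package is installed: install.packages("robotstxt"))
robotstxt::paths_allowed(
  paths = "/world/angela-merkel",
  domain = "www.theguardian.com"
)
```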
Now we can iterate over all URLs of tag overview pages to collect more/all links to articles tagged with *Angela Merkel*. We iterate with a for-loop over all URLs and append the results from each single URL to a vector of all links.

```{r getlinks, message=FALSE, warning=FALSE}
all_links <- NULL
for (url in paging_urls) {
  # download and parse single overview page
  pjs_session$go(url)
  rendered_source <- pjs_session$getSource()
  html_document <- read_html(rendered_source)
  # extract links to articles
  links <- html_document %>%
    html_nodes(xpath = "//div[contains(@class, 'fc-item__container')]/a") %>%
    html_attr(name = "href")
  # append links to vector of all links
  all_links <- c(all_links, links)
}
```

```{r slr4, echo = F}
# inspect data
all_links %>%
  as.data.frame() %>%
  head(3) %>%
  flextable() %>%
  flextable::set_table_properties(width = .75, layout = "autofit") %>%
  flextable::theme_zebra() %>%
  flextable::fontsize(size = 12) %>%
  flextable::fontsize(size = 12, part = "header") %>%
  flextable::align_text_col(align = "center") %>%
  flextable::set_caption(caption = "First 3 links.") %>%
  flextable::border_outer()
```

An effective way of programming is to encapsulate repeatedly used code in a specific function. This function can then be called with specific parameters, process something, and return a result. We use this here to encapsulate the downloading and parsing of a Guardian article given a specific URL. The code is the same as in our exercise 1 above, only that we combine the extracted text and metadata in a data.frame and wrap the entire process in a function block.

```{r scrapefun, message=F, warning=F}
scrape_guardian_article <- function(url) {
  # render the page with PhantomJS
  pjs_session$go(url)
  rendered_source <- pjs_session$getSource()
  # read raw html
  html_document <- read_html(rendered_source)
  # extract title
  title <- html_document %>%
    rvest::html_node("h1") %>%
    rvest::html_text(trim = T)
  # extract text (collapse all paragraphs into one string)
  text <- html_document %>%
    rvest::html_nodes("p") %>%
    rvest::html_text(trim = T) %>%
    paste0(collapse = " ")
  # extract date from the URL
  date <- url %>%
    stringr::str_replace_all(".*([0-9]{4}/[a-z]{3,4}/[0-9]{1,2}).*", "\\1")
  # generate data frame from results
  article <- data.frame(
    url = url,
    date = date,
    title = title,
    body = text
  )
  return(article)
}
```

Now we can use the function `scrape_guardian_article` in any other part of our script. For instance, we can loop over each of our collected links. We use a running variable `i`, taking values from 1 to `length(all_links)`, to access the single links in `all_links` and write some progress output.

```{r downloadarticles, message=F, warning=F}
# create container for loop output
all_articles <- data.frame()
# loop over links
for (i in 1:length(all_links)) {
  # print progress (optional)
  # cat("Downloading", i, "of", length(all_links), "URL:", all_links[i], "\n")
  # scrape website
  article <- scrape_guardian_article(all_links[i])
  # append current article data.frame to the data.frame of all articles
  all_articles <- rbind(all_articles, article)
}
```

```{r slr5, echo = F}
# inspect data
all_articles %>%
  as.data.frame() %>%
  head(3) %>%
  flextable() %>%
  flextable::set_table_properties(width = .75, layout = "autofit") %>%
  flextable::theme_zebra() %>%
  flextable::fontsize(size = 12) %>%
  flextable::fontsize(size = 12, part = "header") %>%
  flextable::align_text_col(align = "center") %>%
  flextable::set_caption(caption = "First 3 rows of the data.") %>%
  flextable::border_outer()
```
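When you scrape a larger number of links, individual downloads may fail, and sending requests in very quick succession puts unnecessary load on the server. The chunk below is a sketch (not evaluated here) of a more defensive variant of the loop above that skips articles which throw an error and pauses briefly between requests; the two-second delay is an arbitrary choice.

```{r downloadarticles2, eval = F, message=F, warning=F}
# defensive variant of the download loop (sketch, not evaluated)
all_articles <- data.frame()
for (i in 1:length(all_links)) {
  # skip articles that cannot be scraped instead of stopping the loop
  article <- tryCatch(
    scrape_guardian_article(all_links[i]),
    error = function(e) NULL
  )
  if (!is.null(article)) {
    all_articles <- rbind(all_articles, article)
  }
  # pause between requests (arbitrary, polite delay)
  Sys.sleep(2)
}
```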
If you perform the web scraping on your own machine, you can save the table generated above for later use with the code below. The code chunk assumes that you have a folder called `data` in your current working directory.

```{r save, eval = F, message=F, warning=F}
# write the articles to a tab-separated file (requires the here package)
write.table(all_articles, here::here("data", "all_articles.txt"), sep = "\t")
```

The last command writes the extracted articles to a tab-separated file in the `data` directory on your machine for any later use.

***

EXERCISE TIME!

1. Try to extract news articles from another web page, e.g. `https://www.theaustralian.com.au`, `https://www.nytimes.com`, or `https://www.spiegel.de`. For this, investigate the URL patterns of the page and look into the source code with the "Inspect Element" functionality of your browser to find appropriate XPath expressions.
***

# Citation & Session Info {-}

Schweinberger, Martin. 2022. *Web Crawling and Scraping using R*. Brisbane: The University of Queensland. url: https://slcladal.github.io/webcrawling.html (Version 2022.11.15).

```
@manual{schweinberger2022webc,
  author = {Schweinberger, Martin},
  title = {Web Crawling and Scraping using R},
  note = {https://ladal.edu.au/webcrawling.html},
  year = {2022},
  organization = {The University of Queensland, School of Languages and Cultures},
  address = {Brisbane},
  edition = {2022.11.15}
}
```

```{r fin}
sessionInfo()
```

***

[Back to top](#introduction)

[Back to HOME](https://ladal.edu.au)

***

# References{-}