A Bash DIY data extraction tool

Data Collector

Lead Image © Mike Espenhain, 123RF.com


Article from Issue 234/2020

With some simple Bash commands, you can gather, parse, and filter text data into CSV files ready for your favorite statistical application.

If your research involves pulling large amounts of text data from the Internet, you can gather and process that data from the command line with a few simple Bash commands and turn it into a CSV file for your favorite statistical application, such as SPSS or R, or for import into a MySQL table. In this article, I will show how to accomplish this with a project that examines the Romanian university dropout rate.

The data I need comes from 97 universities. For confidentiality reasons, chances are slim that I can get access to each university's database, but I can obtain that information legally from their websites. (However, keep in mind that many websites have licenses that prohibit web scraping. This article does not attempt to address copyright and other legal issues related to this practice. See the site's permissions page and consult the applicable laws for your jurisdiction.) To gather my data, I could search for the word abandon (Romanian for dropout) on each of the 97 websites, but that would be tedious. Furthermore, each website may use a different content management system (CMS), so my search might not return the desired results. Instead, an easier option is to download all 97 websites in their entirety and recursively search their text content on my local hard drive. Linux lets you do this with the command shown in Listing 1.

Listing 1

Downloading Websites

wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf http://www.address.ro

Retrieving Data

In Listing 1, wget is a command-line utility for Linux and other POSIX-compliant operating systems that downloads files from servers. It can be used as a mass downloader, and you can specify exactly which types of files you want downloaded and which types wget should disregard.

If a download is interrupted, the --continue option (-c) lets wget pick up where it left off once access is restored, without re-retrieving the data it has already copied. This can save you a lot of time and keeps frequent connection glitches from forcing the process to start over.

The --verbose option (-v), used in conjunction with --progress=bar, lets you watch what the command does in real time on the command line. This is helpful if you want to follow wget's download progress and keep an eye out for possible errors.

The --connect-timeout option is set to 30 seconds, which means that if no TCP connection can be established with the target server within 30 seconds, wget will stop trying. Similarly, --waitretry is set to 61 seconds, which means that wget will wait 61 seconds before retrying a file it failed to download. The 61 seconds are necessary because some servers experience temporary disconnections or limit the number of downloads to just a few per minute. With this setting, wget waits a little over a minute before trying the file again; this method usually works, although it is time-consuming.
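If the target servers are particularly unreliable, you can tighten this behavior further. The following is a minimal sketch, assuming GNU Wget; --tries, --wait, and --retry-connrefused are standard Wget options, and the address is the same placeholder used in Listing 1:

# Give up on a URL after 5 attempts (--tries), pause 1 second between
# requests (--wait), and also retry when the server actively refuses
# the connection (--retry-connrefused).
wget -c --connect-timeout=30 --waitretry=61 --tries=5 --wait=1 \
     --retry-connrefused -r -l 7 http://www.address.ro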

The --force-directories option, which is very important for gathering research data, ensures that you get an exact replica of the website you are downloading, one that maintains the same directory structure as the original.
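For example (a hypothetical path, using the placeholder address from Listing 1), a page downloaded from http://www.address.ro/studenti/abandon.html ends up at ./www.address.ro/studenti/abandon.html on the local drive, which you can later search recursively for the keyword:

# List every downloaded file that mentions the search term, ignoring case.
grep -ril "abandon" www.address.ro/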

--ignore-length helps ensure that wget downloads the target website completely. Some servers send bogus Content-Length headers that make wget think a file has not been fully retrieved, so wget will try to download it again. The --ignore-length option ignores the Content-Length header and thus circumvents the problem.

--recursive (-r) forces wget to enter each subdirectory and retrieve every file in it, not just the files in the main branch. The -l 7 option specifies that wget should go up to seven levels deep into the server's directory structure to gather data.
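If you do not know how deep a site's directory tree goes, a hedged alternative is to drop the depth limit entirely; -l inf and --mirror (which GNU Wget documents as shorthand for -r -N -l inf --no-remove-listing) are standard options, and the address is again a placeholder:

# Mirror the whole site with no depth limit instead of stopping at level 7.
wget --mirror --convert-links -R gif,jpg,png,svg,pdf http://www.address.ro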

Naturally, you want to store an exact copy of the target web server locally. However, any hyperlink in a downloaded HTML or PHP file will still point to the online server rather than to the corresponding copy on the local drive. To fix this and make all the links point to their local counterparts, use --convert-links, which results in a browsable copy of the website that you can read in your favorite web browser, even without an Internet connection.
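A related option can make the offline copy even easier to browse. The sketch below assumes GNU Wget: -E (--adjust-extension) saves dynamically generated pages (for example, .php pages) with an .html suffix so they open cleanly in a local browser, and -k is simply the short form of --convert-links:

# Save server-generated pages with an .html extension and rewrite links
# so the local copy can be browsed offline.
wget -r -l 7 -E -k --force-directories http://www.address.ro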

Excluding Non-Text Content

Websites also contain non-text content, such as images, audio and video files (MP3, AVI, MP4, MKV, or VOB), and other binary formats (ZIP, RAR, or PPT). Since I am only interested in text-based information, I want to exclude these types of files, which has the added benefit of speeding up the entire process and conserving bandwidth. To do this, use the -R option followed by a comma-delimited list of extensions. For example, if I wish to discard image files, I would use the following:

-R gif,jpg,svg,png

Because wget's suffix matching is case-sensitive (as are Linux and Unix filesystems), you will want to include the capitalized extensions as well:

-R gif,GIF,jpg,JPG,svg,SVG,png,PNG

You can do the same for any other file extension, especially those representing potentially large files:

-R avi,AVI,mpg,MPG,mp4,MP4,mkv,MKV,vob,VOB,iso,ISO,zip,ZIP,rar,RAR,tar,TAR

Exclude everything that does not represent text content, including archives, video files, ISO images, pictures, and Microsoft Office documents.
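Rather than typing both cases by hand, you can let Bash build the exclusion list for you. The following is a minimal sketch, assuming Bash 4 or later for the ${e^^} upper-casing expansion; the extension set and the address are placeholders:

# Build the -R list from a lowercase set of extensions, adding the
# uppercase variant of each one automatically.
exts="gif jpg png svg pdf avi mpg mp4 mkv vob iso zip rar tar ppt doc xls"
reject=""
for e in $exts; do
    reject+="${e},${e^^},"     # append lowercase and uppercase forms
done
reject=${reject%,}             # strip the trailing comma
wget -cv --progress=bar --connect-timeout=30 --force-directories \
     --ignore-length -r -l 7 --convert-links --waitretry=61 \
     -R "$reject" http://www.address.ro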

Downloading the Websites

Finally, at the end of the wget command, add the target website's address to tell wget which site you want cloned. When dealing with several addresses, you can paste one such command per line into a Bash script, make the script executable, and run it. When wget finishes with one website, it will move on to the next.

Since we are dealing with 97 websites, the easiest way is to put all the URLs in one text file, each on a separate line, and use the command shown in Listing 2 to tell wget where to find the addresses. This way you only have to launch wget once instead of 97 times.

Listing 2

Downloading Multiple Websites

wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf $(<addresses.txt)

The separate file addresses.txt should contain only the target servers' web addresses, each on its own line, without any special characters, quotes, or spaces.
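If you prefer to process one site at a time, so that a single unreachable server cannot hold up the rest, a loop over addresses.txt works just as well. This is only a sketch, under the same assumptions as Listing 2 (GNU Wget, one URL per line in addresses.txt):

#!/bin/bash
# Download each site in addresses.txt in turn, skipping blank lines.
while IFS= read -r url; do
    [ -z "$url" ] && continue
    wget -cv --progress=bar --connect-timeout=30 --force-directories \
         --ignore-length -r -l 7 --convert-links --waitretry=61 \
         -R gif,jpg,png,svg,pdf "$url"
done < addresses.txt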

The download time varies depending on your Internet connection's speed, your CPU speed, and the available RAM. Also, be aware that some websites might be down or might even block you. Many websites do try to block scraping, whether to protect their business interests or because downloading tens of thousands of files, with just as many requests to the server, might be seen by the server or its administrator as a potential denial-of-service (DoS) attack. Consequently, your IP might be blocked at some point. As a reference point, launching 10 such concurrent scripts on a Linux machine running at 800MHz with 256MB of RAM took six days to download the 97 Romanian university websites – all on a 1Gbps Internet connection. Better hardware greatly speeds up the process. In the end, 392,868 files were retrieved, totaling 108,447,924,224 characters.
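One way to run several downloads concurrently, instead of maintaining ten separate scripts, is GNU xargs with its -P option. This is only a sketch, assuming GNU findutils is installed; the interleaved output gets noisy, so you may want to redirect each run to its own logfile:

# Start up to 10 wget processes in parallel, one URL from addresses.txt each.
xargs -a addresses.txt -n 1 -P 10 \
    wget -c --connect-timeout=30 --force-directories --ignore-length \
         -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf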


