A Bash DIY data extraction tool

Data Collector

Lead Image © Mike Espenhain, 123RF.com



With some simple Bash commands, you can gather, parse, and filter text data into CSV files ready for your favorite statistical application.

If your research involves pulling large amounts of text data from the Internet, you can gather and process that data from the command line with a few simple Bash commands and turn it into a CSV file for your favorite statistical application, such as SPSS, R, or a MySQL table. In this article, I will show how to accomplish this with a project that examines the Romanian university dropout rate.

The data I need comes from 97 universities. For confidentiality reasons, chances are slim that I can get access to each university's database, but I can obtain that information legally from their websites. (However, keep in mind that many websites have licenses that prohibit web scraping. This article does not attempt to address copyright and other legal issues related to this practice. See each site's permissions page and consult the applicable laws for your jurisdiction.) To gather my data, I could search for the word abandon (Romanian for dropout) on each of the 97 websites, but that would be tedious. Furthermore, each website may use a different content management system (CMS), so my search might not return the desired results. Instead, an easier option is to download all 97 websites in their entirety and recursively search their text content on my local hard drive. Linux lets you do this with the command shown in Listing 1.

Listing 1

Downloading Websites

wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf http://www.address.ro

Retrieving Data

In Listing 1, wget is a command-line utility in Linux and other POSIX-compliant operating systems used to download files from servers. It can be used as a mass downloader, and you can specify exactly which types of files you want downloaded and which types wget should disregard.

In the case of an interruption, the --continue attribute (-c) lets wget pick up where it left off once access is restored, without re-retrieving the data it has already copied. This can save you a lot of time and keeps frequent connection glitches from sending the download back to the start.

The --verbose attribute (-v), when used in conjunction with --progress=bar, lets you see what the command does in real time by watching the Linux command-line interface. This is helpful if you want to see wget's downloading progress and look out for possible errors.

The --connect-timeout attribute is set to 30 seconds, which means that if no TCP connection can be established with the target server within 30 seconds, wget will stop trying. Similarly, --waitretry is set to 61 seconds, which means that wget will wait 61 seconds before retrying a file it failed to download. The 61 seconds are necessary because some servers experience temporary disconnections or limit the number of downloads to just a few per minute; waiting a little over a minute before trying again usually works, although it is time-consuming.

The --force-directories attribute, which is very important for gathering research data, ensures that you get an exact replica of the website you are downloading, one that maintains the same directory structure as the original.

--ignore-length helps ensure that wget successfully downloads the target website. Some servers send bogus Content-Length headers that make wget think a file has not been fully retrieved, so wget will try to download it again. The --ignore-length attribute ignores the Content-Length header and thus circumvents the problem.

--recursive (-r) forces wget to enter each subdirectory and retrieve every file in that subdirectory, not just the main branch. The -l 7 attribute specifies that wget should go up to seven levels deep in the server's directory structure to gather data.

Naturally, you want to store an exact copy of the target web server locally. However, any hyperlink present in a downloaded HTML or PHP file will still point to the online server and not the corresponding copy stored on the local drive. To fix this and make all the links point to their corresponding local counterparts, use --convert-links, which will result in a browsable copy of the website that you can read with the help of your favorite web browser, even without an Internet connection.

Excluding Non-Text Content

Websites also contain non-text content, like image files, other file types (MP3, ZIP, RAR, or PPT), and video formats (AVI, MP4, MKV, or VOB). Since I am only interested in text-based information, I want to exclude these types of files, which will have the added benefit of speeding up the entire process and conserving bandwidth. To do this, use the -R attribute followed by a comma-delimited list of extensions. For example, if I wish to discard image files, I would use the following:

-R gif,jpg,svg,png

Because file names on web servers may use uppercase extensions and wget's pattern matching is case-sensitive, you will want to modify this list to include capitalized extensions as well:

-R gif,GIF,jpg,JPG,svg,SVG,png,PNG

You can do the same for any other file extension, especially those representing potentially large files:

-R avi,AVI,mpg,MPG,mp4,MP4,mkv,MKV,vob,VOB,iso,ISO,zip,ZIP,rar,RAR,tar,TAR

Exclude everything that does not represent text content, including archives, video files, ISO images, pictures, and Microsoft Office documents.
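Office documents, for example, could be rejected with a list along these lines (the extensions shown here are only an illustration; extend the list to match whatever file types you encounter):

-R doc,DOC,docx,DOCX,xls,XLS,xlsx,XLSX,ppt,PPT,pptx,PPTX,odt,ODT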

Downloading the Websites

Finally, at the end of the wget command, add the address of the website you want cloned. When dealing with several addresses, you can paste one complete wget command per site into a Bash script (one after the other), make the script executable, and execute it. When wget finishes with one website, it will move on to the next.

Since I am dealing with 97 websites, the easiest way is to put all the URLs in one text file, each on a separate line, and use the small script shown in Listing 2 to feed wget each address. This way you only have to launch wget once instead of 97 times.

Listing 2

Downloading Multiple Websites

wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf $(<addresses.txt)

The separate file addresses.txt should contain only the target servers' web addresses, each on its own line, without any special characters, quotes, or spaces.

The downloading time may vary depending on your Internet connection's speed, your CPU speed, and the available RAM. Also, be aware that some websites might be down or might even block you. Many websites do try to block scraping, whether because they are protecting their business interests or because downloading tens of thousands of files with just as many server requests might be seen by the server or its administrator as a potential denial-of-service (DoS) attack. Consequently, your IP might be blocked at some point. As a reference point, launching 10 such concurrent scripts on a Linux machine running at 800MHz with 256MB of RAM took six days to download the 97 Romanian university websites – all on a 1Gbps Internet connection. Better hardware greatly improves the process. In the end, 392,868 files were retrieved with a total of 108,447,924,224 characters.
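If you want to run several downloads concurrently, as mentioned above, one simple approach is to split the address list into chunks and start one wget instance per chunk in the background. The following is only a sketch and assumes the URLs live in addresses.txt:

#!/bin/bash
# split addresses.txt into chunks of ten URLs each (chunk_aa, chunk_ab, ...)
split -l 10 addresses.txt chunk_
# launch one wget instance per chunk in the background
for list in chunk_*; do
  wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf $(<"$list") &
done
# block until every background download has finished
wait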

Filtering the Data

Now that I have gathered the data, I need to recursively search for occurrences of the word abandon in the downloaded files. For this, I will use another Linux command-line utility, grep. In the directory containing all the folders with the downloaded websites, launch

grep -r -A1 -B1 "abandon" * > results.txt

-r tells grep to search every file and subfolder below the folder from which the command was launched. Because grep matches substrings, it will find not only abandon itself, but also variations of the word with prefixes or suffixes.

However, I am only interested in school dropout and how many times it is mentioned on each university's website. For this, I need to see the word's context and eliminate the unrelated instances. The -A1 and -B1 flags display one line of context after and before each occurrence of the word. However, that much output scrolls by too quickly in the terminal, and since I want to take a closer look at the results, I redirect everything to a text file named results.txt. As a side note, the * character specifies that grep should search all downloaded files, whether they are in a human-readable format or not.

The result is a text file (results.txt) that contains the complete path to each file containing the word abandon and its variations, plus the surrounding context. The groups of matches are delimited by lines containing the characters --. I need to eliminate these and replace them with something else, because (as you will see later) the next commands tend to interpret -- as the start of command-line options. You can either open the results.txt file in a text editor and do a search and replace, or use a Linux command to do it for you. Keep in mind that in doing so you are also replacing the -- characters that might be present in legitimate URLs in the file; I will fix this later.
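If you go the command-line route, a single sed call can handle the replacement (a sketch using GNU sed's in-place -i flag; the replacement string anticipates the next step):

sed -i 's/--/12345678/g' results.txt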

I chose to replace -- with the arbitrary string 12345678 to get rid of all the delimiting lines in the file. The next command lists the first line after each replaced delimiter – the line containing the local address of the matching file:

grep -A 1 -F 12345678 results.txt > 1stline.txt

The above command displays the line immediately after each 12345678 delimiter and outputs the result in another text file called 1stline.txt. Now I have an almost clean file containing just the addresses of the files that contain the word abandon and all its variations, ready to be inserted into a statistical program. However, grep appends some of the surrounding text to the end of the addresses, and I need to get rid of that using the sed editor:

sed 's/<.*//' 1stline.txt > 1stlinefiltered.txt

This command deletes the < character and everything after it, leaving only the addresses without the beginning of the context text that appears on the same line. The resulting output goes into a new file called 1stlinefiltered.txt.

Because abandon and its variations can occur more than once in the same file, 1stlinefiltered.txt contains duplicate addresses. I don't need these, since I only want one entry per file. To delete duplicate lines from 1stlinefiltered.txt, I will use sort and uniq:

sort 1stlinefiltered.txt | uniq -u > address_filtered.txt

The command-line utility sort sorts the file's lines, and uniq -u prints only the lines that are not repeated. The output goes to a new file called address_filtered.txt. Because grep's context lines separate the file path from the text with a - character, each address still carries a stray - at the end, so I must clean address_filtered.txt further. To do this, I will use sed again to delete the last character of each line in the file:

sed 's/.$//' address_filtered.txt > list_final_address.txt

This time, everything is output to list_final_address.txt – a list of URLs pointing to pages that contain the word abandon. Since I previously replaced -- with 12345678 in order to correctly display the first line below each delimiter, any web addresses that contained -- in their URL also had their paths changed. All I need to do is take the list_final_address.txt file and search and replace 12345678 with --.
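That search and replace can again be done with sed (a sketch, assuming GNU sed's in-place editing):

sed -i 's/12345678/--/g' list_final_address.txt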

Now list_final_address.txt is clean, and because a single column of addresses needs no further formatting, all I have to do is save it with a .csv extension for easy importing into a statistical application:

cat list_final_address.txt > address.csv

This final file, address.csv, will give you a column of web addresses in a statistical application like SPSS, with each address pointing to a file that contains the word abandon. In addition to the web address column, I also need a column with abandon's corresponding context. The file results.txt still contains this information, and I can use it to extract what I need. You can do this with Bash or a Python script.

Bash Extraction

With Bash, you use grep again (Listing 3). The first command in Listing 3 takes results.txt, searches it for the word abandon and all its variations, and displays the 50 characters on either side of each occurrence. One hundred characters with the keyword in the middle should suffice to tell whether the context is relevant. Everything is output to a new text file called abandon50.txt. From this file, the second command in Listing 3 trims out duplicate lines:

Listing 3

Bash Extraction

grep -E -o ".{0,50}abandon.{0,50}" results.txt > abandon50.txt
sort abandon50.txt | uniq -u > abandon50_filtered.txt

Success: The resulting file, abandon50_filtered.txt, contains a column of text corresponding to each address in address.csv. The problem with this approach is that each server I initially mirrored with wget uses a different CMS. Some university servers might use open source solutions, such as WordPress or Joomla, while others use custom solutions. Consequently, no two sites are the same.

In addition, most sites contain special characters in their URLs (e.g., $ and %), and grep output has difficulties with these characters. A manual search and replace for special characters such as %20 should fix the problem. Alternatively, you can use another Linux command-line utility, html2text, which strips HTML tags from files, leaving only clean, human-readable text behind. Once this is done, grep should have no problem performing correctly.
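As a sketch of the html2text route – assuming the html2text command-line utility is installed and the mirrored sites consist mostly of .html files – you could convert the downloaded pages to plain text before running grep:

# convert every downloaded HTML file to a plain text file alongside the original
find . -type f -name '*.html' -print0 | while IFS= read -r -d '' f; do
  html2text "$f" > "${f%.html}.txt"
done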

Python Extraction

If you want to use Python to extract the occurrences of the word abandon surrounded by 50 characters on each side, you can use the Python script in Listing 4. This script will also filter out special characters from URL addresses.

Put the script from Listing 4 in a text file and name it needleinhaystack.py. Make it executable in Linux with the following command:

chmod +x needleinhaystack.py

Listing 4

Python Extraction Script

#!/usr/bin/python

"""Custom work for Razvan T. Coloja, placed in the public domain by the author - Radu-Eosif Mihailescu.
"""

import sys

MAGIC_WORD = 'abandon'

def main(argv):
    with open(argv[1], 'r') as faddr:
        addresses = set(l.rstrip() for l in faddr)
    with open(argv[2], 'r') as fres:
        the_text = set(l.rstrip() for l in fres)

    for address in addresses:
        for line in the_text:
            if line.startswith(address):
                where_found = line.find(MAGIC_WORD)
                if where_found != -1:
                    if where_found > 50:
                        start_excerpt = where_found - 50
                    else:
                        start_excerpt = 0
                    print '"%s","%s"' % (
                        address,
                        line[start_excerpt:where_found + len(MAGIC_WORD) + 50])

if __name__ == '__main__':
    main(sys.argv)

You need to have Python installed to make this script work (the script is written for Python 2, as its print syntax shows). The script compares the file containing the addresses with the one containing both the addresses and the associated context, trims the context to about 100 characters with abandon in the middle, and structures everything into two columns that are ready to be imported into a statistics application.
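For example, assuming the addresses ended up in list_final_address.txt and the grep output is still in results.txt, a call like the following would write the two-column result to a CSV file (the output filename is arbitrary):

./needleinhaystack.py list_final_address.txt results.txt > context.csv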

Putting It All Together

You now have the data you need to do your desired analysis. To save typing each command individually, you can put the above commands into a single Bash script as shown in Listing 5.

Listing 5

Complete Bash Script

#!/bin/bash
# download the websites specified in addresses.txt one by one
wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf $(<addresses.txt)
# recursively look for the word "abandon" and its variations and print the line before and after the keyword so we can take a quick look at the context
grep -r -A1 -B1 "abandon" * > results.txt
# replace the "--" delimiters with "12345678" (done here with sed instead of a text editor)
sed 's/--/12345678/g' results.txt > results2.txt
# list the first line after "12345678"
grep -A 1 -F 12345678 results2.txt > 1stline.txt
# delete everything after the "<" character
sed 's/<.*//' 1stline.txt > 1stlinefiltered.txt
# list every line only once, without its duplicates
sort 1stlinefiltered.txt | uniq -u > address_filtered.txt
# remove the last character from each line (the trailing "-")
sed 's/.$//' address_filtered.txt > list_final_address.txt
# create a CSV file containing the web addresses
cat list_final_address.txt > address.csv
# replace "12345678" with "--" in address.csv because "--" might appear in some URLs
sed -i 's/12345678/--/g' address.csv

Then add the addresses to addresses.txt, each on one line, and save the file in the same folder as the Bash script in Listing 5. Make the script executable with

chmod +x scriptname.sh

Then launch it with ./scriptname.sh.

With a few simple Bash commands, you have a DIY text data collection tool that delivers a CSV file for use in your favorite statistical application.

The Author

Razvan T. Coloja is a psychologist currently finishing his Bachelor's degree and PhD candidacy in sociology. He has been a passionate Linux user and OSS supporter since 1998 and has an interest in SBC clusters, CircuitPython, and machine learning.