Scraping the web for data

More City Data

As quick and dirty as it is, the Wikipedia city scraper did its job pretty well. Unfortunately, I needed additional data for each city that was not available on the Wikipedia pages: an email contact address for each city and demographic information. Eventually, I found another portal [3] that published that data, plus other data I did not need. Figure 6 shows the section of that portal that contains the email contact; Figure 7 shows the page with the demographic data.

Figure 6: The website that provides the official email address.
Figure 7: The website where I found the demographic data.

Listing 6 (again omitting lines 1-28 shown in Listing 1) shows the scraper I wrote to extract the additional data. Since it has the same basic structure as Listing 4, I'll only outline its main parts, leaving the details as an exercise for the reader. This website publishes one single list of all the cities as a sequence of 164 numbered pages, whose URLs differ only in the page number. The loop starting in line 3 loads those pages one at a time and then collects the URLs of the individual cities' pages from the first table it finds (line 9). When the script loads a city page, the demobox section in lines 17 to 24 extracts the demographic data, and lines 26 to 29 detect and print all the email addresses on the page. The result, again, is a CSV text file with fields separated by pipe characters, one row per city (Listing 7). At this point, the outputs of the two city-scraping scripts can easily be merged, with the Bash join command or another script, into a single database with all the data in one coherent format. Since this task is not specific to web scraping, I leave it as an exercise for the reader.
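The email-detection step (lines 26 to 29 of Listing 6) boils down to matching an address pattern in the page text. Here is a minimal stdlib-only sketch of the idea; the page URL format, the HTML fragment, and the regex are my own illustrations, not the article's actual code or the portal's real layout:

```python
import re

# Hypothetical page-URL format: the real portal's URLs differ only in the
# page number, so the list of pages can be built in one line.
BASE_URL = 'https://example.org/cities/page-'   # placeholder, not the real site
page_urls = [f'{BASE_URL}{n}' for n in range(1, 165)]   # 164 numbered pages

# Hypothetical fragment of a city page; real pages are larger and messier.
page_text = """
Comune di Esempio - Contatti
PEC: protocollo@pec.comune.esempio.it
Email: info@comune.esempio.it
"""

# A simple address pattern: good enough for well-formed municipal pages,
# not a full RFC 5322 validator.
EMAIL_RE = re.compile(r'[\w.+-]+@[\w-]+(?:\.[\w-]+)+')

emails = sorted(set(EMAIL_RE.findall(page_text)))
# Emit one pipe-separated row per city, like the article's CSV output.
print('|'.join(['Esempio'] + emails))
```

In the real script, `page_text` would come from fetching each city URL and extracting the relevant section with Beautiful Soup; the deduplication with `set()` matters because the same address often appears several times on one page.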

Listing 6

Email/Demographic Information Scraper


Listing 7

Sample Output from Listing 6


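The final merge of the two scrapers' outputs can be done with the Bash join command, as mentioned above, or in a few lines of Python. Here is a sketch that assumes both files are pipe-separated with the city name in the first field; the sample rows are invented for illustration:

```python
import csv
import io

# Hypothetical samples of the two scrapers' outputs; the real files are larger.
wikipedia_csv = "Roma|Lazio|2872800\nMilano|Lombardia|1366180\n"
contacts_csv = "Roma|info@comune.roma.it\nMilano|info@comune.milano.it\n"

def load(text):
    """Index pipe-separated rows by their first field (the city name)."""
    return {row[0]: row[1:] for row in csv.reader(io.StringIO(text), delimiter='|')}

wiki = load(wikipedia_csv)
contacts = load(contacts_csv)

# Join on the city name, keeping only cities present in both files.
merged = {city: wiki[city] + contacts[city] for city in wiki if city in contacts}

for city, fields in sorted(merged.items()):
    print('|'.join([city] + fields))
```

With real files you would pass open file handles to `csv.reader` instead of `io.StringIO`; everything else stays the same, and the result is the single coherent database the article describes.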

The official Beautiful Soup documentation contains additional information, but with these examples, you now know enough to use it productively. If you decide to do large-scale web scraping, I recommend checking out how to use shared proxies. You should also set your User-Agent headers, possibly changing their value at random intervals, as follows:

myheader = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) ...'}

Add "headers=myheader" to the parameters of your get(url) calls (for details, see the documentation). This will make your requests look as if they were coming from several normal web browsers in different locations, instead of one voracious script. Happy scraping!
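Putting the advice together, the rotation can be as simple as picking a User-Agent at random before each request. The following sketch shows the idea; the User-Agent strings in the list are common desktop examples I chose myself, and the network call is only indicated in a comment:

```python
import random

# A few common desktop User-Agent strings (illustrative examples, not an
# exhaustive or authoritative list).
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15',
]

def random_header():
    """Return a headers dict with a randomly chosen User-Agent."""
    return {'User-Agent': random.choice(USER_AGENTS)}

# Usage with the requests library (network call omitted here):
#   response = requests.get(url, headers=random_header())
```

Calling random_header() before each get() means successive requests carry different browser signatures, which is exactly the "changing their value at random intervals" behavior recommended above.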

The Author

Marco Fioretti is a freelance author, trainer, and researcher based in Rome, Italy, who has been working with Free/Open Source software since 1995 and on open digital standards since 2005. Marco is also a board member of the Free Knowledge Institute.
