Scraping the web for data
More City Data
As quick and dirty as it is, the Wikipedia city scraper did its job pretty well. Unfortunately, I needed additional data for each city that was not available on the Wikipedia pages: an email contact address for each city and demographic information. Eventually, I found another portal [3] that published that data, plus other data I did not need. Figure 6 shows the section of that portal that contains the email contact; Figure 7 shows the page with the demographic data.
Listing 6 (again omitting lines 1-28 shown in Listing 1) shows the scraper I wrote to extract the additional data. Since it has the same basic structure as Listing 4, I'll only outline its main parts, leaving the details as an exercise for the reader. This website provides one single list of all the cities as a sequence of 164 numbered pages, whose URLs have the format https://www.comuniecitta.it/comuni-italiani?pg=N. The loop starting in line 3 loads those pages one at a time and collects the URLs of the individual city pages from the first table it finds (line 9). When the script loads a city page, the demobox section in lines 17 to 24 extracts the demographic data, and lines 26 to 29 detect and print all the email addresses found on the page. The result, again, is a text file with pipe-separated fields, one row per city (Listing 7). At this point, the outputs of the two city-scraping scripts can easily be merged, with the Bash join command or another script, into one single database with all the data in one coherent format. Since this task is not limited to web scraping, I leave it as an exercise for the reader.
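As a starting point for that exercise, the merge can be sketched in a few lines of Python. The file names, the column layout, and the assumption that the city name is the first field of both files are all hypothetical here; adapt them to the actual output of the two scrapers.

```python
import csv

def load_rows(path):
    """Read a pipe-separated file into a dict keyed on its first field."""
    with open(path, newline='', encoding='utf-8') as f:
        return {row[0]: row for row in csv.reader(f, delimiter='|') if row}

def merge(file_a, file_b, out_file):
    """Join two pipe-separated files on the city name (first field)."""
    a, b = load_rows(file_a), load_rows(file_b)
    with open(out_file, 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f, delimiter='|')
        # only cities present in both files; append the second file's
        # fields after the first file's row
        for city in sorted(a.keys() & b.keys()):
            writer.writerow(a[city] + b[city][1:])

# Tiny self-contained demo with made-up data:
with open('wiki.csv', 'w') as f:
    f.write('Aglientu|Sardinia|Tempio Pausania\nAiello|Campania|Salerno\n')
with open('portal.csv', 'w') as f:
    f.write('Aglientu|1143|info@comuneaglientu.it\n')
merge('wiki.csv', 'portal.csv', 'merged.csv')
print(open('merged.csv').read().strip())
# → Aglientu|Sardinia|Tempio Pausania|1143|info@comuneaglientu.it
```

Unlike the Bash join command, this version does not require the two files to be pre-sorted, at the cost of holding both in memory.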
Listing 6
Email/Demographic Information Scraper
Listing 7
Sample Output from comuniecitta.it
Conclusions
The official Beautiful Soup documentation contains additional information, but with these examples, you now know enough to use it productively. If you decide to do large-scale web scraping, I recommend checking out how to use shared proxies. You should also set your User-Agent header, possibly changing its value at random intervals, as follows:

myheader = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) ...

Add headers=myheader to the parameters of your get(url) calls (for details, see the documentation). This will make your requests look as if they were coming from several normal web browsers in different locations, instead of from one voracious script. Happy scraping!
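The rotation itself can be sketched in a few lines. The list of User-Agent strings below is a short, hypothetical sample; real pools are much longer and should be kept up to date.

```python
import random

# Hypothetical sample pool of browser identities.
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36',
    'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15',
]

def random_headers():
    """Pick a different browser identity for each request."""
    return {'User-Agent': random.choice(USER_AGENTS)}

# Usage with the Requests library (not imported here):
#   page = requests.get(url, headers=random_headers())
print(random_headers()['User-Agent'].startswith('Mozilla/5.0'))  # → True
```

Calling random_headers() once per request, rather than once at startup, is what spreads the requests across identities.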
Infos
- Beautiful Soup: http://www.crummy.com/software/BeautifulSoup/bs4/doc/
- Micro-encyclopedia project: http://stop.zona-m.net/2017/12/5000-concepts-for-europe-a-book-proposal/
- Italian Municipalities and Cities: http://www.comuniecitta.it