Scraping the web for data
More City Data
As quick and dirty as it is, the Wikipedia city scraper did its job pretty well. Unfortunately, I also needed data that was not available on the Wikipedia pages: an email contact address and demographic information for each city. Eventually, I found another portal [3] that published that data, plus other data I did not need. Figure 6 shows the section of that portal that contains the email contact; Figure 7 shows the page with the demographic data.
Listing 6 (again omitting lines 1-28 shown in Listing 1) shows the scraper I wrote to extract the additional data. Since it has the same basic structure as Listing 4, I'll only outline its main parts, leaving the details as an exercise for the reader. This website publishes one single list of all the cities as a sequence of 164 numbered pages, whose URLs have the format https://www.comuniecitta.it/comuni-italiani?pg=N. The loop starting in line 3 loads those pages one at a time and then collects the URLs of the individual city pages from the first table it finds (line 9). When the script loads a city page, the demobox section in lines 17 to 24 extracts the demographic data, and lines 26 to 29 detect and print all the email addresses on the page. The result, again, is a CSV text file whose fields are separated by pipe characters (Listing 7); a rough sketch of this structure appears below.
At this point, the outputs of the two city-scraping scripts can easily be merged, with the Bash join command or another script, into one single database with all the data in one coherent format. Since this task is not limited to web scraping, I leave it as an exercise for the reader.
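To make that outline concrete, here is a minimal sketch of the same approach. Only the URL scheme and the demobox name come from the description above; the table layout, the use of demobox as a CSS class, and the email regex are assumptions, so treat this as a starting point rather than a copy of Listing 6.

import re
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

BASE = 'https://www.comuniecitta.it'

for n in range(1, 165):                  # walk all 164 index pages
    html = requests.get(BASE + '/comuni-italiani?pg=%d' % n).text
    index = BeautifulSoup(html, 'html.parser')
    table = index.find('table')          # assumed: the first table holds the city links
    if table is None:
        continue
    for link in table.find_all('a', href=True):
        page_html = requests.get(urljoin(BASE, link['href'])).text
        page = BeautifulSoup(page_html, 'html.parser')
        # assumed: the demographic data sits in an element with class 'demobox'
        demobox = page.find(class_='demobox')
        demo = demobox.get_text(' ', strip=True) if demobox else ''
        # grab every email address found anywhere on the page
        emails = set(re.findall(r'[\w.+-]+@[\w-]+\.[\w.-]+', page.get_text()))
        print(link.get_text(strip=True), demo, ' '.join(emails), sep='|')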
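As a starting point for that exercise, if both output files carry a shared key in their first field (say, the city name), the Bash join command can do most of the work; the file names here are hypothetical:

sort -t '|' -k1,1 wikipedia_cities.csv > a.csv
sort -t '|' -k1,1 comuniecitta_cities.csv > b.csv
join -t '|' a.csv b.csv > all_cities.csv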
Listing 6: Email/Demographic Information Scraper
Listing 7: Sample Output from comuniecitta.it
Conclusions
The official Beautiful Soup documentation [1] contains additional information, but with these examples, you now know enough to use it productively. If you decide to do large-scale web scraping, I recommend checking out how to use shared proxies. You should also set your User Agent headers, possibly changing their value at random intervals, as follows:
myheader = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) ...'}
Add "headers=myheader"
to the parameters of your get(url)
calls (for details, see the documentation). This will make your requests look as if they were coming from several normal web browsers, in different locations, instead of one voracious script. Happy scraping!
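Putting those pieces together, a request wrapper along these lines might look like the following sketch; the User-Agent strings in the pool are merely illustrative, and polite_get is a hypothetical helper, not one of the article's listings:

import random
import time
import requests

# purely illustrative pool of desktop browser User-Agent strings
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
    'Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15',
]

def polite_get(url):
    # pick a different User-Agent for each request and pause briefly,
    # so the traffic looks less like one voracious script
    myheader = {'User-Agent': random.choice(USER_AGENTS)}
    time.sleep(random.uniform(1, 3))
    return requests.get(url, headers=myheader)

Happy scraping!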
Infos
[1] Beautiful Soup: http://www.crummy.com/software/BeautifulSoup/bs4/doc/
[2] Micro-encyclopedia project: http://stop.zona-m.net/2017/12/5000-concepts-for-europe-a-book-proposal/
[3] Italian Municipalities and Cities: http://www.comuniecitta.it