Smart research using Elasticsearch

Structure of the Index

Invoking mlt-index (Listing 3) from the command line grabs the files one by one and feeds them via the index() method to the Elasticsearch server installed on my desktop computer. (See "Installing the Elasticsearch Server" box.)

Installing the Elasticsearch Server

The Elasticsearch server can be easily installed on a desktop computer or a virtual machine using the following simple steps:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
java -version
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.deb
sudo dpkg -i *.deb
sudo /etc/init.d/elasticsearch start

If you would rather not clutter your filesystem with yet more software, you can simply boot a VM using vagrant up with the Vagrant file shown in Listing 2 and install the server in the VM. The forwarded_port mapping ensures that the Elasticsearch server listening on port 9200 in the VM is also responsive on port 9200 on the host system.

Listing 2

Vagrant File

 
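Listing 2 itself is not reproduced in this extract; a minimal Vagrantfile with the forwarded_port mapping described above might look like this (the choice of base box is my assumption, not taken from the original listing):

```ruby
# Vagrantfile (sketch) - boots an Ubuntu VM and exposes
# the Elasticsearch port to the host
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"   # assumed base box
  # Requests to port 9200 on the host reach the
  # Elasticsearch server on port 9200 inside the VM
  config.vm.network "forwarded_port", guest: 9200, host: 9200
end
```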

Looking at Listing 3, I simply named the newly created index on the server blog. Line 15 first deletes the index if it already exists – which is typically the case after a previous run of mlt-index.

Listing 3

mlt-index

 
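The full listing is not reproduced in this extract, but the indexing flow it describes could be sketched roughly as follows (the blog directory path and the document field names are my assumptions, not taken from the original listing):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use Sysadm::Install qw(slurp);
use Search::Elasticsearch;

my $idx = "blog";
my $dir = "$ENV{HOME}/blog-entries";   # assumed location of the text files

my $es = Search::Elasticsearch->new( nodes => "localhost:9200" );

# Delete a leftover index from a previous run, ignoring errors
eval { $es->indices->delete( index => $idx ) };

# Recursively walk the directory; find() chdirs into each
# subdirectory and sets $_ to the current file name
find sub {
    return unless -f and /\.txt$/;
    my $content = slurp $_, { utf8 => 1 };
    # index() creates the index on first use and files the
    # document under the arbitrarily named type "text"
    $es->index(
        index => $idx,
        type  => "text",
        body  => { file => $_, content => $content },
    );
}, $dir;
```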

The find() function in line 17, from the File::Find module included with Perl, then recursively works through all the files under the specified directory. Behind the scenes, it uses chdir to switch into the directory in which each file is located before invoking the user-defined callback, and it sets the variable $_ to the name of the file currently being processed.

Feeding by Slurping

The index() method in line 23 adds the file's text content – slurped with slurp() from the Sysadm::Install CPAN module's treasure trove – to the search engine's index, along with the file name. Later, the search functions return the file names of matching documents, which are easy to recognize because the names were chosen to reflect the content of each blog entry (e.g., 10-cents-for-a-grocery-bag.txt).

The index() feed function also creates a new index (if it doesn't already exist) and defines both the index's name (blog) and a type (set to text) – which in Elasticsearch is nothing more than an arbitrarily named partition within an index.

More of That!

Feeding data to Elasticsearch took about a tenth of a second per 2KB text file on my Linux desktop, which really tested my patience with 877 blog entries to index. By contrast, feeding data on my laptop, with a faster solid state disk and plenty of memory, was much quicker – it was all over within 10 seconds.

Queries can be submitted as soon as the index is ready. The output from the mlt-search command in Figure 3 shows that Listing 4 unsurprisingly rediscovers the document I provided as a reference – content related to my family dealing with earthquakes in the San Francisco Bay area. But it also dug up some other earth-shaking results, such as a report about traversing the bureaucratic jungle to obtain a driver's license in California, another about distinguishing between good and bad neighborhoods to live in, and one about the annoying habit of some motorcycle riders of letting their engines roar at earth-shattering levels. The results appear in fractions of a second, meaning the function is certainly also useful on busy websites.

Listing 4

mlt-search

 

Figure 3: Elasticsearch found similar articles for a blog entry about earthquakes in the San Francisco Bay area.

Listing 4 expects the path of a text file at the command line and imports the file using slurp, telling Perl to keep the utf8 encoding intact. Line 13 sends the query to the Elasticsearch server using the search() method and then collects, from the JSON result, the names of the files with comparable content.
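Since the listing itself is not reproduced in this extract, the query flow just described could be sketched roughly as follows, using Elasticsearch's more_like_this query (the index and field names match the indexing sketch and are my assumptions, not taken from the original listing):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Sysadm::Install qw(slurp);
use Search::Elasticsearch;

my $file = shift or die "usage: $0 file";
# Read the reference document, keeping utf8 intact
my $text = slurp $file, { utf8 => 1 };

my $es = Search::Elasticsearch->new( nodes => "localhost:9200" );

# Ask the server for documents whose content resembles
# the reference text ("more like this")
my $result = $es->search(
    index => "blog",
    body  => {
        query => {
            more_like_this => {
                fields    => ["content"],
                like_text => $text,
            },
        },
    },
);

# Print the file names of the matching documents
for my $hit ( @{ $result->{hits}{hits} } ) {
    print "$hit->{_source}{file}\n";
}
```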


