Smart research using Elasticsearch

Structure of the Index

Invoking mlt-index (Listing 3) from the command line grabs the files one by one and feeds them via the index() method to the Elasticsearch server installed on my desktop computer. (See "Installing the Elasticsearch Server" box.)

Installing the Elasticsearch Server

The Elasticsearch server can be easily installed on a desktop computer or a virtual machine using the following simple steps:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
java -version
wget <URL of the Elasticsearch .deb package>
sudo dpkg -i elasticsearch-*.deb
sudo /etc/init.d/elasticsearch start

If you would rather not clutter your filesystem with yet more software, you can simply boot a VM using vagrant up with the Vagrant file shown in Listing 2, and install the server in the VM. The forwarded_port mapping ensures that the Elasticsearch server in the VM, listening on port 9200, is also responsive on the host system on port 9200.

Listing 2

Vagrant File

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # 64-bit Ubuntu box
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "forwarded_port", guest: 9200, host: 9200
end

Looking at Listing 3, I simply named the newly created index on the server blog. Line 15 deletes the index first if it already exists – which is typically the case after a previous run of mlt-index.

Listing 3


01 #!/usr/local/bin/perl -w
02 use strict;
03 use Search::Elasticsearch;
04 use File::Find;
05 use Sysadm::Install qw( slurp );
06 use Cwd;
07 use File::Basename;
08
09 my $idx = "blog";
10 my $base_dir = getcwd;
11 my $base = $base_dir . "/idx";
12
13 my $es = Search::Elasticsearch->new( );
14 eval {
15   $es->indices->delete( index => $idx ) };
16
17 find sub {
18   my $file = $_;
19
20   return if ! -f $file;
21   my $content = slurp $file, { utf8 => 1 };
22
23   $es->index(
24     index => $idx,
25     type  => 'text',
26     body  => {
27       content => $content,
28       file    => $file,
29     }
30   );
31   print "Added $file\n";
32 }, $base;

The find() function in line 17, from the File::Find module included with Perl, then recursively works through all the files below the specified directory. Behind the scenes, it uses chdir to switch into the directory containing each file before invoking the user-defined callback, and it sets the variable $_ to the name of the file currently being processed.

Feeding by Slurping

The index() method in line 23 adds the file's text content, slurped with slurp() from the Sysadm::Install CPAN module's treasure trove, along with the file name to the search engine's index. The search functions later return the file names of matching documents, and these are easy to recognize because the names were chosen according to the content of each blog entry (e.g., 10-cents-for-a-grocery-bag.txt).

The index() feed function also creates a new index (if it doesn't already exist) and defines both the index's name (blog) and a type (set to text) – which in Elasticsearch is nothing more than an arbitrarily named partition within the index.

More of That!

Feeding data to Elasticsearch took about a tenth of a second per 2KB text file on my Linux desktop; this therefore really tested my patience with the 877 blog entries. By contrast, feeding data on my laptop with a faster solid state disk and plenty of memory was much faster – it was all over within 10 seconds.

Queries can be submitted as soon as the index is ready. The output from the mlt-search command in Figure 3 shows that Listing 4 unsurprisingly rediscovers the document I provided as a reference – content related to my family dealing with earthquakes in the San Francisco Bay area. But it also dug up some other earth-shaking results, such as a report about traversing the bureaucratic jungle to obtain a driver's license in California, a piece on distinguishing good from bad neighborhoods to live in, and a rant about the annoying habit of some motorcycle riders of letting their engines roar at earth-shattering levels. The results appear in fractions of a second, meaning the function is certainly also useful on busy websites.

Listing 4


01 #!/usr/local/bin/perl -w
02 use strict;
03 use Search::Elasticsearch;
04 use Sysadm::Install qw( slurp );
05
06 my $idx = "blog";
07
08 my( $doc ) = @ARGV;
09 die "usage: $0 doc" if !defined $doc;
10
11 my $es = Search::Elasticsearch->new( );
12
13 my $results = $es->search(
14   index => $idx,
15   body  => {
16     query => {
17       more_like_this => {
18         like_text =>
19           slurp( $doc, { utf8 => 1 } ),
20         min_term_freq   => 5,
21         max_query_terms => 20,
22       }
23     }
24   }
25 );
26
27 for my $result (
28   @{ $results->{ hits }->{ hits } } ) {
29
30   print $result->{ _source }->{ file },
31     "\n";
32 }

Figure 3: Elasticsearch found similar articles for a blog entry about earthquakes in the San Francisco Bay area.

Listing 4 expects a text file's path on the command line and imports this file using slurp, while telling Perl to keep the utf8 encoding intact. It sends the query to the Elasticsearch server in line 13 using the search() method and then extracts the names of the files with comparable content from the matches in the JSON result.
