An indexing search engine with Nutch and Solr
Go Find It
Build your own search engine using Apache's Nutch web crawler and Solr search platform.
CMS, wikis, text files … modern companies store important data in many different places, and that data must be accessible down to the tiniest detail through a single search. Commercial software vendors such as Google [1] offer tools that will index the data and store the index on an external server. But many organizations prefer to keep control of the search capabilities – for security and privacy reasons, but also to add flexibility and promote innovation and customization.
A handy constellation of open source tools from the Apache project will help you build your own search index for the assorted documents and data on your network: the Nutch crawler, the Solr search platform, and the Lucene search library.
Nutch [2] is a powerful web crawler, and Apache Solr [3] is a search engine based on Apache Lucene [4]. You can combine Nutch with Solr to create a complete search engine – a miniature Google, if you like.
The Nutch crawler uses HTTP and FTP to discover information. If you want Nutch to inspect your local files, you need to store the files on an HTTP or FTP server and point to the directories you want Nutch to crawl. Nutch fetches the data, which Solr then indexes and searches. Solr is written in Java on top of the Apache Lucene search libraries, and it requires a Java servlet container to run. The Jetty servlet container is included by default, but many users prefer a more robust solution such as Apache Tomcat. (See the "A Note of Caution" box for more information.)
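If your documents reside in a plain directory, any web server will do for a first test. The following sketch assumes the files live in /srv/documents and uses Python 2's built-in web server, which ships with Ubuntu 14.04; both the path and the port are assumptions:

# Serve /srv/documents over HTTP so Nutch can crawl it (test setup only)
cd /srv/documents
python -m SimpleHTTPServer 8000
# Then point Nutch at http://<this-host>:8000/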
A Note of Caution
The crawler indexes data accessible to the daemons associated with the process. Depending on your security system, the search results could be more than you would want non-privileged users to see, so you might need to adjust your configuration to rule out access to highly secure files and directories.
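One way to fence off sensitive areas is an exclude rule in Nutch's conf/regex-urlfilter.txt; the host and path names below are purely hypothetical:

# /opt/nutch/conf/regex-urlfilter.txt (excerpt)
# Skip hypothetical sensitive paths:
-^http://intranet\.example\.com/private/
-^http://intranet\.example\.com/hr/
# Accept everything else:
+.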
This workshop shows how to build your own search engine on an Ubuntu 14.04.2 LTS system.
Installing the Components
On Ubuntu, Solr is available from the package sources; you only need to install Nutch manually (Listing 1, lines 1-4). Then back up Solr's default XML schema and replace it with the file supplied by Nutch (lines 6 and 7).
Listing 1
Installing Solr and Nutch
01 apt-get install solr-tomcat
02 wget http://www.eu.apache.org/dist/nutch/1.9/apache-nutch-1.9-bin.tar.gz
03 tar vfx apache-nutch-1.9-bin.tar.gz
04 mv apache-nutch-1.9 /opt/nutch
05
06 mv /etc/solr/conf/schema.xml /etc/solr/conf/schema.xml.orig
07 cp /opt/nutch/conf/schema.xml /etc/solr/conf/schema.xml
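Before continuing, you can check that the Solr web application came up under Tomcat. The quick query below assumes the package's default Tomcat port of 8080 and a standard solrconfig.xml with the ping handler enabled:

# Quick sanity check; adjust the port if your Tomcat listens elsewhere
curl -s http://localhost:8080/solr/admin/ping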
By default, the server does not save the content of pages or documents it finds; when it re-indexes, it transfers all the contents again. If you want to enable caching, open the /etc/solr/conf/schema.xml configuration file and change the stored="false" entry to stored="true" in the following line:

<field name="content" type="text" stored="false" indexed="true"/>

Then restart Tomcat by typing service tomcat6 restart.
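If you prefer to make the change from the command line, a sed call along these lines would work; the pattern is an assumption based on the field definition shown above, so verify the file afterward:

# Back up schema.xml, then enable storage for the content field
sed -i.bak '/name="content"/s/stored="false"/stored="true"/' /etc/solr/conf/schema.xml
service tomcat6 restart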
Configuring the Nutch Crawler
Although you can control the crawler's default behavior with the /opt/nutch/conf/nutch-default.xml file, it makes more sense to customize the /opt/nutch/conf/nutch-site.xml file with site-specific details.
The example in Listing 2 shows how you can configure the name of the HTTP agent. This name will appear in the web server's logfiles.
Listing 2
nutch-site.xml
01 <?xml version="1.0"?>
02 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
03 <!-- Put site-specific property overrides in this file. -->
04 <configuration>
05 <property>
06 <name>http.agent.name</name>
07 <value>Company Search Agent</value>
08 </property>
09 </configuration>
The nutch-default.xml file contains various settings that control the crawler's behavior. In nutch-site.xml, override file.content.ignored so that Nutch saves the content of the files it fetches instead of discarding it:

<property>
  <name>file.content.ignored</name>
  <value>false</value>
</property>
Additionally, you want Nutch to remove documents from the search engine's database if users have deleted them in the meantime:

<property>
  <name>db.update.purge.404</name>
  <value>true</value>
</property>
The fetcher.server.delay setting keeps the search engine from overloading a server with requests. On a local network, however, with few servers and clients compared with the Internet, the default delay of five seconds between two requests to the same server leaves an unnecessarily large number of threads idle, which slows down the search engine. It makes sense to disable the delay and only re-enable it if problems occur:

<property>
  <name>fetcher.server.delay</name>
  <value>0.0</value>
</property>
Large Documents
On the Internet, it is sometimes useful to index large documents, but you need to be careful not to let the crawler get hung up on a gigantic tome with no useful information. Nutch lets you define per-protocol content.limit parameters that cap the size of the content the crawler processes (Listing 3). You can also limit the length of the document title to achieve a more informative view in the search results; the value is in characters, not bytes:

<property>
  <name>indexer.max.title.length</name>
  <value>150</value>
</property>
Another useful variable, fetcher.threads.fetch, defines the number of threads that fetch content concurrently. Reducing http.timeout shortens the time a thread waits for an unresponsive server before giving up.
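For example, the following nutch-site.xml fragment raises the thread count and shortens the timeout; the values are assumptions chosen for a small local network, not general recommendations:

<property>
  <name>fetcher.threads.fetch</name>
  <value>20</value>
</property>
<property>
  <name>http.timeout</name>
  <value>5000</value> <!-- milliseconds -->
</property>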
Listing 3
File Lengths
01 <property>
02 <name>file.content.limit</name>
03 <value>131072</value>
04 </property>
05 <property>
06 <name>http.content.limit</name>
07 <value>131072</value>
08 </property>
09 <property>
10 <name>ftp.content.limit</name>
11 <value>131072</value>
12 </property>
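With everything configured, a first crawl and index run might look roughly like the following sketch. The seed URL is a placeholder for one of your own servers, the Solr URL assumes the Tomcat setup described above, and the bin/crawl arguments differ between Nutch releases, so check the usage message of your version first:

# Hypothetical first crawl: build a seed list, then run two crawl rounds and hand the results to Solr
mkdir -p /opt/nutch/urls
echo "http://intranet.example.com/" > /opt/nutch/urls/seed.txt
cd /opt/nutch
bin/crawl urls intranetcrawl http://localhost:8080/solr 2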