Protecting your network with the Suricata intrusion detection system
Suricata and libhtp
As you might have noticed, intruders have three primary ways to attack a client system. The most basic is a direct attack, which rarely works for external attackers, because almost everyone is behind a firewall, NAT, or both. Furthermore, virtually all operating systems now include a firewall that is enabled by default, so even if the attacker is on an internal network, the exposure of systems should be minimal. The second main method is email: Simply send a malicious attachment (such as a PDF) that exploits client software in some way, and you can easily hijack a client system for further access. The third technique is a web-based attack. Web-based attacks are especially attractive for a number of reasons: You can serve malicious content from compromised servers, either servers the clients connect to directly or other servers, such as ad servers, metrics servers, or servers specified via cross-site scripting attacks.
Suricata supports the use of libhtp, a "security-aware parser for the HTTP protocol and the related bits and pieces" [6]; however, the web page also says "DO NOT USE IN PRODUCTION." (But then, a lot of software says that and is still used to run critical systems that keep the world going.)
The advantage of libhtp is that it gives you easy access to the HTTP protocol, which makes it simple to do things like log every HTTP request just by editing the suricata.yaml config file (see Listing 1).
Listing 1: Logging Every HTTP Request
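A sketch of the sort of http-log stanza such a configuration involves (in the outputs section of suricata.yaml; option names and defaults can vary between Suricata versions) looks roughly like this:

- http-log:
    enabled: yes
    filename: http.log
    append: yes
    extended: yes   # log extra request details in addition to the URL and hostname

With extended enabled, each line of http.log carries additional request details, which is exactly the sort of context you want when reconstructing what a client was doing.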
Libhtp also allows you to do really nice things like easily match files based on file name, file extension, and file magic (i.e., the file contents are inspected, much as the file command does, to identify what they really are). This is important because many web browsers and operating systems do clever things like open foo.txt as an executable if it is an executable. Libhtp lets you block files that are executables, regardless of what the attacker names them or claims they are.
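For example, a rule along the following lines (the sid is a placeholder, and the filemagic string depends on the output of your libmagic installation) would flag a file served with a .txt extension whose contents identify it as an executable:

alert http any any -> any any (msg:"Executable disguised as .txt"; \
  fileext:"txt"; filemagic:"executable"; sid:1000001; rev:1;)

In inline (IPS) mode, you could change alert to drop to actually block the transfer rather than just logging it.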
Suricata and Files
Enabling file logging and storage in Suricata [7] is straightforward. First you need to disable NIC offloading using ethtool:
ethtool -K p2p1 tso off
ethtool -K p2p1 gro off
ethtool -K p2p1 lro off
ethtool -K p2p1 gso off
ethtool -K p2p1 rx off
ethtool -K p2p1 tx off
ethtool -K p2p1 sg off
ethtool -K p2p1 rxvlan off
ethtool -K p2p1 txvlan off
You'll then need to enable file-store in suricata.yaml:
- file-store:
    enabled: yes       # set to yes to enable
    log-dir: files     # directory to store the files
    force-magic: no    # force logging magic on all stored files
    force-md5: no      # force logging of md5 checksums
    waldo: file.waldo  # waldo file to store the file_id across runs
Alternatively, if you don't want to store the contents of all the files (which would consume a lot of storage space!), you can simply log metadata about each file:
- file-log:
    enabled: yes
    filename: files-json.log
    append: yes
    #filetype: regular  # 'regular', 'unix_stream' or 'unix_dgram'
    force-magic: no     # force logging magic on all logged files
    force-md5: no       # force logging of md5 checksums
I would advise logging the MD5 checksums if you don't keep a copy of the files. Logging checksums will increase the load significantly; however, it allows much better identification of malicious files in postmortems (currently, many antivirus firms use MD5 sums of malware samples for identification).
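As an aside, once force-md5 is enabled, a quick way to pull the recorded checksums out of the log for later lookups is something like the following (this assumes jq is installed and that the log contains one JSON object per line with an md5 field):

jq -r 'select(.md5 != null) | .md5' /var/log/suricata/files-json.log | sort -u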
If you are keeping the files, wouldn't it be better to scan them in as near real time as possible rather than after the fact? Suricata doesn't provide a way (that I can find) to trigger an external program such as ClamAV once a file is logged, but because this is Linux and Linux is awesome, I have two easy workarounds. The first is to watch the logfile:
tail -f /var/log/suricata/files-json.log | logprocessor-program
Have a program process the log lines (which include the file name). A second method is to watch the filesystem directory for activity using inotify and then take action once a file is written; utilities such as pyinotify [8] can accomplish this:
pyinotify -e IN_CLOSE_WRITE -v /suricata/file/dir -c clamav
You'll want to watch only for the file-write-close event (IN_CLOSE_WRITE), because the entire file needs to be present before you scan it.
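If you want more control than the pyinotify command-line tool offers, a minimal Python sketch of the same idea might look like this (the watch directory and the clamscan invocation are assumptions; adjust them to your file-store log-dir and scanner of choice):

#!/usr/bin/env python
# Watch the Suricata file-store directory and scan each completed file.
import subprocess
import pyinotify

WATCH_DIR = "/var/log/suricata/files"  # example path; match your log-dir setting

class Handler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # The file has been fully written, so hand it to ClamAV.
        subprocess.call(["clamscan", "--no-summary", event.pathname])

wm = pyinotify.WatchManager()
wm.add_watch(WATCH_DIR, pyinotify.IN_CLOSE_WRITE)
pyinotify.Notifier(wm, Handler()).loop()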
Suricata TLS/SSL
Logging every HTTP request, cookie, and file downloaded is great, but more and more of the web is encrypted HTTPS. In fact, as I write this, Google is changing its search ranking algorithms to favor HTTPS-encrypted sites, and if there's one thing that will make a lot of websites take action, it's anything that lets them improve their search rankings.
So first the bad news: Suricata doesn't really have any way to act as a transparent TLS/SSL man-in-the-middle proxy. More bad news: It wouldn't really matter, because web browsers are increasingly taking defensive measures, such as TLS/SSL certificate pinning, to prevent exactly these types of man-in-the-middle interceptions of TLS/SSL. I suspect two things will happen around TLS/SSL interception: First, browsers (especially Google Chrome) will add increasing levels of protection to prevent any interception or to alert users when it is taking place. Second, organizations will move to ways of intercepting TLS/SSL traffic legitimately without resorting to a man-in-the-middle attack, the simplest of which is simply requiring users to configure an HTTPS proxy in their web browsers. I prefer this approach because it generally ensures that either the user is informed or their system is under central control, making it likely that the monitoring is legitimate and not a hostile organization attacking you.
The good news is that Suricata has a number of features to help deal with TLS/SSL traffic. The first and simplest is logging; you can (and probably should) log all TLS/SSL certificates easily in the suricata.yaml file with:
- tls-log:
    enabled: yes
    filename: tls.log
    extended: yes
You can also alert on specific certificates using their SHA1 fingerprint, information in the subject field (such as the common name for the site), and the IssuerDN field (the name of the issuing authority). For example, if you use an internal certificate authority and want to track usage of it (to ensure no weird abuse happens), you can track any certificates signed using it via a rule such as:
alert tls any any -> any any (msg:"forged internal CA certificate"; \
  tls.subject:"CN=*.example.org"; tls.issuerdn:!"CN=Internal CA"; \
  sid:8; rev:1;)
To get the information about the certificate, assuming you have imported it into Firefox (or Chrome, which uses NSS), simply use the NSS certutil program to list it,
certutil -d sql:$HOME/.pki/nssdb/ -L
and then display the certificate details using the nickname shown by the preceding command:
certutil -d sql:$HOME/.pki/nssdb/ -L -n "Internal CA"
You can also find this information via the GUI in web browsers.
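If you have the certificate as a PEM file rather than in an NSS database, openssl will print the same fields (the filename here is just an example):

openssl x509 -in internal-ca.pem -noout -subject -issuer -fingerprint -sha1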
Also, as I mentioned in a previous article [9], the average web browser trusts more than 100 root certificate authorities (currently 170 in Mozilla products [10]). Then, you'll have several hundred to several thousand intermediate signing authorities (the root certificate authority signs a certificate that can, in turn, be used to sign more certificates). The truth is, nobody knows exactly how many intermediate certificate authorities there are. Using the fingerprint and IssuerDN alerts, you can create rules to alert you to any certificates signed by authorities you might not want to trust, such as foreign governments.
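For example, a rule along these lines (hypothetical sid and issuer name) would alert on any certificate issued by a CA you have decided not to trust:

alert tls any any -> any any (msg:"certificate from untrusted CA"; \
  tls.issuerdn:"CN=Untrusted Example Root CA"; sid:9; rev:1;)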