The professional filesystem ZFS


Article from Issue 189/2016

ZFS is a first-class filesystem for big iron, but for various reasons, it is still waiting for widespread Linux adoption.

With over 15 years of development behind it, ZFS [1] is one of the most mature of the current generation of Unix-like filesystems. ZFS was originally developed by Sun Microsystems for the Solaris operating system and was first released in 2005. Sun initially intended ZFS as a closed-source, proprietary filesystem for high-end Solaris storage environments.

When Sun open-sourced Solaris in 2005 with the OpenSolaris project, ZFS went with it. Oracle acquired Sun in 2009 and, in 2010, Oracle declared it was returning ZFS to a closed-source development model. Thanks to the beauty of open source licensing, the ZFS community continued to develop and maintain the open source version of ZFS. The umbrella project for ZFS development is now known as OpenZFS.

The open version of ZFS is licensed under the Common Development and Distribution License (CDDL) [2]. The CDDL is recognized as a Free and Open Source license by the Free Software Foundation and the Open Source Initiative, but the CDDL has limited copyleft protection and is thus considered incompatible with the Linux kernel's GPL. Because of this incompatibility, it isn't easy for developers to integrate ZFS directly with the Linux kernel. Although various workarounds are possible, the license incompatibility has slowed ZFS adoption in Linux. As you will learn in this article, ZFS is typically implemented as a separate add-on module or as a Filesystem in Userspace (FUSE) [3] in Linux environments.


ZFS was the first of a new generation of filesystems built for our era of large disk drives and inexpensive storage. Until the end of the 1980s, the extremely high cost of storage space meant that most popular filesystems were designed for economy. Storage capacities for IDE and SCSI hard drives continued to grow through the beginning of the 1990s, and conventional filesystems were increasingly pushed to their limits. Larger companies with their own IT infrastructure and data centers full of large storage clusters called for better data security, and this additional protection was primarily achieved through data mirroring.

At the same time, the concept of volume management emerged: With the help of a logical volume manager (LVM), several physical disks are combined into a single logical volume, thus overcoming the capacity limits of individual mass storage devices. However, the LVM was frequently combined with the old filesystems, meaning that their restrictions also limited the performance of the combined storage.

As the complexity of the overall mass storage subsystem grew, so too did the administrative overhead and the risk of data integrity errors caused by transmission and memory faults. In the early years, data mirroring also required outrageously expensive additional hardware, usually offered in the form of plug-in cards for high-end systems.

These boards normally supported both data mirroring and the distribution of data across several physical mass storage devices, making it possible to achieve a significant speed gain when retrieving data through parallel access.

The ZFS developers took all of these weaknesses into account and integrated functions into the filesystem that had previously only been possible with external solutions. ZFS is therefore not a filesystem in the original sense of the word, but rather a combined solution: ZFS integrates its own volume manager and creates a storage pool that automatically manages the underlying mass storage devices.

The individual pools automatically adjust their size as soon as the total capacity changes, such as when a new physical mass storage device is added to the system. ZFS performs the modification of the pools transparently, meaning you won't need to do any manual administrative work [4]. ZFS also automatically creates redundancies in order to improve data security.
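The pool concept described above is managed with ZFS's two administration tools, `zpool` and `zfs`. A minimal sketch of creating and later growing a pool follows; the pool name `tank` and device names such as `/dev/sdb` are placeholders for your own setup, and the commands assume a system with ZFS installed and root privileges:

```shell
# Create a mirrored pool named "tank" from two disks;
# ZFS creates and mounts the filesystem /tank automatically
zpool create tank mirror /dev/sdb /dev/sdc

# Later, grow the pool by adding another mirrored pair;
# the extra capacity becomes available immediately, with no resizing step
zpool add tank mirror /dev/sdd /dev/sde

# Check the capacity and health of the pool
zpool list tank
zpool status tank
```

Note that there is no separate partitioning, formatting, or fstab step: the pool and its filesystems are one integrated unit.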

Using snapshots, you can capture a defined state of the system and later reconstruct that state in the event of a failure. Because ZFS is built on a copy-on-write design, creating a snapshot is nearly instantaneous and works transparently in the background while the filesystem is active. The filesystem also ensures economical use of the available space thanks to integrated data compression, which can lead to significant resource savings, depending on the type of data.
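Snapshots and compression are both handled with the `zfs` tool. A short sketch, assuming a dataset named `tank/home` (a placeholder for your own dataset):

```shell
# Take a snapshot of the dataset tank/home
zfs snapshot tank/home@before-upgrade

# List existing snapshots
zfs list -t snapshot

# Roll the dataset back to the snapshot state
zfs rollback tank/home@before-upgrade

# Enable transparent compression (lz4 is a common choice)
# and check how well the data compresses
zfs set compression=lz4 tank/home
zfs get compressratio tank/home
```

Thanks to copy-on-write, the snapshot initially occupies almost no space; it only grows as the live dataset diverges from it.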

The integrity check is another innovation of ZFS: ZFS ensures end-to-end data integrity by storing checksums for all blocks. The filesystem even provides mechanisms for self-healing: If the checksums reveal that one copy of redundantly stored data is corrupt, ZFS repairs the error using an intact copy. As a user, you can also trigger a manual test run (a scrub) so that data integrity is guaranteed at all times. These mechanisms eliminate the need for costly and time-consuming offline filesystem checks, which can take days on large storage arrays.
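The manual integrity check mentioned above is started with `zpool scrub` and runs online, while the pool remains in use (pool name `tank` is again a placeholder):

```shell
# Start a manual integrity check (scrub) of the pool;
# the pool stays mounted and usable while it runs
zpool scrub tank

# Watch progress and see whether errors were found and repaired
zpool status tank
```

On a redundant pool, any block whose checksum fails during the scrub is rewritten from an intact copy automatically.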

Last but not least, the developers have tuned ZFS for high operating speeds: The filesystem provides significant speed increases during data transfer thanks to various cache levels, both in main memory (the adaptive replacement cache, ARC) and on dedicated cache devices (cache vdevs, known as the L2ARC). A system failure does not put your data at risk, because both cache levels only hold copies of data that already resides safely in the pool.
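An L2ARC cache device is attached to an existing pool with a single command. A sketch, assuming a fast SSD at `/dev/nvme0n1` (a placeholder device):

```shell
# Add a fast SSD as an L2ARC cache device (cache vdev) to the pool
zpool add tank cache /dev/nvme0n1

# The cache vdev now appears in the pool layout
zpool status tank

# A cache device can be removed again at any time without data loss,
# because it only ever holds copies of data already in the pool
zpool remove tank /dev/nvme0n1
```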

The Btrfs filesystem implements some of the advanced features pioneered by ZFS with a licensing model that is more compatible with Linux. (See the article on Btrfs elsewhere in this issue.) But Btrfs is relatively new to the mainstream and was only declared stable in 2013. ZFS, on the other hand, has been in active use for years.


ZFS was designed for servers and data centers, and it needs plenty of resources: a 64-bit processor and sufficiently large working memory. A common rule of thumb is to provide 1GB of RAM for every terabyte of capacity. You can use ZFS with fewer hardware resources, but you won't be able to take full advantage of its benefits.

ZFS works with 128-bit-wide pointers and can manage enormous storage capacities: The maximum capacity of a ZFS filesystem is 16 exabytes, and the maximum size of a single file is the same. The hardware industry is unlikely to exhaust such capacities in the near future. Despite its origins as a filesystem for huge storage systems, ZFS is sometimes a competitive option for home servers, especially if you're using a recent computer with multiple hard disks or SSDs.

In Linux

So far, ZFS is widely available on Unix-like operating systems, especially in various BSD derivatives. It is, for example, integrated in FreeBSD from version 8.0 as a stable filesystem, and it is the default filesystem in PC-BSD. In Linux, the copyright and licensing problems interfere with the direct integration of ZFS with the kernel.

ZFS on FUSE [5] is a popular option that avoids the licensing problems associated with Linux kernel integration. Like other FUSE systems, ZFS on FUSE operates ZFS in userspace. You can even use ZFS on FUSE to run ZFS on a 32-bit system. However, ZFS on FUSE only offers an obsolete ZFS version; development was discontinued some time ago, although you can still find binary packages in the repositories of many distributions. Of course, putting ZFS in userspace comes at the cost of some speed and stability.

The ZFS on Linux project, which provides ZFS as a separately compiled kernel module, has proved much more sustainable. ZFS on Linux uses a newer version of ZFS that ensures pool compatibility with Solaris 10, FreeBSD, and OpenSolaris. The software has been regarded as stable since version 0.6.3 and can therefore be used in production environments. Pre-compiled packages exist for various Linux distributions [6].
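On Ubuntu 16.04 and later, for example, the ZFS on Linux packages are available in the standard repositories; package names vary between distributions, so treat the following as a sketch for Ubuntu:

```shell
# Install the ZFS userland tools; on Ubuntu 16.04+ this also
# pulls in the kernel module
sudo apt install zfsutils-linux

# Load the module and verify that it is present
sudo modprobe zfs
modinfo zfs | head
```

On other distributions, the module is typically built locally via DKMS when the package is installed.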

Canonical made an attempt to boost the adoption of ZFS on Linux in the winter of 2016. In an announcement from mid-February, the company behind Ubuntu said that, as of the Ubuntu 16.04 LTS "Xenial Xerus" release, the ZFS filesystem would ship as a kernel module. Canonical planned to use a kernel module from OpenZFS that had been under development since 2013.

But because this kernel module is also under the CDDL, the announcement provoked massive opposition from the community, as well as from Linux kernel developers. The Free Software Foundation (FSF) also expressed concerns about this step. Canonical, however, believes the integration of ZFS in Linux is long overdue: ZFS's security and enormous capacity make it an ideal filesystem for cloud environments and clusters.

