Your NAS isn't enough – you still need to back up your data!

Power Failure

Modern filesystems are moderately resistant to power failure, but even the mighty ZFS can suffer from a blackout [9]. A UPS will help, but beware of cheap units: Many budget domestic UPSs are not built for continuous operation and will wear out, eventually bringing down the NAS with them. According to a 2016 Ponemon Institute survey, UPS failure is the top cause of unplanned data center outages [10]. In practice, blackout protection reduces the risk of data loss from power failure, but it does not remove the threat entirely.
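One practical mitigation is to shut the NAS down cleanly before the battery runs out. The sketch below shows the core decision, based on the status flags reported by Network UPS Tools (NUT), where "OL" means on line power, "OB" on battery, and "LB" low battery; the function name and polling setup are illustrative assumptions, not part of NUT itself.

```python
# Sketch: decide when to shut a NAS down cleanly, using NUT's
# ups.status convention ("OL" = online, "OB" = on battery,
# "LB" = low battery). The function name is hypothetical.

def should_shutdown(ups_status: str) -> bool:
    """Return True when the UPS is on battery and the battery is low."""
    flags = ups_status.split()
    return "OB" in flags and "LB" in flags

# A monitor would poll something like `upsc myups@localhost ups.status`
# and feed each reading to this check.
print(should_shutdown("OL"))     # mains power: keep running
print(should_shutdown("OB"))     # on battery, still charged
print(should_shutdown("OB LB"))  # battery low: shut down now
```

In a real deployment you would let NUT's own `upsmon` daemon handle this, but the logic it applies is essentially the same.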

In enterprise scenarios, administrators know that hardening a single NAS is not enough to guarantee true high availability. In practice, enterprises use Storage Area Networks (SANs) or distributed filesystems such as Ceph [11]. These tools are deployed across computer clusters, so that if one server goes down, the rest of the cluster remains operational.

The minimal (and, for serious purposes, insufficient) storage cluster is shown in Figure 7. This is known as a Primary-Replica topology: the primary serves the clients, while the replica's contents are periodically synchronized with the primary's. Should the primary go down, the load balancer promotes the replica, turning it into the new primary (Figure 8).

Figure 7: A naive high-availability cluster. A load balancer directs all traffic to a file server designated as the primary. The file server designated as a replica contains a copy of the primary's contents.
Figure 8: If the primary server goes offline, the replica is promoted to primary and the traffic is transferred to it.
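The promotion step in Figures 7 and 8 can be sketched in a few lines. This is a toy model, not a real load balancer: the class, server names, and health-check callable are all illustrative assumptions.

```python
# Toy model of Primary-Replica failover: the load balancer routes
# traffic to the primary and promotes the replica when a health
# check on the primary fails. Names are illustrative.

class LoadBalancer:
    def __init__(self, primary: str, replica: str):
        self.primary = primary
        self.replica = replica

    def route(self, healthy) -> str:
        """Return the server that should receive traffic.

        `healthy` is a callable standing in for a real health check
        (an HTTP probe, a heartbeat, etc.).
        """
        if not healthy(self.primary):
            # Promote the replica; it becomes the new primary.
            self.primary, self.replica = self.replica, self.primary
        return self.primary

lb = LoadBalancer("fileserver-a", "fileserver-b")
print(lb.route(lambda s: True))                 # fileserver-a
print(lb.route(lambda s: s != "fileserver-a"))  # fileserver-b (promoted)
```

Real implementations add the hard parts this sketch omits: detecting failures reliably, avoiding split-brain, and resynchronizing the old primary when it comes back.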

The Cloud Option

Real-life high-availability systems are not something you are likely to be able to run at home: They typically feature redundant load balancers and may have some Border Gateway Protocol (BGP) magic thrown in. Even the naive and simple setup I just described more than doubles the cost of the storage, because it requires a redundant server and a load balancer (at which point you are likely to need a server rack in a server room).

It is therefore no surprise that many users, especially small businesses, turn to professional storage vendors, who offer cloud storage for a fee and take care of keeping the storage systems perpetually available. Professional storage vendors can also be very cost effective. For example, cloud storage might cost you around $1,500 over four years, which is less than what you are likely to spend on a good NAS. As I assume a NAS is likely to need an upgrade around the fourth year, the cloud option is not entirely unreasonable. Sadly, storage vendors come with their own issues: Uploading your data to them can take much longer than uploading it to a local server, and some vendor environments might present privacy concerns.
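The back-of-the-envelope arithmetic behind that comparison is worth making explicit; the dollar figures below are the illustrative assumptions from the text, not real vendor prices.

```python
# Rough cost comparison from the text: ~$1,500 of cloud storage over
# four years versus a one-off NAS purchase. Illustrative numbers only.

CLOUD_TOTAL = 1500          # USD over four years (assumption)
MONTHS = 4 * 12             # the four-year horizon in months

monthly = CLOUD_TOTAL / MONTHS
print(f"Cloud works out to about ${monthly:.2f}/month")  # $31.25/month
```

If your NAS (disks included) costs more than that total and needs replacing on roughly the same cycle, the cloud option breaks even or better.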

Humans and Software

Even if you were to assume that your chosen storage solution is completely indestructible, it would still not eliminate the need for a proper backup system. If you delete a file by mistake, or lose it to a software bug or malware, it makes no difference whether it was stored on a regular laptop, a high-end NAS, or a cloud storage provider. Experience shows that human mistakes force you to restore from backups far more often than hardware failures do. Certain storage vendors know this and keep a historical registry of every file uploaded to them, so you can retrieve an old version of a file if you discover you have uploaded a corrupt copy or deleted something important by accident. In effect, the vendor is running a backup policy for you.
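The "historical registry" idea boils down to never overwriting a file in place: each upload appends a version, and any older version stays retrievable. The sketch below is an in-memory toy model under that assumption; real vendors persist the history and prune it by policy.

```python
# Toy model of vendor-side file versioning: every upload is appended
# to the file's history, so older versions survive a bad overwrite.

class VersionedStore:
    def __init__(self):
        self.history = {}  # path -> list of contents, oldest first

    def upload(self, path: str, data: bytes) -> None:
        """Store a new version instead of overwriting the old one."""
        self.history.setdefault(path, []).append(data)

    def restore(self, path: str, version: int = -1) -> bytes:
        """Fetch a version; -1 is the latest, -2 the one before, etc."""
        return self.history[path][version]

store = VersionedStore()
store.upload("report.txt", b"good draft")
store.upload("report.txt", b"corrupted!")      # oops: bad overwrite
print(store.restore("report.txt", version=-2))  # recovers b'good draft'
```

This is exactly why versioned cloud storage doubles as a last-resort backup against human error, though not against losing the vendor itself.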
