Using Wake-on-LAN for a NAS backup
Power Saver
Put your backup server to sleep when you don't need it and then wake it on demand using the Wake-on-LAN feature built into network adapters.
After recently upgrading my main storage server bit by bit, I found myself with a pile of parts that basically added up to another storage server. I had a capable Network Attached Storage (NAS) solution for files and media, but I needed a second server running on-site as a backup for the main NAS. What I did not want, however, was to pay for a second server running 24/7 gobbling up energy. This article explains how to set up a pair of servers in a primary/backup configuration so that the backup will synchronize itself with the primary server each day, week, month, or however often you like. Once the two are synced, the backup server will turn back off until needed again, thus saving most of the energy costs. This approach has an added benefit in the event of a ransomware encryption attack, because the backup server will most likely be turned off at the time of the attack, making it more likely to escape encryption.
Hardware Configuration
The two servers I describe in this article are used in a home lab and are non-critical. If you are using servers for a business, school, or professional agency, you will have different needs and, ideally, a more expansive budget that could point you toward a different solution. This article is intended as a proof of concept – and as a way to explore some of the tools available in the Linux environment. This basic approach might very well be feasible for other secondary or off-site backups with slight modifications.
The main NAS server has six 6TB HGST SATA drives in a RAIDZ2 pool with approximately 24TB of usable storage space (Figure 1). The pool can lose two drives and still retain all of the data, but obviously doing this means 33 percent of the disks' raw space is unavailable. Being in a RAID array means that there is redundancy, but redundancy is not the same as a backup. Both servers have 10Gb networking, and the primary NAS runs a Proxmox virtual environment [1], which uses SMB to share to the backup.
The backup NAS has 22TB of disks in total, consisting of five drives of varying sizes with no redundancy whatsoever, just a bunch of disks. These disks are all mounted together at startup with mergerfs [2] to appear as one single large drive to the system, while still giving direct access to each individual disk. For each file, mergerfs automatically chooses which disk to use based on the space available on each member of the merged array, per a mergerfs create policy suitable for this use case. The backup server is bare bones, as dictated by the use case, and runs Ubuntu Server 22.04 LTS [3].
Networking is an important aspect of any multi-node storage system, even simple ones such as this, and can oftentimes be a bottleneck. Because this is in a home lab, the networking here is not complex: Each server connects directly to a pfSense firewall appliance with a 10Gb port (Figure 2). However, daisy-chaining is not suggested in this case, because it would mean that a failure of the primary server would restrict access to the backup server or vice versa, defeating the purpose of having a backup in the first place. With this particular hardware, 10Gb is perfect, because the primary NAS can see max write speeds of about 260MB/s and read speeds of up to about 500MB/s in real-world use with the backup maxing out around only 170MB/s. Plenty of bandwidth exists for the sync and for the primary NAS to continuously operate as a storage server without needing to use multiple 10Gb connections.
The Commands and Process
One requirement is that the backup NAS server must have Wake-on-LAN (WoL) capability on one of its connected network cards. Apart from this capability, WoL needs to be turned on in the UEFI or BIOS and needs to have a system service enabled in Ubuntu to actually function. The UEFI or BIOS settings needed are oftentimes not enabled from the factory, and the location and wording of WoL settings in a system's UEFI or BIOS varies by manufacturer, model, network interface card (NIC), and even BIOS revision. There is normally more than one setting that needs to be changed for WoL to work properly. Consult the mainboard manufacturer's documentation to see how to turn on WoL for your system.
In my use case, S5 energy savings was disabled, the remote wake-up source was set to the hard drive, PCIe power management was disabled, and WoL was enabled for the onboard gigabit NIC. While the gigabit NIC is used for waking the server up, it doesn't get an IP address or perform any function other than acting as a power button for the server. The 10Gb NICs were used for file transfers here, but because they are not WoL-capable, the onboard card still needs to be connected to the same network. Your gear will vary, but this is something to keep in mind. On the backup server, a service was created that would allow for remote wake-up for the following boot.
To set up the WoL service, you first need to install ethtool [4] with
sudo apt update && sudo apt install ethtool
To find out your network adapter's name, use the command ip a. To see in-depth information on your NIC, use the command ethtool <NIC name>; ethtool determines whether WoL is available for your NIC. Look for something similar to this:
Supports Wake-on: pumbg
Wake-on: d
which shows that WoL is available, as denoted by the g at the end of pumbg, but not enabled, as shown by the d following Wake-on. Listing 1 shows the full output from both of these commands.
Listing 1
Getting NIC Information Output
adam@ubuntuserver:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:5f:f4:67:df:90 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.42/20 brd 192.168.15.255 scope global eno1
       valid_lft forever preferred_lft forever
adam@ubuntuserver:~$ ethtool eno1
Settings for eno1:
    Supported ports: [ TP ]
    Supported link modes:   10baseT/Half 10baseT/Full
                            100baseT/Half 100baseT/Full
                            1000baseT/Full
    Supported pause frame use: Symmetric Receive-only
    Supports auto-negotiation: Yes
    Supported FEC modes: Not reported
    Advertised link modes:  10baseT/Half 10baseT/Full
                            100baseT/Half 100baseT/Full
                            1000baseT/Full
    Advertised pause frame use: Symmetric Receive-only
    Advertised auto-negotiation: Yes
    Advertised FEC modes: Not reported
    Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                         100baseT/Half 100baseT/Full
                                         1000baseT/Full
    Link partner advertised pause frame use: Symmetric
    Link partner advertised auto-negotiation: Yes
    Link partner advertised FEC modes: Not reported
    Speed: 1000Mb/s
    Duplex: Full
    Auto-negotiation: on
    Port: Twisted Pair
    PHYAD: 1
    Transceiver: internal
    MDI-X: on (auto)
    Supports Wake-on: pumbg
    Wake-on: d
    Current message level: 0x00000007 (7)
                           drv probe link
    Link detected: yes
To enable WoL one time, use the command
/sbin/ethtool --change eno1 wol g
However, this setting does not survive a reboot, so you need to create a service that ensures WoL is always enabled. Create the service file with
sudo nano /etc/systemd/system/wakeup.service
Copy Listing 2 into the newly created empty service file, changing <NIC name> appropriately, as determined by the output of the ip a command. Save and close this new file using Ctrl+x followed by Y and then Enter.
Listing 2
Making WoL Always Available
[Unit]
Description=Allow WoL

[Service]
Type=oneshot
ExecStart=/sbin/ethtool --change <NIC name> wol g

[Install]
WantedBy=basic.target
You now need to enable the service by running the following commands one after another:
sudo systemctl daemon-reload
sudo systemctl enable wakeup.service
sudo systemctl start wakeup.service
sudo systemctl status wakeup.service
After running the last status command, you should see that the service is enabled. Your backup server should now be able to be awakened with a "magic packet," a specific type of packet sent over the network to a specific MAC address, after the server is turned off. This is the key to having a backup server that doesn't need to run constantly.
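The magic packet itself has a very simple structure: six bytes of 0xFF followed by the target MAC address repeated 16 times. The following sketch builds that payload by hand to show what a tool such as wakeonlan sends for you (the MAC address is the backup server's onboard NIC from this article; the variable names are just for illustration, and actually transmitting the payload as a UDP broadcast is left to wakeonlan):

```shell
#!/bin/sh
# Build a Wake-on-LAN "magic packet" payload as a hex string:
# 6 bytes of 0xFF, then the target MAC repeated 16 times (102 bytes total).
MAC="00:5f:f4:67:df:90"
HEX=$(echo "$MAC" | tr -d ':')      # 12 hex characters = 6 bytes
PAYLOAD="ffffffffffff"              # the 6-byte synchronization stream
i=0
while [ "$i" -lt 16 ]; do
    PAYLOAD="$PAYLOAD$HEX"
    i=$((i + 1))
done
echo "payload: $(( ${#PAYLOAD} / 2 )) bytes"   # prints: payload: 102 bytes
```

Because the packet is addressed to a MAC rather than an IP, it works even though the sleeping server holds no IP address, which is exactly why the otherwise idle onboard NIC can act as a power button.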
Now you need to add a small shell script to the backup server, as well as a cron job that runs the script. I will create the script in the backup server's /usr/sbin directory using
sudo nano /usr/sbin/syncscript.sh
and then adding the code in Listing 3 to that file. Note: You will need to customize the /mediapool/ and /media/virt paths and the /mediapool/synclog.txt logfile location for your particular use case. Then, save and close the file using Ctrl+x followed by Y and then Enter.
Listing 3
syncscript.sh
#!/bin/bash
# Pause for 15 seconds
sleep 15
# Sync directories
rsync -aH --delete /mediapool/ /media/virt
# Write the current date to a file
date >> /mediapool/synclog.txt
# Pause for 5 minutes in case manual intervention is needed
sleep 300
# Shut down the computer
poweroff
You now need to make the script executable by running
sudo chmod +x /usr/sbin/syncscript.sh
Otherwise, the script won't be able to run its contents and perform the needed actions.
This simple script has a few steps to it. First, the script pauses for 15 seconds after booting to ensure that mergerfs has had a chance to run and merge the disks that make up the filesystem, as mentioned above. You may not need this 15-second pause in your use case, but it doesn't hurt to have it in case some service or function needs to complete before the synchronization process.
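If you would rather not guess at a fixed delay, you can poll until the filesystem is actually mounted. This is a sketch, not part of the article's script: the wait_for_mount helper and the retry cap are assumptions, and in real use you would pass /media/virt rather than the / used in the demo line.

```shell
#!/bin/sh
# Sketch: wait for a path to become a mount point instead of sleeping a
# fixed 15 seconds. Uses mountpoint(1) from util-linux.
wait_for_mount() {
    path=$1; cap=${2:-60}           # give up after $cap one-second tries
    i=0
    until mountpoint -q "$path"; do
        [ "$i" -ge "$cap" ] && return 1
        sleep 1
        i=$((i + 1))
    done
}
# Demo against / (always a mount point), so this succeeds immediately
wait_for_mount / 5 && echo "ready: /"
```

In the real script, `wait_for_mount /media/virt || exit 1` before the rsync line would abort the sync rather than copy into an unmounted directory.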
Next, the server will synchronize the directories that you specify. In this case, it syncs the /mediapool/ directory from the main NAS server with the backup server's /media/virt directory. The /media/virt directory is where the merged disk filesystem is mounted, but you can sync any directory. The /mediapool/ directory from the main server is mapped in the fstab file on the backup server so that it mounts as soon as the backup boots. To achieve this, you will need to edit the /etc/fstab file with
sudo nano /etc/fstab
Once opened in nano, add the following line to the end of the fstab file:
//192.168.20.40/myshare /mediapool cifs guest,uid=1000,iocharset=utf8,_netdev 0 0
This line will differ depending on your network file share location and on guest and user permissions; it may require a credentials file or a username and password if the share does not allow guest connections as mine does. You will also need to change the server address and the directory where you intend to mount the file share to suit your needs. Be very careful when making changes to the fstab file, because an error here can prevent your system from booting properly and may require a recovery boot to fix from the command line. Read up on filesystem table (fstab) entries if this is your first time changing the filesystem table. Once the entry is saved, reboot to verify that mounting occurs each time the backup server boots.
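If your share does not permit guest access, a credentials file keeps the password out of the world-readable /etc/fstab. The following is a sketch: the /root/.smbcredentials path and the account values are hypothetical placeholders, while the server address and share name match the line above.

```
# /root/.smbcredentials (restrict it with chmod 600) -- placeholder values
username=backupuser
password=changeme

# /etc/fstab entry referencing the credentials file instead of guest access
//192.168.20.40/myshare /mediapool cifs credentials=/root/.smbcredentials,uid=1000,iocharset=utf8 0 0
```

See the mount.cifs man page for the full list of supported options.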
It is crucial to note that using the -aH --delete flags in the synchronize line means that recursion will take place and hard links will be preserved, but any files on the receiving side that do not exist on the sending side will be removed from the receiving server. Use these flags very carefully and test first! See the rsync man page [5] for more information on the available flags, because a different set may better suit your use case.
After the main server and backup are synchronized, the date and time are written to a text file, synclog.txt, which is kept on the primary NAS but written by the backup server. You may change its name and location or remove it altogether to suit your needs. All synclog.txt does is allow a user to quickly and easily check whether the sync occurred and, if so, when it completed. The more data added to the primary NAS between backups, the longer the sync will take, which you will notice in the varying completion times recorded in synclog.txt.
The backup server will then sleep for five minutes (300 seconds). This lag time is intentionally added so that there is a buffer that enables a user to log on for any number of reasons before the power is turned off in the final step. This value may be changed as needed, but it is not recommended to leave this step out of the script. After this lag time has elapsed, the backup server finally shuts down with the poweroff command.
One last thing to note is the trailing / at the end of /mediapool/ in the following code:
rsync -aH --delete /mediapool/ /media/virt
That / is important because it tells the rsync command to copy only the contents of the /mediapool directory into /media/virt and not to copy the directory itself. If the / were missing, the contents of the /mediapool directory would all reside in /media/virt/mediapool on the backup server. Including the trailing / means that the contents are written directly to the /media/virt directory, which in this case is where the mergerfs array is mounted. This is a small difference, but it can be important (or annoying).
Now that the sync script has been added and permissions changed so it can be run, you need to add the cron job so the script runs after the backup server boots. On the backup server in the terminal, enter
sudo crontab -e
This command will open the file where cron jobs are added. Cron jobs are set to occur at a user-defined time, such as a specific time, date, or day of the week, or after certain events. I will use the @reboot directive rather than a specific time for this cron job because I want it to run every time the backup server boots. Add a line to the very bottom of the crontab file as follows:
@reboot /usr/sbin/syncscript.sh
Now use Ctrl+x followed by Y and then Enter to save the crontab file if you are using the nano editor. Note that the first time you add a cron job, you will be asked which editor you would like to use. Press Enter at the prompt to continue using nano as you have done in previous steps. If you would like to view but not edit the crontab file, you can use
sudo crontab -l
to make sure that your changes have been saved properly. Also note that cron jobs can be set by regular users without sudo or by the root user with sudo. All of the commands here are run as root; however, that may be problematic or insecure for your use case, so it may make more sense to run them as a non-root user.
Initiating a Backup
To begin the process, I need something to send a magic packet to the backup server to wake it up. In this case, the packet is sent to the NIC in the backup server with MAC address 00:5f:f4:67:df:90, which wakes the device after it has been turned off. I will use a utility aptly named wakeonlan [6] to send the magic packet from the primary NAS server, and I will add a cron job to the primary server so that it sends the magic packet once a day (you can change the frequency and time to suit your needs). On the primary NAS, enter the following command in the terminal:
sudo apt update && sudo apt install wakeonlan -y
This installs the needed utility for easily sending magic packets. Now, on the primary NAS, use the same crontab command to create a new cron job (see Figure 3):
sudo crontab -e
This time, I will add the following command to the crontab file:
55 03 * * * wakeonlan 00:5f:f4:67:df:90
This cron job will send the magic packet to your backup server's NIC with MAC address 00:5f:f4:67:df:90 at 03:55 AM. If needed, the Crontab Guru website [7] can help you correctly set the time and frequency of your cron job. In my use case, the 03:55 AM start time makes sense because the primary NAS is used for personal files, programs, and media; all of the users are in the same time zone and typically access those files during the day; and syncs normally take less than an hour. You will need to decide what frequency and timing is best for syncing in your use case. It may make sense to run the sync twice a day (or more) or perhaps only weekly. Furthermore, the amount of time a sync takes may dictate your backup schedule to an extent: A seven-hour sync job shouldn't be initiated every six hours. If the data is that critical and the sync takes that long, it may simply make sense to keep the backup server on 24/7 and sync in real time. Some careful consideration and planning is needed here.