I just purchased an iXsystems TrueNAS Mini X to replace my five-year-old Synology RAID box. There were a lot of reasons for this; here are the biggest ones.
- TrueNAS is based on FreeBSD; Synology is based on Linux. In my experience, FreeBSD is a more stable and better-designed OS.
- TrueNAS supports the ZFS file system; Synology doesn’t. ZFS is a complex file system, but it has a lot of advantages compared to ext4 and other file systems standard on Synology, which I’ll explain below.
- TrueNAS has excellent support for running additional servers in jails (separate environments isolated from the main server).
- TrueNAS has excellent support for standard Unix tools.
- TrueNAS is open source.
- My TrueNAS box uses ECC memory; my Synology didn’t. Memory errors can be a significant issue for storage systems, where correctness is the whole point.
FreeBSD jails are a mechanism to provide a separate name space for sets of processes running under a single FreeBSD operating system instance. The guiding principle is that, absent a bug in the OS, it’s impossible to do something with an object you can’t name. For example, a jail has its own set of process identifiers; it can’t name processes external to the jail, so it can’t do anything to them. Similarly, each jail has its own file system; “external” directory trees must be manually added into the jail to make them accessible from internal processes. Users and groups are, similarly, internal to a jail, so root within a jail has “absolute” power only over the files and processes it can name—those accessible from within the jail.
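This isolation is easy to see from the host on any FreeBSD system. A quick sketch (the jail name `demo` is hypothetical; on TrueNAS, jails are normally managed through the web UI or iocage):

```shell
# List running jails from the host: JID, IP address, hostname, root path.
jls

# Run a command inside a jail from the host.
jexec demo ps aux        # shows only the processes visible inside the jail

# Inside the jail, host process IDs simply don't exist:
jexec demo kill -0 1234  # "No such process" unless PID 1234 is in the jail
```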
ZFS (Zettabyte File System) was originally written by Sun for Solaris, but has since been ported to FreeBSD (and Linux). The Wikipedia page has far more detail than I’m going to include here, so I’ll just describe why you want it for a small-scale file server. First, ZFS does a great job isolating individual parts of your storage system. You can create a single storage pool, and then easily carve datasets out of it. This allows you to isolate a single dataset from others on the storage system, dynamically moving space between them as needed. You can set policies such as compression and deduplication on an individual dataset basis, and you can easily set quotas (maximum space) and reserved minimum space on a per-dataset basis. When used with jails, datasets become even more powerful, since you can easily allow a jail access to some datasets but not others, restricting the ability of a jailed process to modify data it shouldn’t. For example, I have a jail that runs Borg backups, which only has access to the dataset storing the jail itself and to the Borg backup dataset. A bug in the backup process can only affect Borg data, not other stored data. I have a personal jail with access to my datasets (home directory, photos and videos) but not others on the system; someone who gains access to it can’t harm the overall system, only the data visible inside the jail.
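As a sketch of what this looks like in practice (the pool name `tank` and the dataset names are hypothetical):

```shell
# Carve datasets out of an existing pool; free space is shared dynamically.
zfs create tank/photos
zfs create tank/borg

# Per-dataset policy: enable compression on one dataset only.
zfs set compression=lz4 tank/photos

# Quota (maximum space) and reservation (guaranteed minimum) per dataset.
zfs set quota=500G tank/photos
zfs set reservation=50G tank/borg

# Check the results.
zfs get compression,quota,reservation tank/photos
```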
The other big thing about ZFS is that it has strong features supporting data integrity. I have my ZFS configured as RAID-Z2, which means that data is written on disk so that it can survive any two drives failing. Better still, ZFS maintains end-to-end checksums on every block (Fletcher by default, optionally SHA-256), so it can tell if a data block is bad even if the disk itself doesn’t report an error. Synology’s ext4 volumes don’t offer anything comparable. ZFS also has excellent support for efficient snapshots, allowing me to schedule regular “backups” of stored data in case of user error. A snapshot won’t help if I lose 3 (out of 5) disks, but it will help if someone accidentally deletes data. And because a snapshot’s size is proportional to the amount of data changed since it was taken, snapshots are cheap unless a lot of data churns between them, which isn’t usually the case on this file server.
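Snapshots are a one-line operation, and recovering from user error is just a rollback or a copy out of the hidden snapshot directory (pool, dataset, and snapshot names are hypothetical):

```shell
# Take a recursive snapshot of every dataset in the pool.
zfs snapshot -r tank@2024-01-15

# List snapshots and the space each one uniquely holds.
zfs list -t snapshot -o name,used

# Recover a single deleted file from the hidden .zfs/snapshot directory...
cp /mnt/tank/photos/.zfs/snapshot/2024-01-15/vacation.jpg /mnt/tank/photos/

# ...or roll the whole dataset back to the snapshot.
zfs rollback tank/photos@2024-01-15
```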
My server has 5 hard drives (WD Red 12TB) and two 1TB SSDs, all SATA (that’s what the NAS supports). The hard drives are in a single ZFS pool, set up as RAIDz2, which can survive two separate hard drive failures—a straightforward configuration.
I wanted to use the SSDs both as a separate log device (SLOG) for the ZFS Intent Log (ZIL) and as a store for metadata and small blocks, to improve efficiency. The only way to do this is to partition each SSD (independently) into two partitions using gpart from the command line. One partition is 32 GiB; the other occupies the remainder of the SSD. The two 32 GiB partitions were mirrored to form the log device, and the other two partitions were mirrored to form a special vdev dedicated to metadata and small blocks.
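The partitioning and pool setup look roughly like this. This is a sketch, not the exact commands I ran; the device names (`ada5`/`ada6`), labels, and pool name `tank` are hypothetical:

```shell
# Partition one SSD: a 32 GiB slice for the log, the rest for the special vdev.
gpart create -s gpt ada5
gpart add -t freebsd-zfs -s 32G -l slog0 ada5
gpart add -t freebsd-zfs -l special0 ada5
# (repeat for the second SSD, ada6, with labels slog1/special1)

# Mirror the 32 GiB partitions as the separate log device (SLOG)...
zpool add tank log mirror gpt/slog0 gpt/slog1

# ...and mirror the large partitions as a special vdev for metadata/small blocks.
zpool add tank special mirror gpt/special0 gpt/special1

# Route blocks up to 32K to the special vdev as well as metadata.
zfs set special_small_blocks=32K tank
```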
The ZFS pool is carved into a lot of smaller file systems (datasets) to allow finer-grained control of options and quotas, as well as to provide isolation. This can be done at any time after the pool is created, and more datasets can be added while the system remains online.
There are currently four jails on my system. The big advantage of using jails is that the software in each jail can be specifically configured for a single task, without worrying about the impact on software running in other jails. This requires a bit more space—there can be multiple copies of each software package installation—but a few extra GiB per jail isn’t an issue on a NAS with over 30TiB of available storage.
By default, each jail gets its own private ZFS datasets. I strongly recommend placing quotas on each jail’s datasets to avoid denial-of-service problems if a jail writes too much data.
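On TrueNAS CORE, iocage jails live under the pool’s `iocage/jails` dataset tree, so capping a jail is a single command (the pool name `tank` is hypothetical; the `zoneminder` jail is described below):

```shell
# Each iocage jail is a dataset under <pool>/iocage/jails/<name>.
zfs list -r tank/iocage/jails

# Cap the zoneminder jail at 250 GiB so runaway writes can't fill the pool.
zfs set quota=250G tank/iocage/jails/zoneminder
```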
I’m using ZoneMinder to monitor the webcams in my home. The storage requirements aren’t too large, so I limited it to 250 GiB of overall storage—plenty of space for two webcams to keep multiple weeks of clips.
I’m using WeeWX to download data from my Davis weather station and upload it to the Internet. Weather monitoring doesn’t take much space, so a 20 GiB quota is more than enough.
The Borg backup system is probably the best choice for backing up a combination of Mac, Linux, and BSD clients. It supports multiple backup repositories, each with multiple archives (backups), possibly from multiple different systems. The primary Borg jail dataset is small, with the actual backup data going into a separate dataset outside the jail’s own dataset tree. That way, it’s easier to attach the backup storage to a different jail if necessary.
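A minimal Borg workflow looks like this (the repository path and archive name are hypothetical):

```shell
# Initialize an encrypted repository on the backup dataset.
borg init --encryption=repokey /mnt/tank/borg/macbook

# Create an archive; identical chunks across archives are deduplicated.
borg create --stats /mnt/tank/borg/macbook::home-2024-01-15 /Users/me

# Keep a bounded history: 7 daily, 4 weekly, and 6 monthly archives.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/tank/borg/macbook
```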
The last jail is a general-purpose one for running miscellaneous BSD software that’s independent of the rest of the system. It’s isolated from the other jails, so anything that damages it won’t have any impact on critical services. If necessary, though, it’s possible to attach any ZFS dataset to it (using the control panel) so that the data can be accessed from a jail with a wider set of tools installed.