If you’re a fan of storage specialist Synology, you will no doubt be excited about the latest version of DiskStation Manager (DSM), the operating platform for Synology’s network-attached storage (NAS) appliances. After several months in open beta, DSM 6.0 is currently a release candidate (RC), with the official release slated for announcement this week, according to company officials.
Aside from its core ability to serve as a standalone file server, there is no question that DSM has grown to mean different things to different groups of users. From hobbyists and power users to small businesses and mid-sized enterprises, it offers an extensive set of capabilities: it can serve as an always-on peer-to-peer node, perform batch video transcoding over the weekend, record footage from networked surveillance cameras, or even function as a corporate RADIUS server to authenticate Wi-Fi clients on the wireless network.
Ironically, the sheer diversity that makes DSM so popular is also arguably its weakness, as the rapid evolution of its capabilities and its increased complexity can confuse small businesses with limited IT resources. For this reason, businesses may be forgiven for missing the new and revamped business-centric capabilities in DSM 6.0 – capabilities that organizations looking at disaster recovery (DR) and business continuity (BC) would do well to examine closely.
To be clear, the last couple of versions of DSM have substantially enhanced its hybrid cloud capabilities, adding the ability to replicate files to a second NAS over the network and gradually expanding the list of supported public and private cloud services. DSM 6.0 strengthens these capabilities further, offering advanced storage and replication features that narrow the gap between a Synology NAS and an enterprise-centric SAN.
With this in mind, we want to take a look at the business-centric enhancements in the latest iteration of DSM, as well as new capabilities that businesses can leverage to blend NAS and cloud to guard their data against modern threats of data loss. Two of these are Hyper Backup and Snapshot Replication, which can help protect against “plain” disasters such as fire and flood, sophisticated file-encrypting ransomware, and possibly even insider sabotage.
An eye towards business storage
While Synology started off with high-quality, affordable NAS appliances that probably catered more to hobbyists than businesses in its early days, the company has worked hard over the years to position its solutions for business needs. DSM 6.0 continues in this vein with business-centric enhancements that take serious aim at enterprise-level SAN appliances.
Anchoring the key new capabilities for businesses – we will talk about them later – is DSM 6.0’s support for a file system called Btrfs, which you can pronounce as “b-tree F S” or by simply spelling out the individual letters. Work on Btrfs began at Oracle, where it was envisioned as a file system to eventually replace the popular but ageing Ext3 and Ext4 file systems on Linux. Btrfs was actually first introduced in DSM 5.2, but it is only in DSM 6.0 that its capabilities are fully integrated and leveraged.
While the Ext3 and Ext4 file systems used by previous versions of DSM are solid and mature, they lack the advanced capabilities found in newer file systems. These include volume snapshots, data checksums (Ext4 only supports checksums on the journal), self-healing, and support for online volume growth and shrinking. The clincher is that existing users will have to create a new volume if they want to use Btrfs, as there is no option in DSM to convert an existing Ext4 volume.
DSM 6.0 harnesses the capabilities that Btrfs offers in two key ways: to support snapshots without compromising system performance, and to enhance data reliability. The latter is increasingly important given the higher chance of errors as storage drives push beyond 10TB. Indeed, you can initiate online data scrubbing from the Storage Manager, which uses CRC32 checksum data to detect and correct memory or storage errors that may have crept in. Together with the self-healing capabilities of Btrfs, this should translate into a greater level of data reliability than Ext4 can offer.
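To illustrate what data scrubbing does under the hood, here is a minimal Python sketch of checksum-based block verification. This is a conceptual illustration only – DSM performs this work inside Btrfs, not in user code – and the block size and use of CRC32 here are merely illustrative assumptions.

```python
import zlib


def store_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and record a CRC32 checksum for
    each, roughly how a checksumming file system tracks block integrity."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    checksums = [zlib.crc32(b) for b in blocks]
    return blocks, checksums


def scrub(blocks, checksums):
    """Re-read every block and compare against the stored checksum;
    return the indices of blocks that no longer match (silent bit rot)."""
    return [i for i, (b, c) in enumerate(zip(blocks, checksums))
            if zlib.crc32(b) != c]


blocks, sums = store_blocks(b"some important data" * 1000)
assert scrub(blocks, sums) == []          # clean volume: nothing to repair
blocks[1] = b"\x00" * len(blocks[1])      # simulate silent corruption
assert scrub(blocks, sums) == [1]         # scrub pinpoints the bad block
```

On a real Btrfs volume, a detected mismatch can then be repaired from a good copy of the block (for example, from a redundant drive), which is what "self-healing" refers to.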
As mentioned earlier, one complaint from some quarters is the bloat of features making its way into each successive version of DSM. Critically, running too many services increases the platform’s vulnerability footprint, which is a concern from a security point of view. It was probably such feedback that led to some of the platform’s built-in features being modularized into packages in DSM 6.0. Beyond enhancing the maintainability of the core operating platform, this should also offer a less confusing user experience, as only the services that are required get installed.
Two other features bear mentioning from a business perspective. The first is the ability to assign up to 12 solid-state drives (SSDs) as an SSD cache for significantly better IOPS. This is up from just two SSDs previously, and is great for businesses with IOPS-heavy workloads such as virtual machines. Synology says enabling SSD caching on an iSCSI LUN volume will offer a performance improvement of up to 30%. We did not have spare SSDs around to validate this, though we understand that different SSD models can now be mixed to create an SSD cache.
In addition, a new PetaSpace package makes it possible to create extremely large shared folders spanning multiple volumes, to the tune of more than a petabyte of storage. This means that organizations with large media files such as 4K videos, or with long-term archives, will no longer have to juggle their files between multiple volumes. PetaSpace is not file-system dependent, though it consequently does not support file-system-specific features such as Btrfs’s snapshot and compression capabilities.
Hyper Backup to protect your shared folders
One of the two new features mentioned earlier, Hyper Backup is probably best described as an attempt to bring block-level incremental backup and deduplication to the masses. Designed with shared folders in mind, Hyper Backup works on all supported file systems and can be seen as a souped-up backup service that is extremely efficient at storing data.
This efficient use of space is possible because only block-level changes to files are stored. Hyper Backup also recognizes different versions of the same file, allowing for some measure of protection from file-encrypting ransomware. Indeed, Hyper Backup should make it possible to restore files corrupted by an insider saboteur too, assuming the saboteur does not have administrator-level access to the NAS.
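As a rough illustration of how block-level change detection keeps incremental backups small, consider this Python sketch, which hashes fixed-size blocks and identifies only the blocks that changed between two versions of a file. It is a simplification under assumed parameters, not Synology’s actual backup format.

```python
import hashlib

BLOCK = 4096  # illustrative block size


def block_hashes(data: bytes):
    """Hash each fixed-size block of the file independently."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]


def changed_blocks(old: bytes, new: bytes):
    """Return indices of blocks that differ between two file versions;
    only these blocks need to be stored for the new version."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    # Blocks present in only one version always count as changed.
    longest = max(len(old_h), len(new_h))
    return [i for i in range(longest)
            if i >= len(old_h) or i >= len(new_h) or old_h[i] != new_h[i]]


v1 = b"A" * BLOCK * 4                               # four identical blocks
v2 = v1[:BLOCK] + b"B" * BLOCK + v1[2 * BLOCK:]     # one block edited
assert changed_blocks(v1, v2) == [1]                # only block 1 is re-stored
```

Because each version only adds its changed blocks, a ransomware-mangled copy of a file becomes just one more version on top of the intact older ones, which is why rolling back remains possible.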
The graph below, taken from Synology’s website, illustrates this storage efficiency.
As it is, backups of shared folders are made at predetermined intervals, with up to 65,535 versions supported for each file. Alternatively, you can set backups to rotate around a preset number of versions, which you can change at a later date. Of course, you will first need to install the Hyper Backup package in order to use this service. A companion package, Hyper Backup Vault, lets you browse the Hyper Backup backups made from a remote NAS. (Note: Even though Hyper Backup is distributed as a package, we were told that it will not be available on pre-6.0 editions of DSM.)
An important aspect of Hyper Backup is the choice of backup destinations. While the simplest configuration would be to back up to a non-user-accessible folder, backing up your business data to a remote destination – such as a second Synology NAS – is necessary to protect against disaster. Other supported destinations include an rsync server, Amazon S3, Microsoft Azure, OpenStack Swift, and IBM SoftLayer, among others.
The beauty is how Hyper Backup makes remote backups easier by allowing administrators to create an initial backup that can be manually shipped to the remote destination on a USB flash drive or portable hard disk. This means that initial backups ranging from tens of gigabytes to multiple terabytes of data can be completed within days, instead of weeks or months over a typical broadband connection.
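Some back-of-envelope arithmetic shows why seeding the first backup by courier matters. The sketch below assumes a hypothetical 20 Mbps uplink running flat out with no protocol overhead; real-world figures will vary.

```python
def upload_days(size_tb: float, uplink_mbps: float) -> float:
    """Days needed to push an initial backup over a broadband uplink,
    assuming the link runs at full speed with no overhead (decimal units)."""
    size_bits = size_tb * 1e12 * 8            # TB -> bits
    seconds = size_bits / (uplink_mbps * 1e6)  # Mbps -> bits per second
    return seconds / 86_400                    # seconds -> days

# A 2 TB initial backup over a 20 Mbps uplink:
print(f"{upload_days(2, 20):.1f} days")   # roughly 9.3 days
```

Nine-plus days of saturated uplink for a modest 2 TB seed explains the appeal of copying the initial backup to a portable disk instead; subsequent runs then only ship the changed blocks.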
A common fear about storing data at a remote location or cloud storage provider is the heightened risk of hackers or other unauthorized parties getting hold of the data. To assuage such concerns, Hyper Backup lets you set a password to protect data backups. If enabled, data from each backup is encrypted with AES using a new 256-bit key that is randomly generated on each run.
The AES key is stored with the backed-up data, and is itself encrypted using 2048-bit RSA. This is possible because an RSA key pair is automatically generated for each Hyper Backup task; the public key is kept at the source NAS and used to encrypt all AES keys, while the private key is stored at the Hyper Backup destination, where it is encrypted by a symmetric key derived from the password.
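The password-to-key step of this scheme can be sketched with Python’s standard library. The PBKDF2 parameters below are illustrative assumptions, not Synology’s actual choices, and the RSA wrapping step is only noted in comments since RSA itself requires a third-party crypto library.

```python
import hashlib
import os


def derive_password_key(password: str, salt: bytes) -> bytes:
    """Derive the symmetric key that protects the RSA private key at the
    backup destination. Algorithm and iteration count are illustrative,
    not Synology's actual parameters."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)


# A fresh random 256-bit AES data key is generated for each backup run...
aes_key = os.urandom(32)
# ...and would then be wrapped with the task's 2048-bit RSA public key
# (RSA needs a library such as `cryptography`, so it is omitted here).

salt = os.urandom(16)
key = derive_password_key("a long, unguessable passphrase", salt)
assert len(key) == 32                  # 256-bit symmetric key
assert key == derive_password_key("a long, unguessable passphrase", salt)
assert key != derive_password_key("wrong password", salt)
```

The important property is visible even in this sketch: without the password, the derived key – and hence the RSA private key and every per-run AES key behind it – stays out of reach.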
The system appears to be designed to ensure that the encryption keys to decrypt the backed up data remain accessible even if the source NAS is destroyed or stolen. In addition, a hacker or rogue employee at a cloud storage service attempting a brute force attack would have no way to determine if a given password is the correct one, short of taking the extremely time intensive step of attempting to decrypt the data for each guess.
Still, administrators would be well advised to choose a robust password to protect against brute-force attacks – and not to lose it.
Snapshot Replication for frequent backups, remote failover
Similar to Hyper Backup, the Snapshot Replication package is another new feature making its debut in DSM 6.0. The difference is that it relies on the Btrfs file system’s native snapshot support to create snapshots of a LUN volume (or shared folder) as part of a business continuity strategy. Specifically, Snapshot Replication lets you take snapshots of shared folders as frequently as once every 5 minutes, or every 15 minutes for LUN volumes, with up to 1,024 snapshots per folder.
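A quick calculation shows how the snapshot interval and the 1,024-snapshot ceiling interact to determine how far back in time you can roll. This sketch assumes a fixed interval and simple rotation of the oldest snapshot once the ceiling is reached.

```python
def retention_days(interval_minutes: float, max_snapshots: int = 1024) -> float:
    """How far back snapshots reach before the oldest is rotated out,
    assuming a fixed interval and the 1,024-snapshot ceiling per folder."""
    return interval_minutes * max_snapshots / (60 * 24)


print(f"{retention_days(5):.1f} days")    # ~3.6 days at the 5-minute maximum rate
print(f"{retention_days(60):.1f} days")   # ~42.7 days with hourly snapshots
```

In other words, snapshotting at the maximum rate buys a very fine-grained recovery point but a short history, so a tiered schedule (frequent snapshots plus longer-interval ones) is the usual compromise.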
A key concern for enterprises or mid-sized businesses faced with a disaster is the recovery time objective (RTO) and the recovery point objective (RPO). The RTO is pegged to how long a business can afford to be without access to its data, while the RPO is concerned with how far back in time backups are available. The snapshot component of Snapshot Replication makes it dead simple for businesses to attain a very low RTO without complicated (or expensive) setups: simply select the desired snapshot after a data disaster and roll back in time.
The Replication capability of Snapshot Replication comes into play for recovery from physical disasters such as a fire or flood, or when the local NAS suffers a catastrophic failure that puts it out of action. To defend against this, Snapshot Replication lets you replicate your snapshots to a second Synology NAS deployed in an active configuration at a remote site, making automatic failover to a recovery site a reality.
That’s not all, however, as Replication supports a number of deployment topologies, namely:
- Extended Replication: NAS #1 is replicated to NAS #2, NAS #2 replicated to NAS #3
- Hub and Spoke: NAS #1 and NAS #2 are separately replicated to a dedicated NAS (NAS #3)
- One to Many: NAS #1 is replicated to both NAS #2 and NAS #3
A number of recovery scenarios are supported if disaster strikes, including reverse-syncing snapshots from the recovery site back to the primary site, and the ability to initiate a manual switchover. The latter could be useful when physically redeploying Synology NAS units to different locations, allowing uptime to be maintained.
It is worth noting that some threats to data loss – think insider sabotage – can either take place suddenly or over a lengthy period of time. This makes the presence of regular backups as well as having backups that go far back in time essential to businesses today.
Putting it together
We have really only covered a cross-section of what DSM 6.0 is capable of. Hyper Backup and Snapshot Replication are but two of the features that can be used to keep data in lockstep between two Synology NAS appliances, or between a NAS and a remote cloud location. What is not immediately obvious is that they were designed as building blocks for a robust backup strategy that can serve disaster recovery and business continuity.
This is because Hyper Backup and Snapshot Replication can be used in tandem, such as having Hyper Backup upload snapshots created by Snapshot Replication to S3, where they can be archived to Amazon Glacier. As it is, Hyper Backup works with a good number of cloud destinations and supports robust encryption to keep backups safe, while Snapshot Replication allows replication to more than one destination.
As it is, Synology itself proposes a number of multi-faceted backup strategies that businesses can deploy using its support for cloud-based storage services, as well as the more basic NAS-to-NAS file backup capability called Cloud Station ShareSync. We will take a deeper look at Cloud Station ShareSync in DSM 6.0 and its stronger cloud-native capabilities in a future blog post.
Conclusion
We find it impressive that Synology is releasing such valuable business-grade capabilities in DSM 6.0 as another free update. To be clear, the company has always skirted our questions about whether DSM upgrades will remain free. Still, it has a solid track record of releasing frequent, fuss-free updates over many years. Importantly, there is none of the artificial deprecation of older hardware that so many other vendors seem fond of these days.
Finally, an often-missed advantage is that larger businesses looking for capabilities not currently found in DSM can build their own software packages and install them on their Synology NAS. If anything, Synology has shown that the sky’s the limit with its Package Center architecture by implementing some of the most powerful features in DSM 6.0, such as Hyper Backup and Snapshot Replication, as packages.