StarWind SAN & NAS enables you to repurpose existing hardware running an industry-standard hypervisor into high-performing storage -- of course, a regular RAID array still can't match NVMe performance, but GRAID Technology SupremeRAID™ SR-1000 surely does.
Posted by Volodymyr Khrystenko for StarWind on April 13, 2022
Introduction
StarWind has just announced that GRAID SupremeRAID™, the world's first NVMe-oF RAID card, has become part of its StarWind All-NVMe Backup Appliance. However, to better understand why it's such a big deal and what the implications of this cutting-edge GRAID technology are, we first need some context.

Everybody knows that RAID, while not a backup solution per se, is regularly used on the servers that store data backups. For decades, RAID arrays have protected data against drive failure.
Back when storage meant HDDs, a single RAID card was pretty much enough, since you could hardly generate more I/O than it could handle. However, NVMe SSDs, once an exotic option, have become the new black. These drives can deliver around one million IOPS each, so pushing them through a traditional RAID configuration will probably produce the most massive performance bottleneck you have ever seen.
That's why the situation is quite ironic. While NVMe SSDs deliver the highest performance levels available today, there is no easy way to protect them in a shared storage environment. Hardware options (RAID controllers) are limited to a handful of disks at best, and software alternatives can't reach the top of NVMe's potential performance either. It looks like one of those cases where you can't have your cake and eat it too: enterprises and businesses face an unpleasant choice, with no way to tap into the full potential of NVMe SSD performance without sacrificing data protection in exchange. Or is there?
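To put that mismatch into numbers, here is a minimal back-of-the-envelope sketch in Python. The per-drive figure is the one quoted above; the controller ceiling is purely an illustrative assumption, not the spec of any particular card.

```python
# Back-of-the-envelope illustration (not a measurement): aggregate raw IOPS of a
# small NVMe pool vs. a single traditional RAID controller sitting in the data path.

DRIVES = 5                      # NVMe SSDs in the pool
IOPS_PER_DRIVE = 1_000_000      # ~1M IOPS per modern NVMe SSD (figure from the text above)
CONTROLLER_CEILING = 1_000_000  # assumed ceiling of a legacy hardware RAID card (illustrative)

raw_pool_iops = DRIVES * IOPS_PER_DRIVE
behind_controller = min(raw_pool_iops, CONTROLLER_CEILING)

print(f"Raw pool capability : {raw_pool_iops:,} IOPS")
print(f"Behind one RAID card: {behind_controller:,} IOPS "
      f"({behind_controller / raw_pool_iops:.0%} of what the drives can do)")
```

Under these assumptions, a single controller exposes only a fraction of what the drives are physically capable of, which is exactly the bottleneck described above.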
StarWind SAN & NAS is designed to let you repurpose existing hardware running an industry-standard hypervisor into high-performing storage. Essentially, it turns a server with internal storage into a redundant storage array presented as NAS or SAN. Of course, a regular RAID array still can't match NVMe performance, but GRAID Technology SupremeRAID™ SR-1000 surely does.
The world's first NVMe RAID card to deliver the maximum available NVMe SSD performance is a plug-and-play solution designed for Software-Composable Infrastructure (SCI) that provides high performance and removes the bottleneck altogether. A working Backup Appliance configuration with GRAID technology on top of StarWind SAN & NAS can arrange your NVMe drives into shared storage while you enjoy the highest IOPS and throughput you can get.
Purpose
Assembling a working Backup Appliance prototype based on StarWind SAN & NAS is only half of the job. Our primary goal here is to benchmark this prototype's performance in different scenarios with the best hardware we can get. In the SMB scenario, the backup repository is presented as an SMB share. In the iSCSI scenario, it is a disk connected over the iSCSI protocol to the VM running Veeam Backup & Replication (VBR). In the Linux scenario, the backup repository is presented as a directory on a remote Linux host (in this case, the StarWind SAN & NAS VM itself).
Benchmarking Methodology, Details & Results
The VM with StarWind SAN & NAS, together with 5 NVMe drives (Intel Optane SSD DC P5800X) and an NVIDIA Quadro T1000 GPU, was placed on the backup-node. With the GRAID SupremeRAID card, these drives were arranged into a RAID 5 array, which serves as the backup repository. In addition, 2 VMFS datastores were built on 2 NVMe drives (Intel Optane SSD DC P5800X) on the compute-node to hold the VMs used for backup/restore operations. We use Veeam Backup & Replication as the backup solution (the VM with Veeam Backup & Replication is deployed on the compute-node). The ESXi hosts, the VBR VM, and the StarWind SAN & NAS VM are interconnected over a 100GbE network.
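As a quick sanity check on the repository layout, the sketch below works out the usable share of a 5-drive RAID 5 set. The per-drive capacity is a placeholder, since the article does not state which P5800X capacity was used.

```python
# Quick RAID 5 math for the backup repository described above.

N_DRIVES = 5
DRIVE_CAPACITY_TB = 1.6          # assumed per-drive capacity, for illustration only

usable_fraction = (N_DRIVES - 1) / N_DRIVES          # RAID 5 reserves one drive's worth of parity
usable_capacity_tb = N_DRIVES * DRIVE_CAPACITY_TB * usable_fraction

print(f"Usable fraction : {usable_fraction:.0%}")
print(f"Usable capacity : {usable_capacity_tb:.1f} TB (assuming {DRIVE_CAPACITY_TB} TB drives)")
```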
Testbed

VM with VBR:
For backup/restore operations, VBR uses the Backup Proxy role. It is installed by default along with VBR, but you can additionally deploy/move it to other VMs if necessary. A VM with the Backup Proxy role should be deployed on the ESXi host with direct access to the ESXi datastores on which we plan to perform the backup/restore operations (here it's the compute-node and the DS Optane 01 and 02 datastores). Thanks to this, we were able to use the 'hotadd' transport instead of 'nbd' and reduce restore time by about 5 times. Since the chosen scenario doesn't require an additional VM for the Backup Proxy role, the VM with VBR goes straight to the compute-node.
'hotadd' vs. 'nbd':
If the Backup Proxy role is on the backup-node, 'nbd' is used. Restore speed: 335 MB/s.

If the Backup Proxy role is on the compute-node, 'hotadd' is used. Restore speed: 2 GB/s.

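For a feel of what those two rates mean in practice, here is a rough estimate of how long restoring a single 50GB VM (the size used in the benchmarks below) would take at each speed, assuming the full virtual disk has to be moved. The result is in the same ballpark as the fivefold improvement mentioned above.

```python
# Rough restore-time estimate for one 50 GB VM at the two observed restore speeds.
# Speeds are taken from the article; the calculation is illustrative arithmetic only.

VM_SIZE_GB = 50
NBD_SPEED_MBPS = 335       # MB/s, Backup Proxy on the backup-node ('nbd' transport)
HOTADD_SPEED_GBPS = 2.0    # GB/s, Backup Proxy on the compute-node ('hotadd' transport)

nbd_seconds = VM_SIZE_GB * 1000 / NBD_SPEED_MBPS
hotadd_seconds = VM_SIZE_GB / HOTADD_SPEED_GBPS

print(f"nbd     : ~{nbd_seconds:.0f} s")
print(f"hotadd  : ~{hotadd_seconds:.0f} s")
print(f"speed-up: ~{nbd_seconds / hotadd_seconds:.1f}x")
```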
Benchmarking Methodology:
For all three configurations, we are going to measure the performance of the following operations:
Backup:
1 VM in 1 backup job;
10 VMs in 1 backup job;
10 VMs in 10 backup jobs;
Restore:
1 VM;
10 VMs simultaneously;
Instant recovery:
1 VM (time from the start of the job until the VM boots to the login screen);
Benchmarking uses 10 identical VMs running Windows Server 2019, each with a 50GB virtual disk:

To get a clear picture of the throughput, we disable deduplication, compression, and other data reduction features:

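With data reduction out of the way, the effective rate is simply the amount of processed data divided by the job duration. The sketch below illustrates the calculation; the job duration is a placeholder, not a measured result.

```python
# How throughput is derived from the job results: processed data volume / job duration.

TOTAL_DATA_GB = 10 * 50          # 10 VMs x 50 GB virtual disks (see the list above)
JOB_DURATION_S = 300             # placeholder duration; substitute the real job time

throughput_mbps = TOTAL_DATA_GB * 1000 / JOB_DURATION_S
print(f"Effective throughput: ~{throughput_mbps:,.0f} MB/s")
```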
...