
White Paper: MySQL Performance Testing: SupremeRAID™ versus Linux MD RAID

The results are in! Turns out, SupremeRAID™ RAID 6 delivers more transactions per second than Linux MD RAID 10, while providing nearly 2x faster rebuild speeds and supporting 2-3x the number of concurrent users.


Executive Summary

SupremeRAID™ by Graid Technology offers a higher-performance alternative to Linux MD RAID for protecting NVMe SSDs under database workloads like MySQL. The benefits are significant: SupremeRAID™ RAID 6 delivers more transactions per second than Linux MD RAID 10 in both the optimal (non-degraded) and recovery (degraded) states, and it rebuilds almost 2x faster than Linux MD RAID 10. The performance benefits of SupremeRAID™ are even more compelling when comparing like RAID levels (e.g., SupremeRAID™ RAID 6 vs. MD RAID 6).

About this Test

In this test, we will deploy a MySQL 8 server on SupremeRAID™ SR-1010 RAID 6, Linux MD RAID 10, and Linux MD RAID 6. We will use sysbench, a popular database benchmark tool, to perform the OLTP Read/Write test and evaluate RAID performance.
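
The paper does not publish its exact sysbench command line, but a minimal sketch matching the workload described under "Benchmark Scenario and MySQL Tuning" below (8 tables of 50M rows, uniform OLTP read/write, 64-1024 threads) would look roughly like this; the database name, user, and 10-minute run length are illustrative assumptions:

    # Hypothetical sketch: build the 8 x 50M-row dataset (~100GB)
    sysbench oltp_read_write \
        --mysql-user=sbtest --mysql-db=sbtest \
        --tables=8 --table-size=50000000 \
        prepare

    # One pass of the OLTP read/write test with a uniform access pattern;
    # repeat with --threads=64, 128, 256, 512, and 1024
    sysbench oltp_read_write \
        --mysql-user=sbtest --mysql-db=sbtest \
        --tables=8 --table-size=50000000 \
        --rand-type=uniform --threads=64 --time=600 \
        run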

Testing Background

Hardware Specifications

  • Server: Dell PowerEdge R750 x 1
  • Processor: Intel® Xeon® Gold 6338 CPU @ 2.00GHz x 2
  • Memory: Samsung M393A4G43BB4-CWE 32GB DDR4 3200MHz x 16
  • SupremeRAID™: SR-1010 SR-BUN-1010-12-FD32 x 1
  • SSD: Intel® SSD D7-P5510 SSDPF2KX038TZ 3.84TB x 8

Software Configurations

  • OS: Ubuntu 20.04.4 LTS
  • Kernel: 5.4.0-131-generic
  • SupremeRAID™: Driver Version: 1.3.0-473.gbf5466fc.010
  • Linux MD RAID: mdadm version v4.1 – 2018-10-01
  • Filesystem: xfs 5.3.0-1ubuntu2
  • MySQL version: 8.0.30-0ubuntu0.20.04.2
  • Benchmark Tool: sysbench 1.1.0

Hardware Configurations

  • MADT Core Enumeration: Linear
  • Logical Processor: Enabled
  • Device location:
    • Four Intel® SSD D7-P5510 located at CPU0
    • Four Intel® SSD D7-P5510 located at CPU1
    • One SupremeRAID™ SR-1010 located at CPU1


Benchmark Scenario and MySQL Tuning

  • Test workload: Sysbench OLTP_RW uniform
  • InnoDB page size: 16K
  • Concurrent users: 64, 128, 256, 512, 1024
  • Dataset: 8 tables, 50M rows each, 100GB total
  • InnoDB Buffer Pool (BP): 32GB (about 32% of data cached in the buffer pool)
  • Worker threads:
    • innodb_buffer_pool_instances=48
    • innodb_page_cleaners=48
    • innodb_read_io_threads=32
    • innodb_write_io_threads=16
    • innodb_purge_threads=16
  • Testing modes, each run in both the optimal and single-SSD rebuilding states (a provisioning sketch follows this list):
    • SupremeRAID™ RAID 6 with 8 SSDs and 4k chunks
    • Linux MD RAID 6 with 8 SSDs and 4k chunks
    • Linux MD RAID 6 with 8 SSDs and 16k chunks
    • Linux MD RAID 10 with 8 SSDs and 4k chunks
    • Linux MD RAID 10 with 8 SSDs and 16k chunks
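
The paper does not include its provisioning commands or my.cnf. The sketch below is a minimal reconstruction from the parameters above, assuming device names /dev/nvme0n1 through /dev/nvme7n1; all tuning values are taken from the lists in this section:

    # Hypothetical sketch: Linux MD RAID 6, 8 SSDs, 4k chunks (one tested mode);
    # for the other MD modes, swap --level=10 and/or --chunk=16
    mdadm --create /dev/md0 --level=6 --raid-devices=8 --chunk=4 \
        /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
        /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1

    # The tests ran MySQL on an xfs filesystem
    mkfs.xfs /dev/md0

    # my.cnf fragment with the tuning values listed above
    [mysqld]
    innodb_page_size             = 16k   # must be set before initializing the data directory
    innodb_buffer_pool_size      = 32G
    innodb_buffer_pool_instances = 48
    innodb_page_cleaners         = 48
    innodb_read_io_threads       = 32
    innodb_write_io_threads      = 16
    innodb_purge_threads         = 16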

Testing Results

The results show that SR-1010 RAID 6 can provide almost twice the performance of Linux MD RAID 6, and that SR-1010 RAID 6 performance is competitive with Linux MD RAID 10 while providing more usable capacity and better data security.


Transactions Per Second in Optimal State

In the lower-concurrency test cases (64 to 256 users), MD RAID 10 performed well because it is not a parity-based RAID. As the number of concurrent users increases, however, the SQL service consumes more CPU and competes with MD RAID for it; this resource contention drops performance significantly, to levels below SupremeRAID™ RAID 6. Compared to MD RAID 6, SupremeRAID™ RAID 6 is faster in all cases and delivers more than twice the performance at high concurrency.

RAID Configuration*      64 Users    128 Users   256 Users   512 Users   1024 Users
SupremeRAID™ RAID 10      8476.39    14627.62    19889.56    22048.90    14729.23
SupremeRAID™ RAID 6       7669.51    13408.26    18813.23    18878.61    13807.17
Linux MD RAID 10          9838.45    10510.42    10556.98    10221.50     8269.07
Linux MD RAID 6           8305.70     9872.39     8826.24     8455.82     8046.76
* RAID with 4K chunks.

Transactions Per Second in Rebuilding State

RAID performance declines during a rebuild because of degraded reads and the rebuilding work running in the background. Testing showed performance declined by up to 22% for SupremeRAID™ configured for RAID 10 and by up to 87% for MD RAID configured for RAID 10. With RAID 6, performance declined by up to 50% for SupremeRAID™ and by up to 95% for MD RAID. Even while rebuilding, however, SupremeRAID™ RAID 6 ranged from roughly on par with to about 9x higher than MD RAID configured for RAID 10 across the test cases (the worked example after the table below shows how these figures are derived).

RAID Configuration*      64 Users    128 Users   256 Users   512 Users   1024 Users
SupremeRAID™ RAID 10      7009.97    11803.37    15445.57    16119.14    14875.35
SupremeRAID™ RAID 6       3793.36     8384.43    12623.33    12953.34    11616.73
Linux MD RAID 10          3308.86     2441.16     1851.95     1370.23     1253.52
Linux MD RAID 6            466.36      484.88      471.77      598.75      973.79
* RAID with 4K chunks.
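
As a worked example, the decline percentages quoted above follow directly from this table and the optimal-state table:

    decline = (TPS_optimal - TPS_rebuilding) / TPS_optimal

    MD RAID 10, 512 users:          (10221.50 - 1370.23) / 10221.50 ≈ 87%
    SupremeRAID™ RAID 6, 64 users:   (7669.51 - 3793.36) /  7669.51 ≈ 50%

    SupremeRAID™ RAID 6 vs. MD RAID 10 while rebuilding, 1024 users:
    11616.73 / 1253.52 ≈ 9x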

Rebuilding Speed

During rebuilding, although MD RAID 10 is not affected by degraded reads, its performance still dropped due to the rebuilding traffic. SupremeRAID™ RAID 10, which benefits from the computing power of the GPU, dropped less.

SupremeRAID™ RAID 6 simultaneously sustained 12,953 transactions per second and 700MB/s of rebuild throughput (about 2.5TB per hour) at 512 users. By contrast, MD RAID 6 at 512 users sustained just 598 transactions per second and 340MB/s of rebuild throughput.
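
As a sanity check on the units: 700MB/s works out to 700 × 3,600 ≈ 2.5TB of data rebuilt per hour, matching the figure above, so each 3.84TB SSD in this configuration would rebuild in roughly an hour and a half at that rate.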

MD RAID 6 performance declines severely because the degraded reads and rebuilding tasks consume substantial CPU resources calculating parity.

RAID Configuration*      Transactions/Sec     Transactions/Sec     SSD Rebuild
                         in Optimal State     in Recovery State    Speed (MB/s)
SupremeRAID™ RAID 10     22048.90             16119.14             880
SupremeRAID™ RAID 6      18878.61             12953.34             700
Linux MD RAID 10         15994.38             10108.93             420
Linux MD RAID 6           8455.82               598.75             340
* RAID with 4K chunks.

Conclusion

Databases like MySQL benefit from the fastest storage available, so protecting multiple NVMe SSDs with RAID is standard practice. Selecting SupremeRAID™ makes the more space-efficient RAID 6 a practical choice for data protection, typically delivering higher performance than Linux MD RAID 10.

Other benefits include:

  • Data loss prevention during the simultaneous failure of two SSDs
  • 50% higher usable capacity with 8 SSDs (75% higher with 16 SSDs; see the arithmetic below)
  • Up to 85% faster transactions per second in the RAID optimal state
  • Up to 28% faster transactions per second in the RAID recovery state
  • Up to 945% faster transactions per second in the RAID rebuilding state
  • Up to 66% faster rebuild rates with a low impact on performance
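
The capacity claim follows from simple arithmetic: RAID 10 mirrors every drive, so 8 SSDs yield 4 drives of usable capacity, while RAID 6 spends two drives on parity, yielding 6; 6 ÷ 4 = 1.5, or 50% more. With 16 SSDs the comparison is 14 drives versus 8, or 75% more.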

Reference: MySQL Performance: Benchmark kit (BMK-kit)

About Graid Technology
Graid Technology, creator of SupremeRAID™ next-generation GPU-based RAID, is led by a team of experts in the storage industry and is headquartered in Silicon Valley, California with an R&D center in Taipei, Taiwan. Designed for performance-demanding workloads, SupremeRAID™ is the world’s fastest NVMe and NVMeoF RAID solution for PCIe Gen 3, 4, and 5 servers. A single SupremeRAID™ card delivers up to 19M IOPS and 110GB/s and supports up to 32 native NVMe drives, delivering superior NVMe/NVMeoF performance while increasing scalability, improving flexibility, and lowering TCO. For more information on Graid Technology, visit Graid Technology or connect with us on Twitter or LinkedIn.

