Storage server

  • Huawei 5288 v5
  • 36 x 14 TB HDDs
  • 192 GB RAM
  • Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz

sizes

| set | size (TiB) |
| 9 x (3+1) | 322.576 |
| 18 x (1+1) | 221.783 |
| 6 x (4+2) | 295.725 |
| 6 x (5+1) | 354.696 |
| 4 x (7+2) | 338.242 |
| 3 x (10+2) | 338.253 |
| 1 x (33+3) | 394.629 |
| 12 x (2+1) | 295.664 |
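
For orientation, a sketch of how one of these layouts (9 x (3+1), i.e. nine raidz1 vdevs of four disks each) could be created; the device names below are placeholders, not the actual drives of this server.

# hypothetical device names for 36 disks; real pools are better built from /dev/disk/by-id paths
DISKS=(/dev/sd{b..z} /dev/sda{a..k})
# assemble nine raidz1 vdevs of four disks each (3 data + 1 parity)
VDEVS=""
for i in $(seq 0 4 35); do
  VDEVS+=" raidz1 ${DISKS[@]:$i:4}"
done
zpool create -f tank $VDEVS
zfs list tank   # usable capacity, roughly the values in the table above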

performance test

  • ARC cache capped at 64 GB (previous tests showed that 4 GB and more yield the same performance); see the sketch after this list
  • see test suite
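
The ARC cap was presumably applied via the zfs_arc_max module parameter of OpenZFS; a minimal sketch (value in bytes):

# cap the ZFS ARC at 64 GiB at runtime
echo $((64 * 1024**3)) > /sys/module/zfs/parameters/zfs_arc_max
# or persistently across reboots
echo "options zfs zfs_arc_max=$((64 * 1024**3))" > /etc/modprobe.d/zfs.conf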

randomread (IO/s)

| zpool setup | 12 files | 24 | 48 | 96 | 192 | 384 | 768 |
| 12x2p1 | 1034.55 ± 46.04 | 1037.91 ± 45.94 | 1040.16 ± 45.27 | 1036.82 ± 46.59 | 1040.31 ± 46.87 | 1039.04 ± 46.70 | 925.14 ± 116.59 |
| 18x1p1 | 1487.45 ± 37.86 | 1487.07 ± 39.55 | 1480.71 ± 37.44 | 1459.17 ± 45.87 | 1444.80 ± 48.36 | 1443.39 ± 52.37 | 1332.57 ± 123.04 |
| 1x33p3 | 217.18 ± 38.65 | 220.80 ± 38.03 | 218.80 ± 35.53 | 219.27 ± 34.35 | 220.03 ± 32.49 | 218.92 ± 34.79 | 174.29 ± 33.01 |
| 3x10p2 | 482.41 ± 57.58 | 486.35 ± 55.83 | 486.32 ± 56.37 | 484.06 ± 55.47 | 484.56 ± 56.00 | 482.27 ± 54.53 | 361.16 ± 73.55 |
| 4x7p2 | 586.10 ± 53.44 | 584.92 ± 51.90 | 584.67 ± 50.83 | 586.86 ± 49.41 | 586.27 ± 51.42 | 585.80 ± 51.82 | 460.52 ± 83.13 |
| 6x4p2 | 793.73 ± 50.74 | 756.89 ± 57.26 | 760.46 ± 58.62 | 758.14 ± 58.83 | 755.63 ± 57.32 | 758.87 ± 57.45 | 633.92 ± 105.01 |
| 6x5p1 | 709.75 ± 57.43 | 708.84 ± 55.15 | 708.78 ± 57.53 | 710.51 ± 59.53 | 706.78 ± 57.00 | 711.59 ± 59.57 | 580.94 ± 100.29 |
| 9x3p1 | 894.94 ± 58.67 | 879.84 ± 63.84 | 881.65 ± 58.35 | 880.28 ± 62.46 | 883.58 ± 59.32 | 881.67 ± 60.36 | 755.65 ± 116.57 |
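
The linked test suite defines the actual workloads; purely as an illustration, and with all parameters being assumptions rather than the suite's settings, a comparable random-read load could be generated with fio:

# illustrative only: 4 KiB random reads from 12 files on the pool
fio --name=randread --rw=randread --bs=4K --ioengine=libaio \
    --directory=/tank/fio --numjobs=12 --nrfiles=1 --filesize=8G \
    --time_based --runtime=300 --group_reporting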

streamread (GB/s)

| zpool setup | 12 files | 24 | 48 | 96 | 192 | 384 | 768 |
| 12x2p1 | 2.50 ± 0.24 | 2.49 ± 0.24 | 2.47 ± 0.24 | 2.48 ± 0.25 | 2.45 ± 0.24 | 2.48 ± 0.25 | 2.08 ± 0.22 |
| 18x1p1 | 2.63 ± 0.25 | 2.65 ± 0.24 | 2.67 ± 0.24 | 2.65 ± 0.24 | 2.67 ± 0.25 | 2.67 ± 0.25 | 2.56 ± 0.44 |
| 1x33p3 | 2.22 ± 0.09 | 2.22 ± 0.09 | 2.21 ± 0.09 | 2.22 ± 0.09 | 2.21 ± 0.09 | 2.21 ± 0.10 | 1.24 ± 0.24 |
| 3x10p2 | 2.39 ± 0.27 | 2.35 ± 0.27 | 2.34 ± 0.26 | 2.34 ± 0.27 | 2.34 ± 0.27 | 2.34 ± 0.27 | 1.37 ± 0.22 |
| 4x7p2 | 2.43 ± 0.26 | 2.42 ± 0.26 | 2.41 ± 0.26 | 2.41 ± 0.26 | 2.40 ± 0.26 | 2.40 ± 0.26 | 1.49 ± 0.22 |
| 6x4p2 | 2.98 ± 0.39 | 3.01 ± 0.35 | 2.99 ± 0.37 | 3.00 ± 0.37 | 3.05 ± 0.34 | 3.01 ± 0.38 | 2.41 ± 0.26 |
| 6x5p1 | 2.58 ± 0.30 | 2.58 ± 0.29 | 2.58 ± 0.29 | 2.56 ± 0.28 | 2.56 ± 0.27 | 2.56 ± 0.27 | 1.79 ± 0.23 |
| 9x3p1 | 2.47 ± 0.30 | 2.50 ± 0.29 | 2.51 ± 0.28 | 2.52 ± 0.28 | 2.53 ± 0.28 | 2.54 ± 0.27 | 1.79 ± 0.19 |

streamwrite (GB/s)

| zpool setup | 12 files | 24 | 48 | 96 | 192 | 384 | 768 | 1536 | 3072 | 6144 |
| 12x2p1 | 2.60 ± 0.13 | 2.46 ± 0.18 | 2.55 ± 0.12 | 2.52 ± 0.12 | 2.50 ± 0.11 | 2.49 ± 0.16 | 2.52 ± 0.12 | 2.52 ± 0.13 | 2.52 ± 0.12 | 2.10 ± 0.14 |
| 18x1p1 | 2.11 ± 0.11 | 2.07 ± 0.11 | 2.06 ± 0.11 | 2.04 ± 0.13 | 2.05 ± 0.11 | 2.04 ± 0.12 | 2.05 ± 0.11 | 2.02 ± 0.12 | 2.04 ± 0.13 | 1.79 ± 0.10 |
| 1x33p3 | 1.48 ± 0.06 | 1.47 ± 0.06 | 1.47 ± 0.06 | 1.47 ± 0.06 | 1.46 ± 0.07 | 1.45 ± 0.07 | 1.45 ± 0.07 | 1.44 ± 0.09 | 1.46 ± 0.06 | 1.20 ± 0.06 |
| 3x10p2 | 2.76 ± 0.13 | 2.71 ± 0.14 | 2.73 ± 0.15 | 2.72 ± 0.14 | 2.70 ± 0.13 | 2.70 ± 0.15 | 2.69 ± 0.14 | 2.70 ± 0.15 | 2.68 ± 0.13 | 2.13 ± 0.18 |
| 4x7p2 | 2.79 ± 0.16 | 2.73 ± 0.16 | 2.76 ± 0.12 | 2.71 ± 0.15 | 2.75 ± 0.16 | 2.72 ± 0.16 | 2.72 ± 0.16 | 2.73 ± 0.14 | 2.67 ± 0.15 | 2.16 ± 0.17 |
| 6x4p2 | 2.59 ± 0.11 | 2.48 ± 0.09 | 2.55 ± 0.10 | 2.51 ± 0.07 | 2.52 ± 0.11 | 2.50 ± 0.17 | 2.48 ± 0.13 | 2.48 ± 0.11 | 2.46 ± 0.14 | 2.02 ± 0.13 |
| 6x5p1 | 2.94 ± 0.11 | 2.84 ± 0.16 | 2.85 ± 0.14 | 2.86 ± 0.16 | 2.85 ± 0.14 | 2.88 ± 0.14 | 2.82 ± 0.17 | 2.88 ± 0.15 | 2.82 ± 0.19 | 2.25 ± 0.10 |
| 9x3p1 | 2.75 ± 0.13 | 2.66 ± 0.13 | 2.68 ± 0.15 | 2.67 ± 0.14 | 2.66 ± 0.13 | 2.66 ± 0.14 | 2.69 ± 0.15 | 2.65 ± 0.14 | 2.67 ± 0.14 | 2.17 ± 0.14 |

resilver times

  • create 10 x 8 GiB files with random content in shared memory:

mkdir -p /dev/shm/data
# 10 x 8 GiB files of pseudo-random data; an AES-CTR keystream over /dev/zero is much faster than reading /dev/urandom directly
for i in $(seq -w 10); do
  dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=/dev/shm/data/random.$i bs=1M count=8192 iflag=fullblock
done
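
These files were then used to fill /tank to 100% (next paragraph); the exact fill procedure is not recorded here, but a plausible sketch is to copy the ten files into the pool until it runs out of space (the fill.$n directory names are hypothetical):

# copy the random files into /tank repeatedly until cp fails with ENOSPC
n=0
while mkdir /tank/fill.$n && cp /dev/shm/data/random.* /tank/fill.$n/; do
  n=$((n + 1))
done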

After /tank had been filled to 100%, /dev/sdb was taken offline, partially overwritten with zeros, and then re-added as a replacement:

  • zpool offline tank /dev/sdb
  • dd if=/dev/zero of=/dev/sdb bs=1G count=10
  • zpool replace -f tank /dev/sdb /dev/sdb
  • zpool scrub tank
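
The durations below were presumably taken from the pool status; progress and, once finished, the total scrub/resilver time can be read with:

# scan progress and completion time ("scrubbed/resilvered ... in HH:MM:SS")
zpool status tank
# timestamped command history, useful to reconstruct start and end times
zpool history tank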

durations

| host | raidset | scrub with little repair (H:M:S) | resilvering a disk filled entirely with zeros (H:M:S) |
| work14a | 2+1 | 20:10:40 | 23:42:40 |
| work14b | 1+1 | 19:15:57 | 22:48:43 |
| work14c | 4+2 | 22:27:23 | 26:20:13 |
| work14d | 3+1 | 19:12:33 | 23:08:16 |
| work14e | 5+1 | 23:44:53 | 27:26:41 |

resilver times under load

read

The pool contains only one raid set. The entire pool was filled with 8 GiB files as described above and subsequently read back with dd (dd if=$i of=/dev/null bs=1M status=noxfer). The read times were measured per file before, during and after resilvering; a sketch of the measurement loop follows below.
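
A minimal sketch of such a per-file timing loop (the wrapper actually used is not shown here, and the fill.* directories refer to the hypothetical fill step above):

# record the elapsed time of each file read
for f in /tank/fill.*/random.*; do
  /usr/bin/time -f "$f %e s" dd if="$f" of=/dev/null bs=1M status=noxfer
done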

| host | raidset | duration (H:M:S) | read performance loss factor |
| work14c | 1+1 | 51:46:56 | 2.2 |
| work14e | 4+2 | 30:58:20 | 36 |

write

Write constantly to 12 files in parallel using fio with the following job file:
; stream write

[global]
name=fio-write
readwrite=write
bs=128K
numjobs=12
time_based=0
runtime=300
directory=/tank/fio
log_avg_msec=5000
log_hist_msec=5000
stats=1

[create]
create_on_open=1
unlink=1
nrfiles=1
filename_format=fio-write-$jobnum-$filenum
filesize=1G
ioengine=libaio
file_append=0
write_iops_log=/root/fio/run/logs
write_bw_log=/root/fio/run/logs
write_lat_log=/root/fio/run/logs
write_hist_log=/root/fio/run/logs
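
Assuming the job file above is saved as, say, stream-write.fio (the file name is not given here), it would be invoked repeatedly so that writes continue for the whole resilver:

mkdir -p /root/fio/run /tank/fio   # log and data directories used by the job file
# keep the 12-job streaming write running while the resilver is in progress
while true; do
  fio stream-write.fio
done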

| host | raidset | duration (H:M:S) | write performance loss factor |
| work14c | 1+1 | 202:10:21 | 1.2 - 1.5 |
| work14d | 4+2 | 112:37:39 | 2.265 |

A similar measurement, but writing only one file at a time:

| host | raidset | duration (H:M:S) | write performance loss factor |
| work14f | 2+1 | 140:22:21 | 1.245 |
| work14b | 3+1 | 159:31:23 | 1.178 |
| work14b | 6+1 | 220:04:12 | |
| work14c | 8+1 | 66:46:43 | 1.148 |
| work14e | 11+1 | 183:43:11 | 3.046 |

-- HenningFehrmann - 18 Dec 2020