Simple ZFS ZVol testing

creating baseline

Create a simple test data set in RAM (30 files of 1000 MiB each; incompressible pseudo-random data, generated quickly via AES-CTR over /dev/zero):
mkdir -p /dev/shm/data
for i in $(seq -w 30); do dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=/dev/shm/data/random.$i bs=1M count=1000 iflag=fullblock; done
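
A quick sanity check that the data set is complete (sizes as set up above):

du -sh /dev/shm/data          # expect roughly 30G on tmpfs
ls /dev/shm/data | wc -l      # expect 30 files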

Check speed with pv and mbuffer:
pv /dev/shm/data/random.* > /dev/null  # 2.94GiB/s
cat /dev/shm/data/random.* | mbuffer > /dev/null
in @ 2022 MiB/s, out @ 2022 MiB/s, 28.7 GiB total, buffer   0% full
summary: 29.3 GiByte in 15.5sec - average of 1933 MiB/s
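
For more stable baseline numbers one could average over a few runs (a sketch; pv -a prints the average rate per pass):

for run in 1 2 3; do pv -a /dev/shm/data/random.* > /dev/null; done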

single HDD

pv /dev/shm/data/random.* > /dev/sdc showed 77.9 MiB/s, but this is an instantaneous reading; it needs a re-run with pv -a to get the average rate.

cat /dev/shm/data/random.0* | mbuffer > /dev/sdd
in @  0.0 KiB/s, out @  0.0 KiB/s, 9000 MiB total, buffer   0% full
summary: 9000 MiByte in 43.9sec - average of  205 MiB/s

single NVMe

pv /dev/shm/data/random.0* > /dev/nvme0n1 showed 146 MiB/s, but this is an instantaneous reading; it needs a re-run with pv -a to get the average rate.

cat /dev/shm/data/random.0* | mbuffer > /dev/nvme0n1
in @  0.0 KiB/s, out @  0.0 KiB/s, 9000 MiB total, buffer   0% full
summary: 9000 MiByte in 19.6sec - average of  458 MiB/s
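
For reference, the corrected raw-device pv runs would use -a to report the average instead of the momentary rate (a sketch, same devices as above; note this overwrites the devices' contents):

pv -a /dev/shm/data/random.0* > /dev/sdc        # HDD
pv -a /dev/shm/data/random.0* > /dev/nvme0n1    # NVMe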

simple mirrored zpool

file system

zpool create -f data mirror sdc sdd mirror sde sdf mirror sdg sdh mirror sdi sdj mirror sdk sdl mirror sdm sdn
pv -a /dev/shm/data/random.* > /data/test
[ 401MiB/s]
zpool destroy data
zpool create -f data mirror sdc sdd mirror sde sdf mirror sdg sdh mirror sdi sdj mirror sdk sdl mirror sdm sdn
cat /dev/shm/data/random.* | mbuffer > /data/test
summary: 29.3 GiByte in  1min 05.8sec - average of  456 MiB/s
zpool destroy data
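
Between runs, the layout of each freshly created pool can be double-checked with the standard zpool commands:

zpool status data
zpool list -v data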

sparse zvol

zpool create -f -m none data mirror sdc sdd mirror sde sdf mirror sdg sdh mirror sdi sdj mirror sdk sdl mirror sdm sdn
zfs create -s -V 1000G data/zvol
# run test
# cat /dev/shm/data/random.* | mbuffer > /dev/zvol/data/zvol
# or
# pv -a /dev/shm/data/random.* > /dev/zvol/data/zvol
zpool destroy data
mbuffer: 412 MiB/s, pv: 383 MiB/s
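
Before the zpool destroy, the sparseness can be confirmed by comparing the nominal size with the space actually allocated (standard zfs properties):

zfs get volsize,referenced,used data/zvol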

standard zvol

zpool create -f -m none data mirror sdc sdd mirror sde sdf mirror sdg sdh mirror sdi sdj mirror sdk sdl mirror sdm sdn
zfs create -V 1000G data/zvol
# run test
# cat /dev/shm/data/random.* | mbuffer > /dev/zvol/data/zvol
# or
# pv -a /dev/shm/data/random.* > /dev/zvol/data/zvol
zpool destroy data
mbuffer: 415 MiB/s, pv: 388 MiB/s
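
The only difference from the sparse case is the reservation: a standard zvol reserves roughly its full volsize up front, which can be checked the same way before destroying the pool:

zfs get volsize,refreservation data/zvol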

cross-validation: is RAM caching still skewing the results?

Looks OK; e.g. streaming four copies of the data set (~117 GiB total) gives essentially the same rate:

zpool create -f -m none data mirror sdc sdd mirror sde sdf mirror sdg sdh mirror sdi sdj mirror sdk sdl mirror sdm sdn
zfs create -s -V 1000G data/zvol
cat /dev/shm/data/random.* /dev/shm/data/random.* /dev/shm/data/random.* /dev/shm/data/random.* | mbuffer > /dev/zvol/data/zvol
in @  0.0 KiB/s, out @  0.0 KiB/s,  117 GiB total, buffer   0% full
summary:  117 GiByte in  4min 44.9sec - average of  421 MiB/s
zpool destroy data
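
If RAM caching were still suspected, the ARC could additionally be capped for the tests (a sketch, assuming Linux OpenZFS; the 4 GiB value is an arbitrary example):

echo $((4 * 1024**3)) > /sys/module/zfs/parameters/zfs_arc_max   # cap ARC at 4 GiB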

raidz2

Reducing the output from here on: the pool is set up with zpool create -f data raidz2 sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn (or with -m none as before), the zvols as above. Only mbuffer is used for testing, since the pv and mbuffer numbers are essentially the same while mbuffer seems to stress the disks more (at least as seen by running iostat and/or dstat; see the examples below).
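
The exact monitoring invocations were not recorded; something along these lines shows the per-disk load during a run:

iostat -mx 2
# or
dstat -d -D sdc,sdd,sde,sdf,sdg,sdh,sdi,sdj,sdk,sdl,sdm,sdn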

Results:
  • normal FS: 480 MiB/s
  • sparse zvol: 408 MiB/s
  • standard zvol: 411 MiB/s

raidz2 sparse zvols with different block/ashift sizes

All results in MiB/s:

ashift \ volblocksize |   8k |  16k |  32k |  64k | 128k | 256k | 512k | 1024k
                    9 |  405 |  492 |  496 |  539 |  567 |  510 |  559 |   606
                   10 |  326 |  499 |  487 |  536 |  566 |  534 |  553 |   597
                   11 |  430 |  486 |  498 |  551 |  552 |  523 |  562 |  81.2
                   12 |  421 |  489 |  503 |  542 |  564 |  531 |  567 |   586
                   13 |  419 |  450 | 56.0 |  542 |  564 |  521 |  575 |   593

The outliers (81.2 and 56.0 MiB/s) suggest single runs are not reliable; we need better statistics, i.e. repeated measurements.
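
For completeness, the sweep itself can be scripted along these lines (a sketch; assumes the data set and disk layout from above, one mbuffer run per combination):

for ashift in 9 10 11 12 13; do
  for vbs in 8k 16k 32k 64k 128k 256k 512k 1024k; do
    # fresh raidz2 pool with the requested sector size
    zpool create -f -m none -o ashift=$ashift data raidz2 sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn
    # sparse zvol with the requested block size
    zfs create -s -V 1000G -o volblocksize=$vbs data/zvol
    udevadm settle                    # wait for /dev/zvol/data/zvol to appear
    echo "ashift=$ashift volblocksize=$vbs"
    cat /dev/shm/data/random.* | mbuffer > /dev/zvol/data/zvol
    zpool destroy data
  done
done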

results

baseline test

  • good write performance without much reading from the HDDs until the pool was 96% full
  • then all files were deleted
  • from then on, write performance dropped by more than 50% (from 240 MB/s to 110 MB/s and worse)
  • for every six write operations on the disks, one read operation was required
  • the zd0 device was 100% utilized (see the iostat sketch below)
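
Such a read-for-write pattern is directly visible on the zvol's block device (a sketch; zd0 as above):

iostat -x zd0 2    # r/s vs. w/s gives the read-per-write ratio, %util the saturation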

-- CarstenAulbert - 21 Jun 2017