Measuring open/close IO performance

Local/NFS performance measured 2009-02-06

[cycles/s]  h1        h2        h4        remarks
local       59282     3586      60301     h1, h2: Areca 4-disk RAID6; h4: Areca 8-disk RAID6
s01         108       3.5       101       via NFS, zpool setup with 3 large vdevs
s13         109       4.1       115       via NFS, zpool setup with 8 small vdevs
d02         832       4.5       631       via NFS, Areca 16-disk RAID6
kernel      2.6.24.4  2.6.27.7  2.6.27.7  kernel version at the time of testing

Measurements were done with the little program attached below (measure_open_close.c), which simply opens and closes the same file over and over again.
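A minimal sketch of such a benchmark is shown here for orientation; the attached measure_open_close.c may differ in details, and the default file path, the cycle count and the use of clock_gettime() are assumptions made for this sketch.

/*
 * Sketch of an open/close benchmark: open and close the same file
 * repeatedly and report how many open/close cycles per second the
 * filesystem sustains.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : "testfile";
    const long cycles = 100000;              /* number of open/close pairs */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < cycles; i++) {
        /* create the file on first use, afterwards just open/close it */
        int fd = open(path, O_RDWR | O_CREAT, 0644);
        if (fd < 0) {
            perror("open");
            return EXIT_FAILURE;
        }
        close(fd);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f open/close cycles per second\n", cycles / elapsed);
    return 0;
}

Compiled with e.g. gcc -O2 -o measure_open_close measure_open_close.c (older glibc versions additionally need -lrt for clock_gettime), the program is run once on the local disk and once on each NFS mount to obtain the numbers in the table above.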

One word of caution concerning h2: this machine was constantly under a high load of 20+ (sometimes briefly up to 100) while the tests were run, which likely distorts its numbers. Why the load was that high is still an open question.

Local IO performance on h4

On the local h4 RAID6 with 8 disks, the open/close program was run several times alongside stress with either the --cpu x or the --vm x option. As soon as x approaches the number of CPU cores in the system, performance drops significantly; a possible way to drive such a combined run is sketched below the figure:

[Figure: openclose.png, open/close performance versus the stress test utility]
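The following driver is only a sketch of one possible setup, not the procedure actually used: it starts stress --cpu x in the background, runs the open/close benchmark while the load is active, and then stops stress again. The benchmark binary name ./measure_open_close and its file argument are assumptions carried over from the sketch above; the stress options are the ones mentioned in the text.

/*
 * Hypothetical driver for the combined test: CPU load via stress,
 * open/close benchmark in parallel.
 */
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    const char *workers = (argc > 1) ? argv[1] : "4";   /* the x in --cpu x */

    pid_t stress_pid = fork();
    if (stress_pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (stress_pid == 0) {
        /* child: generate CPU load with x workers */
        execlp("stress", "stress", "--cpu", workers, (char *)NULL);
        perror("execlp stress");
        _exit(127);
    }

    /* parent: run the benchmark while stress keeps the CPUs busy */
    system("./measure_open_close testfile");

    /* stop the load generator again */
    kill(stress_pid, SIGTERM);
    waitpid(stress_pid, NULL, 0);
    return 0;
}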

-- CarstenAulbert - 06 Feb 2009
Topic attachments
openclose.png (29 K, 06 Feb 2009, CarstenAulbert): Open/close performance versus stress test utility
measure_open_close.c (964 bytes, 06 Feb 2009, CarstenAulbert): Little program opening and closing the same file over and over again