Possible zpool configurations
In the standard setup there are two system disks (on either controller c5 or c6, depending on whether one installs the box via a USB device - can this be fixed?), leaving 46 disks for storage.
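A quick way to check where the system disks ended up (assuming a standard Solaris install with the root file system on a local disk slice):
# the c?t?d?s? part of the root device path names the controller
df -h /
# alternatively, list every disk the box sees (format stops at its prompt)
echo | format
We have played around with a few possible set-ups: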
raidz1
|    | c0 | c1 | c4 | c5 | c6 | c7 |
| t7 |  8 |  8 |  8 |  8 |  8 |  8 |
| t6 |  7 |  7 |  7 |  7 |  7 |  7 |
| t5 |  6 |  6 |  6 |  6 |  6 |  6 |
| t4 |  5 |  5 |  5 |  S |  5 |  5 |
| t3 |  4 |  4 |  4 |  4 |  4 |  4 |
| t2 |  3 |  3 |  3 |  3 |  3 |  3 |
| t1 |  2 |  2 |  2 |  2 |  2 |  2 |
| t0 |  1 |  1 |  1 |  S |  1 |  1 |
(the number tells which raidz1 vdev a disk belongs to, S = system disk)
Command to create this:
ZPOOL=atlashome
zpool create -f $ZPOOL raidz1 c{0,1,4,6,7}t0d0 \
raidz1 c{0,1,4,5,6,7}t1d0 \
raidz1 c{0,1,4,5,6,7}t2d0 \
raidz1 c{0,1,4,5,6,7}t3d0 \
raidz1 c{0,1,4,6,7}t4d0 \
raidz1 c{0,1,4,5,6,7}t5d0 \
raidz1 c{0,1,4,5,6,7}t6d0 \
raidz1 c{0,1,4,5,6,7}t7d0
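The resulting layout can be double-checked with the standard status command:
# all eight raidz1 vdevs should show up and be ONLINE
zpool status $ZPOOL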
Results of writing ~100 GB of zeros (dd if=/dev/zero of=/atlashome/zero/z bs=1024k count=100000) while watching zpool iostat 5:
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
atlashome 2.30G 20.8T 0 3.09K 0 387M
atlashome 4.82G 20.8T 0 3.38K 0 424M
atlashome 7.29G 20.8T 0 3.33K 0 417M
atlashome 9.68G 20.8T 0 3.21K 0 402M
atlashome 12.0G 20.8T 0 3.18K 0 399M
atlashome 14.4G 20.8T 0 3.22K 0 403M
atlashome 17.4G 20.8T 0 3.98K 0 500M
atlashome 20.0G 20.8T 0 3.55K 0 445M
atlashome 23.1G 20.8T 0 4.14K 0 520M
atlashome 26.2G 20.8T 0 4.12K 0 518M
atlashome 29.3G 20.8T 0 4.19K 0 526M
atlashome 32.4G 20.8T 0 4.14K 0 519M
atlashome 35.2G 20.8T 0 3.79K 0 475M
atlashome 38.2G 20.8T 0 3.99K 0 501M
atlashome 41.4G 20.8T 0 4.32K 0 542M
atlashome 44.6G 20.8T 0 4.33K 0 544M
atlashome 47.7G 20.8T 0 4.11K 0 516M
atlashome 50.5G 20.8T 0 3.72K 0 467M
atlashome 53.4G 20.8T 0 3.92K 0 492M
atlashome 56.3G 20.8T 0 3.85K 0 483M
atlashome 59.6G 20.8T 0 4.48K 0 563M
atlashome 62.6G 20.8T 0 3.97K 0 498M
atlashome 65.3G 20.8T 0 3.64K 0 457M
This set-up yields no hot spare drives, good redundancy against controller failure (each raidz1 vdev contains at most one disk per controller, so a dead controller degrades but does not break any vdev) and a net storage of 6 * 5 + 2 * 4 = 38 drives or 19 TB with 500 GB drives.
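Note that zpool iostat and zpool list report the raw pool size including parity (hence the 20.8T above), while zfs list shows the usable space, so the 19 TB figure can be cross-checked with:
zpool list $ZPOOL   # raw capacity: all 46 disks
zfs list $ZPOOL     # usable capacity: roughly 38 disks' worth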
raidz2
|    | c0 | c1 | c4 | c5 | c6 | c7 |
| t7 |  4 |  4 |  4 |  4 |  4 |  4 |
| t6 |  4 |  4 |  4 |  4 |  H |  4 |
| t5 |  3 |  3 |  3 |  3 |  3 |  3 |
| t4 |  3 |  3 |  3 |  S |  3 |  3 |
| t3 |  2 |  2 |  2 |  2 |  2 |  2 |
| t2 |  2 |  H |  2 |  2 |  2 |  2 |
| t1 |  1 |  1 |  1 |  1 |  1 |  1 |
| t0 |  1 |  1 |  1 |  S |  1 |  1 |
(the number tells which raidz2 vdev a disk belongs to, S = system disk, H = hot spare)
Command to create this:
ZPOOL=atlashome
# note: c1t2d0 is kept out of the t2 row - it is one of the hot spares
zpool create -f $ZPOOL raidz2 c{0,1,4,6,7}t0d0 c{5,0,1,4,6,7}t1d0 \
raidz2 c{5,0,4,6,7}t2d0 c{5,0,1,4,6,7}t3d0 \
raidz2 c{0,1,4,6,7}t4d0 c{5,0,1,4,6,7}t5d0 \
raidz2 c{5,0,1,4,7}t6d0 c{5,0,1,4,6,7}t7d0 \
spare c1t2d0 c6t6d0
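A hot spare can also be pulled in by hand; a minimal sketch (c0t3d0 stands in for whichever disk has failed):
# resilver the failed disk's data onto the spare c1t2d0
zpool replace $ZPOOL c0t3d0 c1t2d0
# once resilvering is complete, detach the dead disk; the spare then
# becomes a permanent member of the raidz2 group
zpool detach $ZPOOL c0t3d0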
Results of the same dd write test, again watching zpool iostat 5:
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
atlashome 315M 20.0T 3 412 4.19K 50.5M
atlashome 5.13G 20.0T 0 3.80K 0 477M
atlashome 8.84G 20.0T 0 3.73K 0 469M
atlashome 12.3G 20.0T 0 4.13K 0 518M
atlashome 15.6G 20.0T 0 4.16K 0 522M
atlashome 18.8G 20.0T 0 4.52K 0 567M
atlashome 18.8G 20.0T 0 3.82K 0 482M
atlashome 24.6G 20.0T 0 3.99K 0 497M
atlashome 24.6G 20.0T 0 4.41K 0 556M
atlashome 29.1G 20.0T 0 4.22K 0 529M
atlashome 33.6G 20.0T 0 3.88K 0 482M
atlashome 33.6G 20.0T 0 3.93K 0 495M
atlashome 37.7G 20.0T 0 3.88K 0 486M
atlashome 41.1G 20.0T 0 4.11K 0 516M
atlashome 44.7G 20.0T 0 4.03K 0 506M
atlashome 48.4G 20.0T 0 3.74K 0 470M
atlashome 51.3G 19.9T 0 4.38K 0 550M
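This set-up yields two hot spare drives, tolerates two arbitrary disk failures per raidz2 group and a net storage of 4 * 9 = 36 drives or 18 TB with 500 GB drives.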
Our current setup (raidz2)
|    | c0 | c1 | c4 | c5 | c6 | c7 |
| t7 |  3 |  3 |  3 |  3 |  3 |  3 |
| t6 |  3 |  3 |  3 |  3 |  3 |  3 |
| t5 |  2 |  2 |  3 |  3 |  3 |  3 |
| t4 |  2 |  2 |  2 |  S |  2 |  2 |
| t3 |  2 |  2 |  2 |  2 |  2 |  2 |
| t2 |  1 |  1 |  1 |  1 |  2 |  2 |
| t1 |  1 |  1 |  1 |  1 |  1 |  1 |
| t0 |  1 |  1 |  1 |  S |  1 |  1 |
(the number tells which raidz2 vdev a disk belongs to, S = system disk)
Command to create this:
#!/bin/bash
ZPOOL=atlashome
zpool create -f $ZPOOL raidz2 c{0,1,4,6,7}t0d0 c{0,1,4,5,6,7}t1d0 c{0,1,4,5}t2d0 \
raidz2 c{6,7}t2d0 c{0,1,4,5,6,7}t3d0 c{0,1,4,6,7}t4d0 c{0,1}t5d0 \
raidz2 c{4,5,6,7}t5d0 c{0,1,5,4,6,7}t6d0 c{0,1,4,5,6,7}t7d0
This will create three raidz2 vdevs which the pool stripes across. It uses all 46 remaining disks for storage (40 data + 6 parity), giving a net storage of 40 * 500 GB = 20 TB.
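To see how the writes are spread over the three raidz2 vdevs one can use the per-vdev view of iostat:
# show read/write statistics broken down by vdev and disk
zpool iostat -v atlashome 5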
ALWAYS reserve space
# create a small ZFS file system with a reservation; this helps to keep
# ZFS working once the pool has filled up
zfs create -o reservation=10M $ZPOOL/badtimes
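The reservation pays off once the pool runs completely full: on a copy-on-write file system even deleting a file needs a little free space for the new metadata. Releasing the reservation provides that head room:
# emergency use only: free the reserved space so deletes work again
zfs set reservation=none $ZPOOL/badtimes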
--
CarstenAulbert - 12 Jun 2008