http://sunsolve.sun.com/handbook_private/Devices/Disk/

TEST:
+ time sh -c 'mkfile 32g bla ; sync' ; \
+ time sh -c 'mkfile 32g blabla ; sync' ; \
+ time sh -c 'mkfile 32g blablabla ; sync'
+ chmod 644 b*
+ time dd if=bla of=/dev/null bs=128k ; \
+ time dd if=blabla of=/dev/null bs=128k ; \
+ time dd if=blablabla of=/dev/null bs=128k

on a V490 (4x SPARC IV+, 32 GB RAM)

=============================================================================
# internal disks:
=============================================================================
HITACHI UltraStar 10K300 103014FLF210 (HUS1014FASUN146G)
http://sunsolve.sun.com/data/816/816-7256/pdf/816-7256-10.pdf
http://www.hitachigst.com/tech/techlib.nsf/techdocs/0F10AC11D870802C86256E3F007A9B1F/\$file/HGST_Ultrastar_10K300_DS7.23.pdf

Specs:
  57 MB/s head to buffer, 212.5 MB/s max. IF-Xrate at 2 Gbps
  46.8 - 89.3 MB/s sustained, 68.05 MB/s avg. sustained (calculated)

RAW (1 disk):
-------------
UFS:  write    48.9    46.8    47.8    (19.2%)
      read     84.4    88.1    89.4    (25.0%)

SW-Raid1 (2 disks):
-------------------
UFS:  write    39.3 (15.0%)   39.0 (17.9%)   39.0 (18.2%)
      read     54.0 (14.2%)   54.6 (15.4%)   57.8 (15.9%)

=============================================================================
# 3510 disks:
=============================================================================
SEAGATE Cheetah 10K.7 (ST314670FSUN146G)
http://www.seagate.com/docs/pdf/datasheet/disc/ds_cheetah10k.7.pdf
http://www.seagate.com/staticfiles/support/disc/iguides/fc/100260920a.pdf

Specs:
  212 MB/s max. IF-Xrate at 2 Gbps
  54.5 - 105.5 MB/s sustained (calculated), 80 MB/s avg. sustained

Expectations:
  The FC protocol has an overhead of 20% (8b/10b transfers) plus about 1.75% (protocol).
  So, for a 2 Gbps link the max. payload should be ~196.5 MiB/s.
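As a cross-check of that ~196.5 figure, the arithmetic spelled out (a sketch only;
the 2 Gbps nominal rate and both overhead factors are the ones stated above, and bc
is used merely as a calculator here):

# 2000 Mbit/s * 0.80 (8b/10b)  = 1600 Mbit/s = 200 MB/s raw payload
# 200 MB/s    * (1 - 0.0175)  ~= 196.5 MB/s usable per 2 Gbps host channel
+ echo 'scale=4; 2000 * 0.80 / 8 * (1 - 0.0175)' | bc
#196.5000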
General observations:
- user time per write run ~3.0 s, per read run ~0.3 s (independent of FS and target)
- write: always ~100% busy (%b)
- read: on UFS always ~90..92% busy, on ZFS always ~100% busy

HW-Raid5 (11,1) 1LV/LD:
-----------------------
UFS:  write   137.3 (50.5%)   129.8 (43.1%)   133.8 (49.0%)
      read    154.4 (42.8%)   193.1 (53.5%)   184.9 (50.1%)
ZFS:  write    79.1 (54.0%)   101.8 (39.1%)   101.0 (40.3%)
      read    184.5 (31.8%)   184.9 (28.8%)   184.4 (33.1%)

HW-Raid1 (6x2) 1LV/LD:
----------------------
UFS:  write   123.0 (47.3%)   124.0 (48.5%)   122.7 (50.3%)
      read    191.0 (57.0%)   147.2 (44.5%)   163.9 (48.5%)
ZFS:  write    76.1 (53.2%)   107.3 (42.3%)   107.3 (40.5%)
      read    184.9 (28.8%)   184.2 (29.5%)   184.3 (29.1%)

HW-Raid0 (12) 1LD:
------------------
UFS:  write   167.5 (62.4%)   164.3 (60.7%)   168.6 (59.7%)
      read    104.7 (45.5%)   180.0 (54.0%)   192.5 (55.1%)
ZFS:  write   170.0 (73.6%)   168.0 (65.9%)   168.3 (64.8%)
      read    184.1 (29.7%)   184.6 (29.8%)   184.6 (29.5%)

ZFS Raid1 2way over HW-Raid0 (6) 2LDs (2 Channels):
---------------------------------------------------
ZFS:  write    75.8 (54.6%)   150.5 (60.4%)   149.5 (61.2%)
      read    311.3 (45.2%)   311.7 (45.1%)   318.2 (42.7%)

- the busy rate during the 1st write test stays at only ~30% for almost the whole
  run; after ~27 GB it speeds up and reaches 100% busy with a much higher transfer
  rate (sample report every 5 sec):

#device    r/s     w/s    kr/s      kw/s  wait  actv  svc_t  %w  %b
#ssd0      0.0   327.5     0.0   40603.2   0.0   8.8   26.8   0  26
#ssd1      0.0   327.9     0.0   40603.2   0.0   8.8   26.8   0  26
#ssd0      0.0   536.7     0.0   67849.3   0.0  14.9   27.7   0  43
#ssd1      0.0   536.7     0.0   67849.3   0.0  14.9   27.7   0  43
#ssd0      0.0   936.2     0.0  118398.5   0.0  26.3   28.1   1  76
#ssd1      0.0   936.2     0.0  118398.5   0.0  26.3   28.1   1  76
#ssd0      0.0  1227.9     0.0  155751.1   0.0  34.7   28.2   1 100
#ssd1      0.0  1227.9     0.0  155751.1   0.0  34.7   28.2   1 100

#pool1   26.3G   790G      0    350      0  41.8M
#pool1   26.3G   790G      0    466      0  56.9M
#pool1   26.9G   789G      0    955      0   116M
#pool1   27.8G   788G      0  1.22K      0   152M
#pool1   27.8G   788G      0  1.20K      0   151M
#pool1   29.1G   787G      0  1.23K      0   153M
#pool1   29.1G   787G      0  1.23K  25.5K   155M

#Ch Tgt LUN   ld/lv   ID-Partition   Assigned   Filter Map
#---------------------------------------------------------------------
# 0  40   0   ld0     74457A20-00    Primary
# 1  43   1   ld1     28548240-00    Primary

#Ch  Type   Media   Speed   Width    PID / SID
#--------------------------------------------
# 0  Host   FC(L)   2G      Serial   40 / N/A
# 1  Host   FC(L)   2G      Serial   43 / N/A

#NAME         STATE     READ WRITE CKSUM
#pool1        ONLINE       0     0     0
#  mirror     ONLINE       0     0     0
#    c1t40d0  ONLINE       0     0     0
#    c2t43d1  ONLINE       0     0     0

ZFS Raid0 + 6xHW-Raid0 (2 channels):
------------------------------------
      write   226.5 (88.4%)   229.2 (90.0%)   225.1 (87.9%)
      read    332.7 (48.2%)   340.5 (45.5%)   332.2 (48.6%)

- load average 1.88 .. 2.04

#NAME         STATE     READ WRITE CKSUM
#pool1        ONLINE       0     0     0
#  c1t40d0    ONLINE       0     0     0
#  c1t40d1    ONLINE       0     0     0
#  c1t40d2    ONLINE       0     0     0
#  c2t43d0    ONLINE       0     0     0
#  c2t43d1    ONLINE       0     0     0
#  c2t43d2    ONLINE       0     0     0

ZFS Raid1 + 6xHW-Raid0 (2 channels):
------------------------------------
      write    75.7 (54.3%)   149.7 (59.5%)   149.2 (59.8%)
      read    281.4 (38.1%)   279.4 (40.1%)   281.1 (40.2%)

- load average ~1.4 .. 1.7

#NAME         STATE     READ WRITE CKSUM
#pool1        ONLINE       0     0     0
#  mirror     ONLINE       0     0     0
#    c1t40d0  ONLINE       0     0     0
#    c2t43d0  ONLINE       0     0     0
#  mirror     ONLINE       0     0     0
#    c1t40d1  ONLINE       0     0     0
#    c2t43d1  ONLINE       0     0     0
#  mirror     ONLINE       0     0     0
#    c1t40d2  ONLINE       0     0     0
#    c2t43d2  ONLINE       0     0     0

ZFS Raid0 + 6xHW-Raid1 (2 channels):
------------------------------------
      write   141.7 (55.7%)   116.6 (46.9%)   114.9 (46.7%)
      read    294.8 (42.8%)   303.1 (43.7%)   310.0 (41.0%)

- load average 1.1 .. 1.4, but awful service times: usually ~250 ms, up to 350 ms!

1xHW-Raid1 (5x2) + 2 GS + MPxIO (2 channels):
---------------------------------------------
ZFS (disk):                                                   svc_t   actv
      write   133.7 (92.3%)   186.7 (70.0%)   183.8 (67.5%)    ~23     ~35
      read    314.4 (45.8%)   314.1 (45.7%)   316.2 (42.2%)    ~13.5   ~35

- during the slow start of the write test: svc_t ~5, actv ~5, and the disks are
  only ~52% busy
- load average ~1.5 .. 1.8
- seems to be the best one can get w.r.t. controller redundancy and HDD failures
- what happens with ZFS if a mirror gets replaced with a double-sized mirror, or
  the 2 GS (global spares) are added as a 6th mirror pair? Nothing, the pool size
  stays as is.

UFS:                                                          svc_t   actv
      write   172.1 (76.6%)   184.9 (68.5%)   183.9 (69.0%)    ~50     ~13
      read    220.6 (66.0%)   178.6 (52.6%)   197.7 (59.8%)     ~8     ~1.5

- load average ~0.6 .. 0.7
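For reference, a sketch of how the 3x2 striped-mirror pool shown in the
"ZFS Raid1 + 6xHW-Raid0" zpool status above could be recreated; pool and device
names are taken from that listing, and this is not necessarily the exact command
sequence used during the tests:

+ zpool create pool1 \
      mirror c1t40d0 c2t43d0 \
      mirror c1t40d1 c2t43d1 \
      mirror c1t40d2 c2t43d2
+ zpool status pool1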