Ian Collins
2013-08-11 22:23:05 UTC
I've been looking at the performance of some ZFS storage used by
VMware guests (all via NFS) that has a reputation for being
"slow". I've noticed a significant number of small (look like 4K)
writes going straight to disk.
The pool comprises a stripe of ten two-way mirrors plus two cache
and two log devices.
A typical pool drive in iostat:

   r/s   w/s  Mr/s  Mw/s wait actv wsvc_t asvc_t  %w  %b device
   0.1  36.3   0.0   0.5  0.0  0.2    0.0    6.6   0   4 c4t5000C50014CCB518d0
The log devices:

   8.2   0.0   0.0   0.0  0.0  0.0    0.0    0.3   0   0 c3t1d0
One mirror from zpool iostat:

                               capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
mirror                        731G   197G      7    192   688K   685K
  c4t5000C50014CCAAB7d0          -      -      3     31   302K   685K
  c4t5000C50014CCB518d0          -      -      4     32   386K   685K
As you can see, there is a high ratio of write operations to write
bandwidth. I wouldn't expect to see this with ZFS unless the writes
were synchronous, and if they were, I would expect most of the
IOPS to go to the log devices, which isn't the case.
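To put numbers on that ratio, here is a quick back-of-the-envelope
calculation using only the figures from the iostat and zpool iostat
samples above (the values are copied from those outputs, nothing else
is assumed):

```python
# Average write size implied by the iostat sample for one pool drive
w_per_s = 36.3                        # w/s column
mw_per_s = 0.5                        # Mw/s column (MB/s written)
print(f"per-disk avg write: {mw_per_s * 1024 / w_per_s:.1f} KB")  # ~14.1 KB

# Same calculation for the mirror vdev in the zpool iostat sample
mirror_write_ops = 192                # write operations column
mirror_write_kb = 685                 # write bandwidth column, in KB
print(f"mirror avg write: {mirror_write_kb / mirror_write_ops:.1f} KB")  # ~3.6 KB
```

So the mirror vdev is averaging roughly 3-4 KB per write operation,
consistent with the small ~4K writes I'm seeing hit the disks.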
Any ideas?
--
Ian
-------------------------------------------
illumos-zfs
Archives: https://www.listbox.com/member/archive/182191/=now
RSS Feed: https://www.listbox.com/member/archive/rss/182191/23047029-187a0c8d
Modify Your Subscription: https://www.listbox.com/member/?member_id=23047029&id_secret=23047029-2e85923f
Powered by Listbox: http://www.listbox.com