Greg Zartman
2013-11-16 07:16:02 UTC
I am using SmartOS, which as you know is based on illumos.
The SmartOS installer has you create a zpool right off the bat based on the
available disks in the system. My goal is to set up a 6x2TB raidz1 pool as
my primary pool. Long story short, 5 of the drives are currently
unavailable until I move some data around.
So, my plan is to start with a 6x1TB array, move my data, then replace the
1TB drives with the 2TB Red Drives.
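For what it's worth, my plan for the swap itself is roughly the following, one disk at a time (the pool name "zones" is what SmartOS creates; the device names here are made up, not from my actual system):

```shell
# Hypothetical swap of one 1TB member for a 2TB Red drive. Repeat per disk,
# waiting for each resilver to finish before pulling the next drive:
zpool replace zones c1t2d0 c1t7d0   # old 1TB device -> new 2TB device
zpool status zones                  # watch until the resilver completes

# Once all six members are 2TB, let the vdev grow to the new size:
zpool set autoexpand=on zones
```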
As I started down this road, I found that the SmartOS installer (based on
illumos) sets the ashift of the raidz vdev to 9 because the 1TB drives have
512-byte sectors. My Red Drives have 4K sectors, so I need my raidz vdev to
have an ashift of 12. After much research, I felt the easiest way to do
this with SmartOS was to put one 2TB Red drive in my initial pool to force
the vdev to an ashift of 12: raidz = 5x1TB + 1x2TB. At first glance,
this seemed to work. SmartOS created the vdev with an ashift of 12 and
seemed to treat the 2TB drive as if it were just a 1TB drive. The end result
was a zpool of about 5TB (what we would expect).
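In case it helps, this is roughly how one can check what ashift the vdev actually got (pool name "zones" assumed; output omitted since I don't have it in front of me):

```shell
# Dump the cached pool configuration and look for the vdev's ashift value:
zdb -C zones | grep ashift
# An ashift of 12 would mean 4K-aligned allocations (2^12 = 4096 bytes);
# an ashift of 9 would mean 512-byte allocations (2^9 = 512 bytes).
```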
However, after working to move 3.5TB of my data to the zpool, it seems to
think it's full after moving only about 2TB of data. Some of the mount
points become non-writable, reporting the disk as full. I know this can't be the case
because I am positive I've only moved about 2TB of data and my zpool status
is showing plenty of space:
[***@SmartOS ~]# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   CAP  DEDUP  HEALTH  ALTROOT
zones  5.44T  1.65T  3.78T         -   30%  1.00x  ONLINE  -
[***@SmartOS ~]#
Zpool status is showing the pool/vdev as healthy and all disks as online.
My questions are these:
Is the 2TB drive somehow corrupting the raidz vdev, with ZFS not noticing
until it tries to write past a certain point in the vdev?
Do I need to wait a certain amount of time after initially creating the
raidz vdev, letting the vdev get fully resilvered, before filling up about
75% of the new pool?
Any other possibilities why my vdev is basically keeling over?
Many thanks in advance!
--
Greg J. Zartman
-------------------------------------------
illumos-zfs
Archives: https://www.listbox.com/member/archive/182191/=now
RSS Feed: https://www.listbox.com/member/archive/rss/182191/23047029-187a0c8d
Modify Your Subscription: https://www.listbox.com/member/?member_id=23047029&id_secret=23047029-2e85923f
Powered by Listbox: http://www.listbox.com