Discussion:
Do I need to wait for the zpool to resilver?
Greg Zartman
2013-11-16 07:16:02 UTC
I am using SmartOS, which as you know is based on illumos.

The SmartOS installer has you create a zpool right off the bat based on the
available disks in the system. My goal is to set up a 6x2TB raidz1 pool as
my primary pool. Long story short, 5 of the drives are currently
unavailable until I move some data around.

So, my plan is to start with a 6x1TB array, move my data, then replace the
1TB drives with the 2TB Red Drives.

As I started down this road, I found that the SmartOS installer (based on
illumos) sets the ashift of the raidz vdev to 9 because the 1TB drives have
512 byte blocks. My Red Drives have 4K blocks, so I need my raidz vdev to
have an ashift of 12. After much research, I felt the easiest way to do
this with SmartOS was to put one 2TB Red Drive in my initial pool to force
the vdev to an ashift of 12: raidz = 5x1TB + 1x2TB. At first glance,
this seemed to work. SmartOS created the vdev with an ashift of 12 and
seemed to treat the 2TB drive as if it were just a 1TB drive. The end result
was a zpool of about 5TB (what we would expect).
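
For anyone who wants to double-check this, the per-vdev ashift can be read
from the cached pool configuration with zdb; a rough sketch, using the
"zones" pool name the SmartOS installer creates:

# dump the cached config for the pool; each top-level vdev entry includes
# its ashift value (12 = 4K-aligned allocations, 9 = 512-byte allocations)
zdb -C zones | grep ashift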

However, after working to move 3.5TB of my data to the zpool, it seems to
think it's filled up after moving only about 2TB of data. Some of the mount
points have become non-writable, reporting disk full. I know this can't be
the case because I am positive I've only moved about 2TB of data, and my
zpool list output is showing plenty of space:

[***@SmartOS ~]# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
zones  5.44T  1.65T  3.78T         -  30%  1.00x  ONLINE  -
[***@SmartOS ~]#

Zpool status is showing the pool/vdev as healthy and all disks as online.
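
Since individual mounts report full while the pool itself shows free space,
the dataset-level accounting is also worth a look; a sketch (the datasets
under "zones" will be whatever the installer and I have created):

# per-dataset space breakdown: available space, space used by data,
# snapshots, children, and any refreservation
zfs list -r -o space zones
# properties that can make a dataset look "full" before the pool is
zfs get -r quota,refquota,reservation,refreservation zones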

My questions are these:

Is the 2TB drive somehow corrupting the raidz vdev, but zfs doesn't know
it until it tries to write past a certain point in the vdev?

Do I need to wait for a certain amount of time after initially creating the
raidz vdev, letting the vdev get fully resilvered, before filling up about
75% of the new pool?

Any other possibilities as to why my vdev is basically keeling over?

Many thanks in advance!
--
Greg J. Zartman



Nigel W
2013-11-17 02:39:35 UTC
Hello Greg,

Responses inline...
Post by Greg Zartman
[...]
Is the 2TB drive somehow corrupting the raidz vdev, but zfs doesn't know
it until it tries to write past a certain point in the vdev?
That is possible. It depends on how you added the 4k sector disk to the
pool.

I do not see anything in your description above about recreating the pool
with the 4k disk in it, though you did say that you checked that the vdev
now has an ashift of 12. Having said that, what you are describing sounds
to me like you only did a zpool replace to get the 4k drive into the vdev,
which would have left the vdev with an ashift of 9, rather than creating
the pool from scratch as you need to do for your plan to work.
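
To make the difference concrete, here is a rough sketch of the two paths
(the device names like c0t5d0 and c0t6d0 and the example pool name "tank"
are made up; on SmartOS the "zones" pool is normally built by the installer
rather than by hand):

# replacing a member of an existing vdev keeps that vdev's original ashift
zpool replace zones c0t5d0 c0t6d0

# only creating the raidz vdev from scratch with a 4k-sector disk present
# gets an ashift of 12, since ashift is fixed when the vdev is created
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t6d0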

What would be very helpful at this point is the output of:

# 1)
zpool status zones

# and 2)
zpool history zones
Post by Greg Zartman
Do I need to wait for a certain amount of time after initially creating
the raidz vdev, letting the vdev get fully resilvered, before filling up
about 75% of the new pool?
No. zpool status will tell you if any resilvering is being done. zfs does
not 'format' or otherwise prepare a disk to be a member of a pool beyond
writing headers and creating partition tables to help protect itself from
inadvertent erasure by OSes that do not understand zfs.
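
If you want to see exactly what zfs has put on a disk, the labels themselves
can be dumped (the device path here is just an example of a whole-disk
member):

# print the four ZFS labels written near the start and end of the device
zdb -l /dev/dsk/c0t0d0s0
# any resilver in progress or recently completed shows up on the "scan:"
# line of zpool status
zpool status zones
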
Post by Greg Zartman
Any other possibilities as to why my vdev is basically keeling over?
Apart from what I speculate above, what you are describing should work just
fine. I have successfully done something very similar to deal with ashift
issues in the past.

Thanks,
Nigel



Greg Zartman
2013-11-17 05:08:14 UTC
Post by Nigel W
Apart from what I speculate above, what you are describing should work
just fine. I have successfully done something very similar to deal with
ashift issues in the past.
I figured out my issue. I am running SmartOS on a raidz pool. When I
started putting data on the zpool, I inadvertently filled up the pool
because I didn't take into account parity for the raidz. I'd allocated 4TB
of disk storage, but I only had about 3.8TB available once raidz parity is
taken into account.

I'm somewhat new to zfs, having mostly used Linux software RAID, where
parity is hidden and what the filesystem reports is net storage/usage
(i.e., gross less parity).

Lesson learned....

Greg
--
Greg J. Zartman
Board Member

Koozali Foundation, Inc.
2755 19th Street SE
Salem, Oregon 97302
Cell: 541-5218449

SME Server user and community member since 2000


