Discussion:
Moving half+ of zpool data to new zpool
Harry Putnam
2014-03-22 00:10:38 UTC
Permalink
Setup:
Running OpenIndiana build 151a8 as a VirtualBox guest on Win7 64-bit

I'll admit straight away that I'm taking baby steps here... and am
thoroughly confused about send / recv

I want to move several zfs filesystems to a new zpool on bigger hdds.
The hdds are on the same host but reside on a usb3-connected external
drive.

I've managed to get the drives set up as a mirrored pool OK within
VBox, so they appear to the OS like any other drive.


The subject fs looks like this: ( '_' <underscore> indicates where the
.zfs directories are - there are 4)

/rmh_/
/rmh_/m2_/
/rmh_/reader_/
/rmh_/reader_/pub_/

I want to move that with all the snaps in place when through.

I will narrow the snapshots down to 5 or so in each '.zfs' directory.

After stumbling thru quite a lot of material at:
Oracle Solaris ZFS Administration Guide
and lots of other parts at:
docs.oracle.com

I'm pretty confused... they make it sound like I need to do something
called 'Remote Replication'

But this is really very small time stuff. The whole thing is only 73
GB. And the part I want to send is really only 71.3 GB. So, can't
this be done with send/receive and if so, must I use snapshots?

root # zfs list -r p2
NAME               USED   AVAIL  REFER  MOUNTPOINT
p2                 72.5G  9.72G    31K  /p2
p2/newtrubl         871M  9.72G   865M  /newtrubl
p2/oldOS            314M  9.72G   310M  /oldOS
p2/rmh             71.3G  9.72G   177K  /rmh
p2/rmh/m2          61.6G  9.72G  47.0G  /rmh/m2
p2/rmh/reader      9.68G  9.72G  2.91G  /rmh/reader
p2/rmh/reader/pub  6.77G  9.72G  3.85G  /rmh/reader/pub

Is send / recv incapable of sending several nested zfs filesystems at once?

A brief description of how to proceed, from someone who does this stuff
for a living, would be very much appreciated.

One last thing... would I be better off rsyncing, and if so, can I
expect the fs to respond once moved, as it normally does?
Richard Laager
2014-03-22 22:11:58 UTC
Permalink
As always, make backups you trust before changing anything. But you
should be doing that anyway.

You're moving from ZFS to ZFS? If so, yes, use zfs send | zfs recv. That
*requires* working from a snapshot; that's just how zfs send works.

If I understand you correctly, you want to move p2/rmh to the new pool?

If so, you want something like this:
zfs snapshot -r p2/***@replicate
zfs send -R p2/***@replicate | zfs recv NEWPOOL/rmh

The -R flag will cause it to copy over all of the child datasets and all
of the existing snapshots for p2/rmh and all the child datasets.

I didn't test that, but if that's not exactly what you want, it should
get you really close.
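[Editor's note: the archive has obscured the snapshot names in the
commands above. Assuming, from the rest of the thread, that the dataset
is p2/rmh and the snapshot is named @replicate, the step can be
sketched as a small dry-run script that only prints the commands, so
nothing destructive happens while you review it:]

```shell
# Dry-run sketch of the replication step. Dataset names (p2/rmh,
# NEWPOOL/rmh) and the snapshot name @replicate are assumptions from
# the thread, not anything verified here.
replicate() {
    src_snap=$1   # e.g. p2/rmh@replicate
    dest=$2       # e.g. NEWPOOL/rmh
    # Print instead of execute; drop the echo wrappers when ready.
    echo "+ zfs snapshot -r $src_snap"
    echo "+ zfs send -R $src_snap | zfs recv $dest"
}

replicate p2/rmh@replicate NEWPOOL/rmh
```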

Once it's replicated, you'll need to sort out the mountpoint situation.
Here's how I'd proceed:

1) Unmount the old datasets and make sure they don't remount by running
(for each dataset): zfs set canmount=off p2/rmh
2) Verify they're actually unmounted, no files show up, etc.
3) Set the mountpoint property on the new dataset tree:
zfs set mountpoint=/rmh NEWPOOL/rmh
4) Verify it's mounted and where you want it.

Then wait some time (days?) depending on your needs; once you're 100%
comfortable that everything is still working, then delete old datasets.
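[Editor's note: steps 1-4 above can be sketched the same dry-run way.
Dataset names are assumptions; note that the canmount property takes
on/off/noauto, so 'off' is the value that keeps a dataset unmounted:]

```shell
# Dry-run sketch of the mountpoint handover; prints, does not execute.
# Dataset and pool names are assumptions from the thread.
handover() {
    echo "+ zfs set canmount=off p2/rmh"          # 1) keep old tree unmounted
    echo "+ zfs set canmount=off p2/rmh/m2"
    echo "+ zfs mount"                            # 2) verify nothing old is mounted
    echo "+ zfs set mountpoint=/rmh NEWPOOL/rmh"  # 3) move the new tree into place
    echo "+ zfs list -o name,mounted,mountpoint -r NEWPOOL/rmh"  # 4) verify
}

handover
```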

--
Harry Putnam
2014-03-23 20:38:19 UTC
Permalink
Post by Richard Laager
As always, make backups you trust before changing anything. But you
should be doing that anyway.
You're moving from ZFS to ZFS? If so, yes, use zfs send | zfs recv. That
*requires* working from a snapshot; that's just how zfs send works.
If I understand you correctly, you want to move p2/rmh to the new pool?
This is very helpful stuff... however the premise you're basing it on
is off just a little.

I've probably made a complete botch of conveying what I'm trying to do
so I'll try to fill in a bit more:

p2/rmh/m2 is what I want to move... and eventually delete the
original. It is two zfs filesystems p2/rmh and p2/rmh/m2

There is also p2/rmh/reader/pub which is 3 zfs filesystems.

I don't want to fool with the latter.

[aside: afterthought: in light of your post... I'm thinking it
would be much simpler to go ahead and move it all, then make any
adjustments to what goes where after. Probably just remove the whole
mess from p2 and use that pool for something else that isn't likely
to outgrow its disc space]

What follows is now out of sync considering the aside above, but I'd
still like to hear your thinking on it.

Let me say too, that there is no real fear of losing any data. Of
course I want to learn the correct way to do this.. but this data is
here just for that purpose.... I have other duplicate data elsewhere.
So if something went south and destroyed the whole thing... I'd only
have to recreate it from another source and try again.
Post by Richard Laager
The -R flag will cause it to copy over all of the child datasets and all
of the existing snapshots for p2/rmh and all the child datasets.
This would also include p2/rmh/reader/pub... which is actually ok,
since I could always zfs destroy that part after the move. But,
I wanted to see if it can be done by just replicating p2/rmh/m2.

The problem (which is sort of a fake one since the data is expendable)
is that the disc p2/rmh is on is getting too full. p2/rmh/m2 holds
by far the hefty end of the data...

So, I've created two more larger discs to move that portion of the
data to. I thought I would create p3/rmh2/m2 by moving the referenced
data to the new empty file system, and then once it seems good,
zfs destroy it on p2.

So is that just the wrong way to go? Or does it just require a
different procedure with send/recv?

Is it possible to do something with send/recv that mimics how one can
use cp to create a duplicate portion of data with a different name?
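[Editor's note: roughly yes - the zfs recv target names the copy, much
as the destination argument to cp does. A dry-run sketch; the names
p3/rmh2/m2 and @copy are hypothetical, introduced only for
illustration:]

```shell
# Print the commands that would land a renamed copy of one subtree.
# All names here (p3/rmh2/m2, @copy) are hypothetical.
duplicate_as() {
    src=$1; dest=$2
    echo "+ zfs snapshot -r $src@copy"
    # The parent of the destination must exist before recv can create it:
    echo "+ zfs create -p ${dest%/*}"
    echo "+ zfs send -R $src@copy | zfs recv $dest"
}

duplicate_as p2/rmh/m2 p3/rmh2/m2
```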

The more I think about this, the more it seems it would probably be
much easier to move all of p2/rmh... and do the necessary 'zfs destroy'
after. I'm probably making this way too complicated.
Post by Richard Laager
I didn't test that, but if that's not exactly what you want, it should
get you really close.
Once it's replicated, you'll need to sort out the mountpoint situation.
1) Unmount the old datasets and make sure they don't remount by running
(for each dataset): zfs set canmount=off p2/rmh
2) Verify they're actually unmounted, no files show up, etc.
zfs set mountpoint=/rmh NEWPOOL/rmh
4) Verify it's mounted and where you want it.
This all sounds very safe, but if I can rename with send/recv it would
eliminate some of that problem, eh?

What you've posted may be all I need to get this done and end up with
some new experience.... thanks
Richard Laager
2014-03-24 00:22:36 UTC
Permalink
I'd just do this:

zfs set mountpoint=/rmh/reader p2/rmh/reader
zfs rename p2/rmh/reader p2/reader

Now p2/rmh/reader is out of the way, which you needed to do after the
sync anyway before you destroyed p2/rmh.

Then you can go ahead as I suggested before:
zfs snapshot -r p2/***@replicate
zfs send -R p2/***@replicate | zfs recv NEWPOOL/rmh
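[Editor's note: putting the rename and the replication together, again
as a dry-run sketch. The snapshot name @replicate and pool NEWPOOL are
assumptions, since the archive obscured them:]

```shell
# Dry-run: move reader out of the way, then replicate what remains of
# p2/rmh. Names (@replicate, NEWPOOL) are assumptions from the thread.
steps() {
    echo "+ zfs set mountpoint=/rmh/reader p2/rmh/reader"
    echo "+ zfs rename p2/rmh/reader p2/reader"
    echo "+ zfs snapshot -r p2/rmh@replicate"
    echo "+ zfs send -R p2/rmh@replicate | zfs recv NEWPOOL/rmh"
}

steps
```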

--
Stefan Ring
2014-03-24 07:44:36 UTC
Permalink
Post by Harry Putnam
p2/rmh/m2 is what I want to move... and eventually delete the
original. It is two zfs filesystems p2/rmh and p2/rmh/m2
There is also p2/rmh/reader/pub which is 3 zfs filesystems.
I don't want to fool with the latter.
IIRC, you can just destroy the just-created (recursive) snapshots from
the filesystems that you don't want to touch. Then they will be
skipped by zfs send.
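[Editor's note: Stefan's suggestion sketched the same dry-run way.
Snapshot and pool names are assumptions; whether zfs send -R silently
skips a child whose snapshot was destroyed can vary by implementation,
so try it on scratch data first:]

```shell
# Dry-run: snapshot everything, destroy the snapshot on the subtree to
# skip, then send; the subtree lacking the snapshot should be left out.
skip_subtree() {
    echo "+ zfs snapshot -r p2/rmh@replicate"
    echo "+ zfs destroy -r p2/rmh/reader@replicate"
    echo "+ zfs send -R p2/rmh@replicate | zfs recv NEWPOOL/rmh"
}

skip_subtree
```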
Harry Putnam
2014-04-09 12:08:28 UTC
Permalink
Post by Richard Laager
As always, make backups you trust before changing anything. But you
should be doing that anyway.
You're moving from ZFS to ZFS? If so, yes, use zfs send | zfs recv. That
*requires* working from a snapshot; that's just how zfs send works.
If I understand you correctly, you want to move p2/rmh to the new pool?
The -R flag will cause it to copy over all of the child datasets and all
of the existing snapshots for p2/rmh and all the child datasets.
I didn't test that, but if that's not exactly what you want, it should
get you really close.
Once it's replicated, you'll need to sort out the mountpoint situation.
[...]

First.. let me report that these suggestions got that job done.

This is a redux of the above and probably a bit long winded, but I
found it difficult to express:

Sorry to bug you again with sort of a long winded query.

Now a very similar situation with the same zfs fs has arisen. Even
with the cutback in scope of what that fs contains, it has still grown
to the point of nearly outstripping its designated discs.

Since the setup is a vm, discs are cheap .. hehe. And since I'd like
to maintain my setup of paired/mirror discs rather than one of the raidz
setups...

(and since I need a lot more practice managing zfs)

I've created a set of larger discs and used the formulas above to
send/recv.

However in the interim some additional directories had been created. I
mean a specific directory created with mkdir.

So a quick recap:

Prior to send/recv:

zfs list -r p3
NAME       USED   AVAIL  REFER  MOUNTPOINT
p3         93.6G   249G    31K  /p/p3
p3/rmh     93.6G   249G  1.30M  /rmh
p3/rmh/m2  93.6G   249G  46.1G  /rmh/m2

NOTE the 'adm' directory below
ls /rmh
adm m2

/rmh/adm was added with mkdir (no /rmh/adm/.zfs is present)
------- ------- ---=--- ------- -------

Then I ran these cmds... (taken from my shell history):

2047 zfs snapshot -r p3/***@replicate4send-140408

2049* time zfs send -R p3/***@replicate4send-140408 |zfs recv p2/rmh

------- ------- ---=--- ------- -------

After straightening out the mount points I now have zpool p3 mounted
on:

/mnt/rmh-p3:

root # zfs list -r p3
NAME       USED   AVAIL  REFER  MOUNTPOINT
p3         91.0G   116G    31K  /p/p3
p3/rmh     91.0G   116G  1.30M  /mnt/rmh-p3
p3/rmh/m2  91.0G   116G  45.3G  /mnt/rmh-p3/m2

And p2 with the new bigger discs, mounted like p3 before it:

zfs list -r p2
NAME       USED   AVAIL  REFER  MOUNTPOINT
p2         93.6G   249G    31K  /p/p2
p2/rmh     93.6G   249G  1.30M  /rmh
p2/rmh/m2  93.6G   249G  46.1G  /rmh/m2

All good, and thanks again for the suggestions.
One small thing I noticed: why is the new fs 2.6 GB larger? Must be
something related to the new bigger discs and zfs needs, eh?

------- ------- ---=--- ------- -------

Also... the 'adm' directory mentioned above did not survive the send.

Here you see it in the snapshot that was sent.

ls /mnt/rmh-p3/.zfs/snapshot/replicate4send-140408
adm m2

OK, but in the new fs that recv created, the 'adm' directory is
absent.

Further, the .zfs directory of the sent fs at p3/rmh/.zfs
is also absent, not to mention all the snapshots.

So the new fs p2/rmh/ is missing the 'adm' directory and the .zfs

ls /rmh
m2

ls /rmh/.zfs
ls: cannot access /rmh/.zfs: No such file or directory

But /rmh/m2/.zfs has survived and contains all its snapshots
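[Editor's note: a .zfs directory is only visible where its dataset is
actually mounted, so an empty-looking /rmh may simply mean p2/rmh
itself is not mounted there - an assumption, not a diagnosis. Rather
than poking at hidden directories, these dry-run commands show how to
inspect what actually arrived:]

```shell
# Dry-run: commands to inspect the received tree and its snapshots.
# The dataset name p2/rmh is taken from the thread.
inspect() {
    echo "+ zfs list -t snapshot -r p2/rmh"        # snapshots that arrived
    echo "+ zfs get -r mounted,mountpoint p2/rmh"  # is each dataset mounted, and where
}

inspect
```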

------- ------- ---=--- ------- -------

OK, should I have done two sends? One for each piece of the zfs
file system on the sending zfs fs? Or what did I miss?

I understood the snapshot -r would get all of it...
