Discussion:
dedup: do we have enough resources?
Omen Wild via illumos-zfs
2014-10-13 19:39:57 UTC
We have a backup server that is a pretty good candidate for dedup, but
I have heard many horror stories about how slow it can get if there are
not enough resources so I would appreciate some feedback.

The server is OpenIndiana with 176GB of RAM.

We have 3 backup pools, but are thinking of enabling dedup on only 1:
NAME      SIZE   ALLOC   FREE   EXPANDSZ   CAP   DEDUP   HEALTH   ALTROOT
backups   60T    54.2T   5.84T  -          90%   1.00x   ONLINE   -

I ran `zdb -S` on the pool and got:
dedup = 1.48, compress = 1.54, copies = 1.09, dedup * compress / copies = 2.07
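For anyone wanting to reproduce the numbers, the simulation was run roughly like this
(pool name taken from the `zpool list` output above); the block count quoted below
comes from the "Total" row of the simulated DDT histogram that -S prints:

    # Simulate dedup on the existing data without enabling it; prints a DDT
    # histogram plus the ratio summary quoted above. It walks every block,
    # so expect it to take a long time and a lot of memory on a 54T pool.
    zdb -S backups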

The total block count is 127M. At 320 bytes/block, that works out to about 41GB
of RAM to hold the dedup table. However, I watched zdb in top while it ran, and
its size peaked at about 98GB, well above the 41GB expected.
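To show the arithmetic behind the 41GB figure (assuming "127M" means 127 million
blocks, and using the commonly cited ~320 bytes of in-core overhead per DDT entry):

    # 127 million blocks * 320 bytes per DDT entry
    echo "127 * 10^6 * 320 / 10^9" | bc -l   # ~40.6 GB decimal, i.e. the ~41GB above
    echo "127 * 10^6 * 320 / 2^30" | bc -l   # ~37.8 GiB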

So, my questions are:

Do we have enough RAM to run dedup on that pool? We could pretty
easily supplement with an SSD if necessary (a rough sketch of what I
have in mind follows my questions).

Is the discrepancy between 320 bytes/block and what zdb needed to run
-S expected? Should we expect dedup to take more like 98GB of RAM?

Are there any more commands I can run to get a better idea of what to
expect?
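In case it helps frame answers: if an SSD is the way to go, my understanding is it
would be added as an L2ARC cache device, something like the following (the device
name is just a placeholder):

    # Add the SSD as a cache (L2ARC) device; c1t2d0 is a placeholder device name.
    zpool add backups cache c1t2d0
    # Optionally steer L2ARC toward metadata, which includes the DDT:
    zfs set secondarycache=metadata backups

And these are the extra checks I was planning to watch while testing, in case there
are better ones (kstat names as I understand them on illumos):

    # ARC metadata usage vs. limit, to gauge how much headroom a DDT would have:
    kstat -p zfs:0:arcstats:arc_meta_used zfs:0:arcstats:arc_meta_limit
    # Once dedup is actually enabled, show the real DDT statistics:
    zdb -DD backups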

Thanks,
Omen
--
It must be wonderful being a mage, zooming
through time and space and closed doors.
