So what's the status on setting l2arc_norw to B_FALSE by default? It looks
like the only concerns are coming from Garrett. Could you work with him to
figure out what evidence for the benefits this change will bring would be
convincing?
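For anyone who wants to experiment with the proposed default before it
lands: assuming a stock illumos kernel, l2arc_norw is an ordinary global
in arc.c, so it should be flippable at runtime with mdb or persistently
via /etc/system, along these lines:

  # disable the no-read-while-writing workaround on a live system
  # (0 == B_FALSE)
  echo 'l2arc_norw/W 0' | mdb -kw

  # or, to persist across reboots, add to /etc/system:
  set zfs:l2arc_norw = 0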
Post by Garrett D'Amore
2009 isn't *that* long ago. However, if we're reasonably sure that
nobody is using illumos on those fishworks appliances, and that the devices
in question weren't in use anywhere else, then I tend to agree. If the
problem with those devices was only a *performance* and not a *correctness*
problem, then I *definitely* agree.
Post by Garrett D'Amore
That said, I know some fishworks customers had inquired with other
illumos based distro providers about running illumos on their kit. (I know
Nexenta had contemplated this at one point, at least.)
Post by Garrett D'Amore
(I just hate making changes that break existing deployments.)
It might be a good idea to have something like a release note/FAQ
section in the code that distro maintainers could pick up and integrate
into their own notes.
Just as a further note, and so other people know about this: I asked
Brendan about it, and here's his response on why l2arc_norw was set to
B_TRUE:
-------- Original Message --------
Subject: Re: l2arc_norw default change - what devices might be
affected?
Date: Mon, 12 Aug 2013 21:05:46 -0700
Right, the problem was with the drives we were using in 2008, which from
memory were either the STEC ZeusIOPS or the Intel X25s. They were
supposed to have separate queues for reads and writes, but that
functionality wasn't working. I /think/ that was something we could fix
in the kernel, in our lower level driver stack, and there was a ticket
filed. But the ticket wasn't a high enough priority to be fixed. The SSD
vendor engineers were surprised that we weren't using the functionality.
The problem it caused was reads queueing behind writes. So, as you know,
the L2ARC writes in batches, which can take 100s of milliseconds to
complete. Reads could queue behind those, and so with both reads and
writes at the same time there were some reads taking 10s and 100s of
milliseconds. Not great for a device supposed to be super fast for
random read workloads!
What's the status today with drives? I don't know. Other than testing it
with the L2ARC, you may be able to test it using dd. E.g., do a random
read workload to /dev/rdsk and dtrace the I/O latency. Then, issue some
dd writes to the same device for 100s of Mbytes. See how much it
interferes with the reads.
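Something like the following might do (c0t0d0s0 is a placeholder device,
and note that plain dd gives a sequential rather than random read stream,
so treat this as an outline of the test rather than a faithful L2ARC
workload):

  # read load against the raw device (placeholder path)
  dd if=/dev/rdsk/c0t0d0s0 of=/dev/null bs=8k count=100000 &

  # watch per-I/O latency with the DTrace io provider
  dtrace -n '
    io:::start { ts[arg0] = timestamp; }
    io:::done /ts[arg0]/ {
      @lat[args[1]->dev_statname] = quantize(timestamp - ts[arg0]);
      ts[arg0] = 0;
    }'

  # then, from another shell, push a few hundred MB of writes at the
  # same device (DESTRUCTIVE -- scratch devices only) and compare the
  # read latency distributions:
  dd if=/dev/zero of=/dev/rdsk/c0t0d0s0 bs=1024k count=300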
I'd lean towards Matt's opinion though - this was set a long time ago,
and now probably only affects older devices.
Brendan
-------------------------------------------