Discussion:
performance stats / advice
Randy S via illumos-zfs
2014-07-01 10:47:31 UTC
Hi,

I'm doing some research on putting together a storage system that has to be able to do fast reads and fast writes. It will primarily store many digital movie files for playback and editing. We will start with probably 80TB of net storage space.

Now I've seen some posts from people who have created SSD-only pools (storage systems).
If anyone has experience with these SSD configurations, I would appreciate it if you could post the everyday read/write performance you achieve with the configuration you created (a single copy operation and/or multiple copy operations simultaneously).

These could be configurations consisting of SSD-only pools, or mixed pools (e.g. SAS disks and a probably very large read cache).

In our case the average file size is about 30GB (which will probably result in sequential reads/writes). The thing is that there will be many, many files. Our goal is also that files that have not been accessed for a long time can still be read fast (e.g. if somebody wants to edit an old file). By fast I mean >= 400MB/s.

Is it correct that setting vfs.zfs.l2arc_noprefetch to 0 will also help fill the L2ARC faster with streaming/sequential reads?

I thank you in advance.

Regards,

R





Stefan Ring via illumos-zfs
2014-07-01 11:10:33 UTC
Post by Randy S via illumos-zfs
Is it correct that setting vfs.zfs.l2arc_noprefetch to 0 will also help
fill the L2ARC faster with streaming/sequential reads?
Why would you need an l2arc with an all-ssd pool, especially with a
streaming workload?



Randy S via illumos-zfs
2014-07-01 11:33:31 UTC
This setting would only be applicable if the pool is mixed.
Just to be clear, my question is primarily about SSD-only pools. But because of the cost we might have to opt for a mixed pool with a very large L2ARC, so any experience and/or performance advice/stats would be appreciated.


Garrett D'Amore via illumos-zfs
2014-07-02 21:53:40 UTC
Are you sure you even need SSDs? HDDs can stream at 200 MB/sec these days,
and given sufficient spindles and a streaming workload, maybe you won't
get that much benefit from them? (Granted, SSDs typically can stream at
even higher rates and have no special latency considerations... but
usually price is a factor to consider. :-)

Ian Collins via illumos-zfs
2014-07-03 05:09:36 UTC
Post by Randy S via illumos-zfs
In our case the average file size is about 30GB (which will probably
result in sequential reads/writes). The thing is that there will be many,
many files. Our goal is also that files that have not been accessed
for a long time can still be read fast (e.g. if somebody
wants to edit an old file). By fast I mean >= 400MB/s
That's a fairly modest requirement for streaming data; three or four vdevs
of decent spinning rust could comfortably manage it.

If you aren't performing lots of random reads, you probably won't need
any cache devices.
--
Ian.
Randy S via illumos-zfs
2014-07-03 13:44:41 UTC
Hi,
Thank you for your answers.

The reason I'm thinking about SSDs is that we already have a system with 14 vdevs of SAS drives (7200 rpm, 180TB net space) in raidz1 as an archive for this type of material, with 128MB RAM and a 240GB L2ARC. (The system was put together for archiving, so no problem there, but since people are very happy with it they now also want to use it for more than just archiving...)

The archiving function (writing to the storage) is more than fine (it far exceeds 700MB/s). But if somebody wants to pull some material (which hasn't been used for a while) to work with it, the read speed barely reaches 100MB/s. It also happens frequently that several people pull old material simultaneously, which doesn't make them any happier.
As long as the requested files are still in the cache (ARC/L2ARC), the read speed is OK, but read speed on older material kinda sucks... (pardon my French).
So I would appreciate any suggestions on a configuration that can handle this situation.

Regards,

R
Bob Friesenhahn via illumos-zfs
2014-07-03 14:39:14 UTC
Post by Randy S via illumos-zfs
The archiving function (writing to the storage) is more than fine (it far exceeds 700MB/s). But if somebody wants to pull
some material (which hasn't been used for a while) to work with it, the read speed barely reaches 100MB/s. It also
happens frequently that several people pull old material simultaneously, which doesn't make them any happier.
As long as the requested files are still in the cache (ARC/L2ARC), the read speed is OK, but read speed on
older material kinda sucks... (pardon my French).
So I would appreciate any suggestions on a configuration that can handle this situation.
ZFS's sequential file read-ahead algorithm has a "flaw" in that it
increments the amount of read-ahead as the file is read. As it turns
out, this places a limit on sequential read performance based on
file size (or the amount of data already read from the file). Often
there is not sufficient read-ahead to keep the reading software busy
until all (or quite a lot) of the file has been read.

A 30GB file should offer plenty of opportunity to spin up read-ahead,
but only if the reading software reads sufficient megabytes of it in
sequential order and consistently enough to keep the read-ahead
active.

This problem becomes evident with I/O benchmarks (e.g. iozone) showing
high available read rates with very large files but applications (cpio
is the perfect example) exhibiting rates less than 100MB/s regardless
of pool architecture.

You will find plenty of gripes from me on this topic.

The standard response is that the reading application should overlap
read requests in a multi-threaded manner, but this represents a
substantial rewrite of the application.
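
A minimal sketch of that multi-threaded, overlapped-read approach, assuming a POSIX system with pthreads and pread(); the thread count and chunk size are illustrative placeholders, and a real application would still have to reassemble the chunks in order for its consumer:

/*
 * Several threads issue pread() on interleaved chunks of the same file,
 * so multiple large reads are outstanding at once instead of relying on
 * ZFS prefetch to ramp up behind a single sequential reader.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

#define NTHREADS 4
#define CHUNK    (8L * 1024 * 1024)    /* 8 MB per read */

struct rdarg { int fd; off_t size; int id; };

static void *reader(void *p)
{
    struct rdarg *a = p;
    char *buf = malloc(CHUNK);

    if (buf == NULL)
        return NULL;
    /* Thread i reads chunks i, i+NTHREADS, i+2*NTHREADS, ... */
    for (off_t off = (off_t)a->id * CHUNK; off < a->size;
        off += (off_t)NTHREADS * CHUNK) {
        if (pread(a->fd, buf, CHUNK, off) <= 0)
            break;
        /* ...hand the data to the consumer here... */
    }
    free(buf);
    return NULL;
}

int main(int argc, char **argv)
{
    pthread_t tid[NTHREADS];
    struct rdarg args[NTHREADS];
    struct stat st;
    int fd, i;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    if ((fd = open(argv[1], O_RDONLY)) < 0 || fstat(fd, &st) != 0) {
        perror(argv[1]);
        return 1;
    }
    for (i = 0; i < NTHREADS; i++) {
        args[i].fd = fd;
        args[i].size = st.st_size;
        args[i].id = i;
        pthread_create(&tid[i], NULL, reader, &args[i]);
    }
    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    close(fd);
    return 0;
}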

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Randy S via illumos-zfs
2014-07-03 15:38:27 UTC
Hi Bob,
Thanks for your mail.

So in fact what you are saying is that unless the reading application is multithreaded, there isn't much that can be done (as in, e.g., pool/memory/cache architecture or configuration)?
Bob Friesenhahn via illumos-zfs
2014-07-03 16:12:52 UTC
Post by Randy S via illumos-zfs
Hi Bob,
Thanks for your mail.
So in fact what you are saying is that unless the reading application is multithreaded, there isn't much that can be done (as in,
e.g., pool/memory/cache architecture or configuration)?
That is my understanding. Regardless of the number of parallel
devices, the algorithm is bounded by the disk access time and the
number of recent sequential reads which have been done. The algorithm
is linear, adding another step of prefetch for each read that was
not satisfied from prefetched data. With 128k blocks, quite a few
megabytes need to be read before the prefetch rate is anything close
to hardware rates.

ZFS would not want to be too aggressive with file prefetch, because
then the system could be inundated with excessive prefetch of data
which is never actually used.

The posix_fadvise() call is a good solution for this, but it is
currently a no-op in Illumos.
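
For reference, the call in question is just a hint; a minimal sketch of how an application would issue it (harmless where it is a no-op, effective on platforms that honor the advice; the helper function name is a placeholder):

/*
 * Open a file for a large front-to-back read and hint the kernel about it.
 * posix_fadvise() is advisory only, and per the above it is currently a
 * no-op on illumos, but the same code is effective on platforms that
 * honor the hints.
 */
#include <fcntl.h>

int open_for_streaming(const char *path)
{
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return (-1);

    /* The file will be read sequentially: ask for wider read-ahead. */
    (void) posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    /* Ask the kernel to start pulling data in now (offset 0, len 0 = whole file). */
    (void) posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);

    return (fd);
}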

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Matthew Ahrens via illumos-zfs
2014-07-03 17:21:01 UTC
Post by Bob Friesenhahn via illumos-zfs
ZFS's sequential file read-ahead algorithm has a "flaw" in that it
increments the amount of read-ahead as the file is read. As it turns out,
this places a limit on sequential read performance based on file size
(or the amount of data already read from the file). Often there is not
sufficient read-ahead to keep the reading software busy until all (or quite
a lot) of the file has been read.
Can you be more specific? The prefetch ramp rate is exponential (for each
block sequentially read, it doubles the number of prefetched blocks). In
practice, I have observed that prefetch will have issued the maximum number
of I/Os (256) after about 10 blocks have been sequentially read. So with
recordsize=128K (the default), after ~1.2MB has been read, prefetches will
have been issued for the next 32MB. To me, this seems like a pretty fast
ramp-up.

If the issue is that prefetch doesn't read far enough in advance (i.e. 256
outstanding I/Os is insufficient to get maximum throughput), you can
increase the tunable "zfetch_block_cap".
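
To make that arithmetic concrete, here is a tiny stand-alone calculation of the ramp as described above (an idealized model of the doubling behavior, not the actual zfetch code); it lands in the same ballpark as the ~10 blocks / ~1.2MB figure:

/*
 * Idealized model of the prefetch ramp described above: the window of
 * prefetched blocks doubles with each block read sequentially, capped
 * at 256 outstanding I/Os, with the default 128K recordsize.
 */
#include <stdio.h>

int main(void)
{
    const double recordsize_mb = 128.0 / 1024.0;  /* 128K expressed in MB */
    const int cap = 256;        /* assumed max outstanding prefetch I/Os */
    int window = 1;             /* blocks of lookahead */
    int blocks_read = 0;

    while (window < cap) {
        blocks_read++;          /* one more sequential block read... */
        window *= 2;            /* ...doubles the prefetch window */
    }

    printf("cap reached after ~%d sequential blocks (~%.1f MB read)\n",
        blocks_read, blocks_read * recordsize_mb);
    printf("lookahead at the cap: %.0f MB (256 x 128K)\n",
        cap * recordsize_mb);
    return 0;
}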

--matt
Bob Friesenhahn via illumos-zfs
2014-07-03 17:30:05 UTC
Has Illumos/OpenZFS changed the prefetch behavior?

The classic test is to use cpio to send the contents of a filesystem
with multi-megabyte files to /dev/null. In my past testing, there
seemed to be a cap on read performance with this test using thousands
of 8MB files, which was much lower than the performance observed using
single large files.

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
Matthew Ahrens via illumos-zfs
2014-07-03 17:40:31 UTC
Post by Bob Friesenhahn via illumos-zfs
Has Illumos/OpenZFS changed the prefetch behavior?
No.
Post by Bob Friesenhahn via illumos-zfs
The classic test is to use cpio to send the contents of a filesystem with
multi-megabyte files to /dev/null. In my past testing, there seemed to be
a cap on read performance with this test using thousands of 8MB files, which
was much lower than the performance observed using single large files.
Assuming recordsize=128k, that would necessarily result in fewer concurrent
prefetch I/Os. Each file has only 64 blocks, so there will be at most 64
concurrent I/Os, whereas a single large file will essentially always
have 256 outstanding I/Os.

Additionally, are you sure that the 8MB files and the single large file
have comparable layouts on disk? If one is more sequential than the other
(also considering sequentialness between different 8MB files, in the order
they will be read), that would also have an impact.

--matt
Bob Friesenhahn via illumos-zfs
2014-07-03 19:16:14 UTC
My past testing has been with "virgin" pools where the files were
copied into place when the pool was still brand new. The files were
typically copied into place using cpio.

I have only tested with pools organized as mirrors (12 or 8 disks) and
never with raidz type pools.

A long time ago, I posted this test case (which was executed by many
other people, including people with enormous pools) to illustrate a
particular bug, but it is still useful to test/demonstrate possible
prefetch issues with many medium-sized files:

http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh

At the time, it was interesting to notice that reported cpio read
performance was about the same regardless of whether the pool had two
disks or 64 disks.

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
Garrett D'Amore via illumos-zfs
2014-07-03 17:08:49 UTC
RAIDz is a loser for various performance-critical workloads. Use a stripe of mirrors, or even a stripe of RAIDz vdevs, and your performance troubles will most likely go away. :-)

This means more disks (doubling the storage requirement), but compared to buying SSDs this is probably more cost-effective.

Sent from my iPhone
Randy S via illumos-zfs
2014-07-03 17:41:59 UTC
I understand, but we already have 13 raidz vdevs (each vdev containing 5 disks) in the pool, so I guess this is already striped 13 times. That's why I am a bit baffled by the poor read speed when data is not in cache. Probably mirroring would have been better, then.
