Richard Kojedzinszky
2014-05-06 09:32:08 UTC
A performance and reliability test would also be worth it.
(https://github.com/rkojedzinszky/zfsziltest)
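In the same spirit as zfsziltest, a rough sketch of a sync-write latency test might look like the following. This is a hedged illustration in Python, not the tool's own implementation; the function name, 4 KiB block size, and iteration count are arbitrary choices for the sketch.

```python
import os
import statistics
import time

def sync_write_latency(path, iters=200, blocksize=4096):
    """Measure per-operation latency of small synchronous writes.

    On ZFS, each fsync() forces the write through the ZIL (and the
    SLOG device, if one is configured), so these latencies roughly
    reflect the device's sync-write behaviour.
    """
    block = b"\0" * blocksize
    latencies = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for _ in range(iters):
            start = time.perf_counter()
            os.write(fd, block)
            os.fsync(fd)  # block until the data is on stable storage
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    latencies.sort()
    return {
        "ops": iters,
        "median_us": statistics.median(latencies) * 1e6,
        "p99_us": latencies[int(iters * 0.99) - 1] * 1e6,
    }
```

Pointing this at a file on the pool under test and comparing median and p99 figures between devices gives a first-order view of sync-write performance, which is the workload where drives differ most.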
I would be interested in comparing it to an Intel SSD DC S3700, which has
very impressive performance and, per Intel's claims, endurance comparable
to SLC-based SSDs. And the cost is very reasonable.
Kojedzinszky Richard
On Tue, 6 May 2014, Steven Hartland wrote:
> I can't really comment on OI, but we have quite a bit of experience with
> all-SSD pools under FreeBSD.
>
> The biggest issue is signal strength when going through expanders with
> 6Gbps devices. We've tested a number of chassis with hotswap backplanes
> which turned out to have poor signal strength, resulting in unstable
> devices that drop under load.
>
> Once you have a setup that is confirmed to have good signaling, things
> become a lot easier.
>
> I can't say I've used Seagate SSDs, as we mainly use consumer-grade disks,
> which have served us well for what we do.
>
> One thing that may be an issue is that SSDs generally require TRIM support
> to remain performant. Currently OI doesn't have TRIM support for ZFS,
> whereas FreeBSD does (which myself and others actively maintain), so it
> may be something worth considering.
>
> Firmware is also very important, particularly when it comes to TRIM
> support, so I'd definitely recommend testing a single disk before buying
> in bulk.
>
> Regards
> Steve
>
>
> ----- Original Message ----- From: "Luke Iggleden" <***@lists.illumos.org>
> To: <***@lists.illumos.org>
> Sent: Tuesday, May 06, 2014 8:45 AM
> Subject: [zfs] all ssd pool
>
>
>> Hi All,
>>
>> We're looking at deploying an all SSD pool with the following hardware:
>>
>> Dual Node
>>
>> Supermicro SSG-2027B-DE2R24L
>> (includes LSI 2308 Controller)
>> 128GB RAM per node
>> 24 x Seagate PRO 600 480GB SSD
>>
>> 24 x LSI interposers (sata > sas) ?? (maybe, see post)
>> RSF-1 High Availability Suite to failover between nodes
>> Open Indiana or Omni OS
>>
>> My question really relates to the known issues with SATA devices on SAS
>> expanders under ZFS: are modern LSI interposers in this combination now
>> working reliably with the mpt_sas driver?
>>
>> I've seen some forum posts suggesting that a couple of interposers have
>> died and crashed the mpt_sas driver due to resets, but I'm wondering
>> whether that is related to the bugs in illumos which crash the mpt_sas
>> driver (illumos bugs 4403, 4682 & 4819):
>>
>> https://www.illumos.org/issues/4403
>> https://www.illumos.org/issues/4682
>> https://www.illumos.org/issues/4819
>>
>> If LSI interposers are a no-go, has anyone got these (or other) SATA SSDs
>> running on Supermicro SAS2 expanders as a reliable platform, specifically
>> when an SSD dies or performance is at its maximum?
>>
>> A few years ago we were burned by putting Hitachi 7200rpm SATA disks on
>> an expander; this was before most of the 'SATA on SAS, DON'T!' posts came
>> out. That was 2009/10, so things could have changed?
>>
>> Also, there were some other posts suggesting that the WWNs for SSDs
>> behind LSI interposers were not being passed through, but it was
>> suggested that this was an issue with the SSD rather than the interposer.
>>
>> Thanks in advance.
>>
>>
>> Luke Iggleden
>>
>>
>>
>> -------------------------------------------
>> illumos-zfs
>> Archives: https://www.listbox.com/member/archive/182191/=now
>> RSS Feed:
>> https://www.listbox.com/member/archive/rss/182191/24401717-fdfe502b
>> Modify Your Subscription: https://www.listbox.com/member/?&
>> Powered by Listbox: http://www.listbox.com
>>
>
>
>