The Jim Henson Studios has used the Stratoflash design posted below, in its entirety, for almost two years now to home-build flash storage systems that are quite unique. I architected the whole shebang, the BOM, and the materials build in 2014/2015 with Brocade and Samsung. I believe you need to understand the flow of electrons (data) across your own facility, right down to the last speck and drop. Outsourcing storage and network topologies is a BIG no-no in this business, and placing the onus of responsibility on third parties is asking for trouble. You may feel safer in your job by calling out to a storage vendor when there is an issue, so the onus is not on you, but no one is helped by that belief or strategy. The days when YOU are not IT will soon be gone (old school....).
And this stuff is not rocket science.
(1) We use Brocade 10G/40G switches, so clients are each connected with a 10G card directly to one of three 48-port 10G Brocades. Our core or primary switch is a 24-port 40G storage switch. These can certainly be costly, but there is good reason. Each edge switch is bundled to the core at 2x40G, i.e. 80G. Each server connects to the core at 40G. Each and EVERY client is on 10G to the switches, via a SANLink2 over Thunderbolt 2 on the Apple side, while Windows/Linux clients enjoy 10G PCIe cards.
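To put rough numbers on that topology, here is a back-of-the-envelope sketch. The 48 simultaneously active clients per edge switch and the single 2x40G uplink bundle are my illustrative assumptions, not measurements from our floor:

    # Rough oversubscription math for the edge-to-core links described above.
    # Port counts and link speeds are illustrative assumptions.
    CLIENT_PORTS_PER_SWITCH = 48      # 48-port 10G edge Brocade
    CLIENT_LINK_GBPS = 10
    UPLINK_GBPS = 2 * 40              # dual 40G bundle to the 40G core

    edge_demand = CLIENT_PORTS_PER_SWITCH * CLIENT_LINK_GBPS   # 480 Gbps worst case
    oversub = edge_demand / UPLINK_GBPS                        # 6.0
    print(f"Worst-case edge demand: {edge_demand} Gbps")
    print(f"Oversubscription to the core: {oversub:.1f}:1")

In practice nowhere near every client pulls 10G at once, which is why a 6:1 worst-case ratio at the edge still feels instant day to day.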
(2) Each storage node can actually support 24x 2TB Samsung Flash - totalling 48TB of RAW flash. We Gluster the nodes together with some secret sauce, and we obviously have tweaked the shite out of the Debian Linux inside each storage node, and tweaked the zpooling, ZIL'ing, and caching. But this too can be met with most people's Linux experience.
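For reference, a minimal sketch of what a pool over a node like that could look like. The 8-wide raidz2 layout, the NVMe log/cache devices, and the device names are all my illustrative assumptions here, not the actual secret sauce:

    # Hypothetical pool layout: 3 x 8-wide raidz2 vdevs over 24 x 2TB SSDs,
    # plus separate log (ZIL) and cache (L2ARC) devices.
    # Device names are placeholders for illustration, not our build.
    drives = [f"sd{chr(c)}" for c in range(ord("b"), ord("b") + 24)]   # sdb..sdy
    vdevs = [drives[i:i + 8] for i in range(0, 24, 8)]

    cmd = ["zpool", "create", "tank"]
    for v in vdevs:
        cmd += ["raidz2", *v]
    cmd += ["log", "mirror", "nvme0n1", "nvme1n1", "cache", "nvme2n1"]
    print(" ".join(cmd))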
(3) When I say we cover ALL VFX and post production on these storage nodes, I mean pretty much all of it. DaVinci, Premiere, Final Cut, AVID, Nuke, Maya, Max, GPU Redshift rendering, Photoshop, et cetera, EVEN A RUNTIME PRODUCTION MOCAP STAGE. No issues whatsoever, no slowness. EVEN a fully outgrown deployment of an Asset Database system. That's hundreds of operations per second, from file formats like multiband EXR, to database read/write IOs, to many, many bucket renders from Redshift, onto Premiere caching and simultaneous workflows and sharing, and all at 2K/4K.....
(4) We have done recent R&D on the systems and cluster nodes. Specific to this thread: when we tried to force in a set of high-end RAID controllers, it went to shit, instantly. ZFS wants to see the raw disks through a plain HBA; put a hardware RAID layer in between and it loses the direct visibility it relies on for checksumming and self-healing. There is something pure and unique about a proper zpool strategy.
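The "proper strategy" part is mostly about picking the vdev layout deliberately. A rough comparison over the same 24 x 2TB drives, illustrative arithmetic only, assuming the hypothetical 8-wide raidz2 from the sketch above versus straight mirror pairs:

    # Usable capacity for two common layouts over 24 x 2TB SSDs.
    # Pure arithmetic, ignoring ZFS metadata/slop overhead.
    drives, size_tb = 24, 2

    # 3 x 8-wide raidz2: two parity drives per vdev
    raidz2_usable = 3 * (8 - 2) * size_tb          # 36 TB, any 2 per vdev can fail

    # 12 x 2-way mirrors: half the raw space, typically better random IO
    mirror_usable = (drives // 2) * size_tb        # 24 TB, 1 per pair can fail

    print(f"raidz2 (3x8): {raidz2_usable} TB usable")
    print(f"mirrors (12x2): {mirror_usable} TB usable")

Which way you go depends on whether capacity or small-block IO matters more for your shows; the point is you make that call yourself instead of letting a RAID controller hide it from you.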
We have recently decided to add overclocking R&D to the mix, as the MOBOs we use are specifically tuned for it and our machine rooms have 6 tons of AC.
PS: for more, please text me at 310-951-7331; I am concluding my responses to this thread here. I will call back.