I had a similarly, horrifically bad marketing email come through my inbox from a different vendor this month. The guy even called me and couldn't explain his product to me.
Something is in the water.
|THE MILL 1000 WEST FULTON MARKET, SUITE 250, CHICAGO, IL 60607|
From: firstname.lastname@example.org <email@example.com> on behalf of Todd Smith <firstname.lastname@example.org>
Sent: Sunday, November 25, 2018 2:29:42 PM
Subject: Re: [SSA-Discuss] The evolvement of traditional drives, sas, sata, to ssd, m.2, nvme, what is next .
Please make it stop.
Head of Information Technology
soho vfx | 40 Hanna Ave. Suite 403, Toronto, Ontario M6K 0C3
office: (416) 516-7863 | fax: (416) 516-9682 | web: sohovfx.com
----- On Nov 25, 2018, at 12:27 PM, content <email@example.com> wrote:
For the last two years I have seen a lot of storage vendors pushing NVMe upgrades or offering all-NVMe storage outright, and it appears to be becoming the norm. We are living in an era of constant advancement: new hard drives, controllers, network cards, and CPUs get faster, bigger, and lower-latency every day. That is all good, and companies are integrating them into their workflows, at least the companies that can afford to. What I have seen in the real world, though, is that such solutions are still not enough for an average M&E production. Let me explain my thinking.

If you spec out a full NVMe solution, it is probably in the $200k+ range, and you may get 40 TB usable. The system will be super fast and will have no trouble keeping up with the IOPS and bandwidth of your ever-demanding workflow. But once the system is in, you still have to figure out how to deliver that performance to the workstation: find network cards that work with your OS and have compatible drivers; use the available open-source protocols like NFS and SMB and figure out how to optimize them, or write your own agent/protocol to actually take advantage of that performance. OK, suppose you have that figured out. Now you have to see how your NLE will utilize that performance, and how you will maintain the setup through the inevitable upgrades to OS versions and application versions.
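For what it's worth, much of the NFS-side tuning mentioned above comes down to a handful of mount options. A sketch for a Linux edit-bay client follows; the server name, export path, and values are assumptions, and `nconnect` requires kernel 5.3 or later:

```shell
# Hypothetical NFS mount tuned for large sequential media I/O.
# rsize/wsize: 1 MiB transfer sizes instead of the small defaults.
# nconnect=8: open 8 TCP connections to the server (Linux 5.3+).
# noatime: skip access-time updates the NLE never needs.
mount -t nfs -o vers=3,rsize=1048576,wsize=1048576,nconnect=8,noatime \
    storage01:/tank/projects /mnt/projects
```

SMB has analogous knobs (e.g. multichannel), but the right values depend on the client OS and the workload, which is exactly the "figure out how to optimize them" work described above.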
What if I told you there is another way to get performance faster than an NVMe flash array, at a more affordable cost?
Well, let's get a bit technical now. What is the fastest piece of hardware for accessing data in a computer? Is it the hard drive? The CPU? The bus (PCIe slots)? The RAM? And which one is the M&E industry always hungry for? Of course all of those matter in a traditional server, but how do you actually get that performance out of them? The one piece of hardware I have seen traditional storage vendors leave underutilized is the RAM. Some hardware manufacturers offer controllers that do some caching, or have some kind of caching mechanism built in, but it is not large enough to make a significant difference, and they usually do not go into much detail about how it works. You have to dig that up yourself.
RAM is the fastest way to access data, including large media and entertainment data. Load that uncompressed movie into RAM; an average server these days can easily take 1 TB of it. The challenge is how to tell the filesystem and the OS to load that data into RAM. I have been working with Solaris ZFS for the last five years, and no other OS/filesystem does it better or more efficiently. ZFS keeps the most frequently accessed data in RAM, in the ARC (we use low-latency ECC DDR4 at 2666 MHz here). When RAM is full, the system spills data into the L2ARC, the second-level read cache, and that is where fast NVMe drives come in handy. We striped two 4 TB Intel NVMe drives and got 8 TB of read cache. I can tell you it is quite amazing to see both the RAM and the read cache maxed out on a production day when you have 60 edit bays hitting the server. Let's do some quick math: 1 TB at RAM speed plus 8 TB across two striped NVMe drives in PCIe slots. I would say that is fast enough for a traditional production workflow, with 9 TB worth of media files ready to serve at any given time. I have posted some benchmarks below, run from the CLI on a real production server, so you can see some realistic numbers.
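The ARC/L2ARC setup described above maps to a couple of commands. A sketch, with the pool and device names as assumptions:

```shell
# Add two NVMe devices to an existing pool as L2ARC (read cache).
# ZFS stripes reads across all cache vdevs, so two 4 TB devices
# behave as roughly 8 TB of second-level read cache.
zpool add tank cache nvme0n1 nvme1n1

# Confirm the cache vdevs show up under the pool.
zpool status tank

# Watch ARC size and hit rates once production traffic arrives.
arcstat 1
```

Note that cache vdevs carry no redundancy and need none: losing an L2ARC device costs you cached reads, never data, which is why striping them is safe.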
Moreover, all the editing, compositing, finishing, and motion graphics applications being developed today are thirsty for RAM. You can see it in the system resource monitor if you open 4K media in your timeline: the application process grabs nearly all the available RAM, loading the data into memory for smoother playback. It does not matter how much RAM you install (64 GB, 128 GB, 256 GB); any modern release will use it, assuming you have a heavy 4K timeline.
Here are some performance stats from our latest server, the ZFS-600, which we just put into production. These are results from running CLI commands, not theoretical or cumulative bandwidth figures taken from product data sheets:

RAM sustained bandwidth: 197 GB/s
command-line volume test with dd: 7.3 GB/s
command-line multithreaded test with iozone: 28 GB/s

(That is an uppercase B, meaning bytes, not bits.)
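For reference, the dd figure above can be reproduced with a one-liner along these lines. The path and size here are assumptions scaled down for illustration; on a real run you would use a file several times larger than RAM, and random data rather than zeros if compression is enabled, so you measure the pool rather than the ARC:

```shell
# Hypothetical single-stream throughput test; dd prints the rate at the end.
# conv=fdatasync forces the data to stable storage before the timer stops,
# so the figure is not just the page cache absorbing the writes.
TARGET=${TARGET:-/tmp/ddtest.bin}   # on a real server: a file on the ZFS volume
dd if=/dev/zero of="$TARGET" bs=1M count=128 conv=fdatasync
```

The iozone number came from a multithreaded run; a command of the shape `iozone -t 8 -s 4g -r 1m -i 0 -i 1` (8 threads, 4 GB files, 1 MiB records, write and read tests) is typical, though the exact parameters behind the 28 GB/s figure are not stated here.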
The full data sheet and server specs for the ZFS-600 can be found here: http://ittemple.com/products/