Nuke reading DPX sequences very slowly from Mac OSX server.
posted by Todd Smith  on Feb. 2, 2016, 11:10 a.m. (4 years, 6 months, 1 day ago)
Is the single OSX server connected to both the backup volume and the main storage array? What is the brand of the storage array, and how many spindles (hard drives) are behind it?
First and foremost, if your array is already busy, you should schedule your backups for a time when nobody needs to work interactively.
Is there a renderfarm for Nuke, or does each artist render locally?  If there is a Nuke render farm, how many threads (processor cores) is it set to use?  
Have you looked at the Activity Monitor on the OSX server to see what's happening with RAM/CPU or any RAID controller utilities available to you during these periods of heavy use?
Nuke artists *should* be able to cache read nodes locally (on their workstation), especially if they have large and/or fast local drives.  This would take load off your AFP server and allow them to work faster.
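As a rough illustration, local caching can be as simple as mirroring a sequence to the workstation before pointing a Read node at the copy. The helper below is a hypothetical sketch (the function name and paths are made up, and it is plain Python, not a Nuke API call):

```python
# Sketch: mirror a DPX sequence from the AFP mount into a local cache
# directory so Read nodes can use the local copy. Hypothetical helper,
# not part of Nuke -- paths are examples only.
import shutil
from pathlib import Path

def localize_sequence(server_dir: str, cache_root: str) -> Path:
    """Copy every .dpx frame in server_dir into a mirrored local cache dir."""
    src = Path(server_dir)
    dst = Path(cache_root) / src.name
    dst.mkdir(parents=True, exist_ok=True)
    for frame in sorted(src.glob("*.dpx")):
        target = dst / frame.name
        # Skip frames already cached with a matching size.
        if not (target.exists() and target.stat().st_size == frame.stat().st_size):
            shutil.copy2(frame, target)
    return dst
```

Re-running it is cheap because already-cached frames are skipped, so it could run per-shot as artists open scripts.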
Todd Smith
Head of Information Technology
soho vfx | 99 Atlantic Ave. Suite 303, Toronto, Ontario M6K 3J8
office: (416) 516-7863 | fax: (416) 516-9682 | web: sohovfx.com

I have a file server performance issue that's been bugging me for a while, and I'm hoping you all can help...


Nuke (9.0v5) is reading in DPX sequences at around 2-3fps when network traffic is low.  When traffic is high or automatic backups are running, read times can drop to 2-3 seconds per frame.   This becomes a huge time-sink when nearing the end of an episode and an artist is making small changes to a dozen shots per day.


My background:  I've been a character animator for the last 10 years, but I've always enjoyed tinkering with hardware and software and have become the de facto tech guy at my latest studio because of it.  I'm currently learning bash and Python to help manage the render farm and automate mundane stuff.  So please use small words and provide pictures when possible; I'm pretty new to this stuff.  :)


The setting:  A small studio doing episodic television and commercials with about 15 artists, though we'll ramp up to about 30 during heavy production.  About 8-10 of the artists would be Nuke compositors.  We're all Mac OS for the time being.



Now here's a whole lot of info I've collected, since I'm not sure what's relevant and what isn't.


Current file server configuration (I did not build this; we contract with an outside company for hardware setup and support):


Cisco SG500X-48  Switch.

Server: Mac Pro 4,1 (2009) - dual quad-core Xeon E5520 @ 2.26GHz, 32GB RAM, OS X 10.9.5

RAID (1+0 configuration) storage 44TB w/ 66TB backup via ChronoSync

RAID card = Areca ARC-1883  -  http://www.areca.com.tw/products/1883.htm

Ethernet card = Small Tree P2E10G-2-XR 10GbE, running at x8 PCIe width

Volumes are shared via AFP.



Current Test results:


iperf3 - 940 Mb/s over Ethernet = theoretical max of 117.5 MB/s

Blackmagic Disk Speed Test - local write (running locally on the server): avg. 1800 MB/s; local read: avg. 1000 MB/s

Blackmagic Disk Speed Test - network write (shared volume mounted on a workstation): avg. 105 MB/s; network read: avg. 105 MB/s


DPX sequences average about 8MB per frame, giving a theoretical max of about 13fps (at the measured 105 MB/s) to the compositing machines.


A 1080p, 156-frame DPX sequence took 68 seconds to read in (just over 2fps), and Activity Monitor reported that Data Received topped out around 14 MB/s.  So about 13% of the theoretical max transfer.
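As a sanity check, the figures quoted above can be plugged into a quick back-of-envelope calculation (only numbers from this post, no new measurements):

```python
# Back-of-envelope check using the figures quoted in the post.
link_MBps = 940 / 8            # iperf3: 940 Mb/s ~= 117.5 MB/s
frame_MB = 8                   # average DPX frame size
frames, seconds = 156, 68      # measured Nuke read-in of the test sequence

wire_limit_fps = link_MBps / frame_MB        # ~14.7 fps if the link were the only limit
measured_fps = frames / seconds              # ~2.3 fps actually observed
avg_MBps = frames * frame_MB / seconds       # ~18 MB/s implied average throughput

print(f"{wire_limit_fps:.1f} fps possible at wire speed, "
      f"{measured_fps:.1f} fps measured "
      f"({measured_fps / wire_limit_fps:.0%} of the wire limit)")
```

(The post's "about 13%" figure uses the 14 MB/s Activity Monitor peak rather than the implied average; either way, the reads are nowhere near link speed.)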

Simply copying the same sequence to the workstation takes about 16 seconds.


Our current show was shot on Arri Alexas, and a 2K (2048x1152) ProRes 4444 clip straight from the Alexa reads in at about 18-20fps. Activity Monitor reports that Data Received tops out around 30 MB/s, about 28% of the theoretical max, but nearly 10x faster than the DPX sequence from the Nuke artist's perspective.



I get that there's a fair bit of overhead when transferring one hundred 8MB files as opposed to one 800MB file, but does that seem right to you all?  Am I delusional about what our little server should be able to do?
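One way to put a number on that overhead from the measurements above: subtract the time the raw bytes should take at the measured 105 MB/s, then divide the remainder by the frame count. This is rough (it assumes the 16-second copy and the 68-second Nuke read moved the same bytes), but it suggests per-file latency, not bandwidth, is the bottleneck:

```python
# Rough per-frame overhead estimate from the two measurements in the post.
frames = 156
frame_MB = 8
net_MBps = 105                               # Blackmagic network read speed

payload_s = frames * frame_MB / net_MBps     # ~11.9 s just to move the bytes
copy_s = 16                                  # Finder copy of the same sequence
nuke_s = 68                                  # Nuke read-in of the same sequence

copy_overhead_ms = (copy_s - payload_s) / frames * 1000   # ~26 ms per frame
nuke_overhead_ms = (nuke_s - payload_s) / frames * 1000   # ~360 ms per frame

print(f"copy: ~{copy_overhead_ms:.0f} ms/frame overhead, "
      f"Nuke: ~{nuke_overhead_ms:.0f} ms/frame overhead")
```

Hundreds of milliseconds of fixed cost per frame points at per-file round-trips (AFP metadata lookups, open/close, permission checks) rather than the array's sequential throughput.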



So this boils down to two main questions:


1.  This feels like a RAID issue to me.  What sort of settings could I look into on the RAID to minimize read times on image sequences, or is this the best I can expect from our setup?

2.  What advantages (if any) does working with DPX sequences offer when the raw footage is shot in ProRes 4444?



Thanks for reading through all of that.  Any insight would be appreciated!


 - JG




Response from J Griffin @ Feb. 2, 2016, 11:43 a.m.

Yes, the server is connected to both the main storage array and the backup array.  Both enclosures are CineRAID, loaded with Seagate drives: 24 4TB drives (ST4000NM0023) in the main array and 12 6TB drives (ST6000DX000-1H217Z) in the backup array.

Backup scheduling has been a point of contention.  The producers and owner of the company are used to the hourly backups offered by Time Machine and want the same kind of system in place for all project data, which is totally understandable.  I've negotiated them down to every two hours so far.

Most Nuke rendering is done locally but we have two render-only nodes on the farm that can be used when needed.  They're both 6-core (12 threads) machines.

During heavy use, the CPU usage on the server remains fairly low; I think I've seen it hit 50% once.  RAM usage has been a problem, as the server will regularly fill memory with cache files.  We were rebooting the server every couple of weeks to clear it out, but now use an app, "Memory Monitor," to free memory when memory pressure hits a certain threshold.

Yes, Nuke caching works fine for the artists, and if they're only working on a handful of shots per day, this isn't a huge issue.  Almost all of the complaints about this come from the two lead compositors, since they are jumping in and out of many, many shots every day.


 - JG
