SuperMicro TwinBlades, are they good?
posted by Michael Miller  on Feb. 15, 2017, 10:22 a.m. (3 months, 8 days ago)
16 Responses     0 Plus One's     0 Comments  

I am seriously considering building a render farm using one of these for density, space, and cost-savings reasons, but I don't see many people talk about them in forums and was wondering if anybody here had any first-hand experience with them and how they compare to the more expensive solutions from HP or Dell. Thank you in advance!


Thread Tags:
  render 

Response from Michael Miller @ March 12, 2017, 5:37 a.m.

Thanks for all the good advice, but I am probably just going to wait until SuperMicro releases the new 8U 20-node blade servers. https://www.supermicro.com/newsroom/pressreleases/2017/press170227_SuperBlade.cfm


0 Plus One's     0 Comments  
   

Response from Anonymous @ Feb. 23, 2017, 6:55 p.m.

I'm a bit late to this thread..


Just to add that we generally like the SM twin blade chassis. Best value for money. Yes, they stick out the front of the chassis and look a bit funky.


We too have had issues with the sticking power buttons and the high fan noise.


We did some work on the noise side and found that downgrading the firmware for the management controller helped a lot in that regard.


From my notes: "We noticed that with newer firmware on the CMM the fan speed sticks to high. The new CMM firmware version 02.05.82 has been designed to boost the power supply fan speeds to full if it detects some of the nodes' CPU temps running above 65 degC."


Downgrading to cmm020579.bin resolves the fan speed issue, for us anyway.
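For anyone wanting to confirm their nodes are actually tripping that 65 degC condition before committing to a firmware downgrade, a quick poll of the node BMCs does the job. Purely an illustrative sketch (not from my notes): it shells out to ipmitool over lanplus, and the BMC addresses, credentials and threshold below are placeholders.

    #!/usr/bin/env python3
    # Hypothetical sketch: poll each node's BMC and flag CPU temperature
    # sensors above the 65 degC threshold mentioned above.
    # Requires ipmitool; hosts and credentials are placeholders.
    import re
    import subprocess

    NODES = ["10.0.40.%d" % i for i in range(1, 21)]   # placeholder BMC IPs
    USER, PASSWORD = "ADMIN", "ADMIN"                  # placeholder credentials
    THRESHOLD_C = 65

    def cpu_temps(host):
        out = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host, "-U", USER, "-P", PASSWORD,
             "sdr", "type", "temperature"],
            capture_output=True, text=True, timeout=30).stdout
        temps = []
        for line in out.splitlines():
            # Typical line: "CPU1 Temp | 30h | ok | 3.1 | 45 degrees C"
            if "CPU" in line:
                m = re.search(r"(\d+)\s*degrees C", line)
                if m:
                    temps.append(int(m.group(1)))
        return temps

    for node in NODES:
        try:
            hot = [t for t in cpu_temps(node) if t > THRESHOLD_C]
            if hot:
                print("%s: CPU temps over %dC: %s" % (node, THRESHOLD_C, hot))
        except Exception as exc:
            print("%s: query failed (%s)" % (node, exc))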


Dylan


0 Plus One's     0 Comments  
   

Response from Jean-Francois Panisset @ Feb. 16, 2017, 6:20 p.m.
The SBI-7228R-T2X TwinBlades, which support the V3/V4 Xeons, have a "native" Mellanox 10GbE NIC as well as dual Intel i350 GbE ports. The SBM-XEM-X10SM 10GbE switch module does have 4 external 10GbE uplinks, so you end up with 2.5:1 oversubscription unless you want to use the SBM-XEM-002M passthrough module and an external switch.
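Back-of-envelope on that 2.5:1 figure (illustrative only; the ten-downlinks-per-module count is my assumption, not from the spec sheet):

    # Rough oversubscription arithmetic for the 10GbE switch module.
    # Assumes ten 10GbE downlinks feeding the module (assumption) and the
    # four external 10GbE uplinks mentioned above.
    downlink_gbps = 10 * 10   # ten blades at 10GbE each
    uplink_gbps = 4 * 10      # four external 10GbE uplinks
    print(downlink_gbps / uplink_gbps)   # -> 2.5, i.e. 2.5:1 oversubscription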

Your NVMe based storage sounds really interesting, it does sound like with an all-flash back end one could get really high sequential and random performance through sheer commodity hardware brute force.

JF




0 Plus One's     0 Comments  
   

Response from David Leach @ Feb. 16, 2017, 3:25 p.m.

+1


4-way nerd conference call!





0 Plus One's     0 Comments  
   

Response from Doug Meyer @ Feb. 16, 2017, 11:15 a.m.
+1, super interested in NVMe for render filers.
Also considering rolling my own for workstations with the 4x M.2 Amfeltec card and 4x 2TB Samsung 960 Pros.
How does NVMe behave when striping blades together?







0 Plus One's     0 Comments  
   

Response from Ali Poursamadi @ Feb. 16, 2017, 10:50 a.m.
Saker,
I'm also very curious regarding the stability and reliability of the platform. Have you tried running FreeNAS on it by any chance?
-Ali



0 Plus One's     0 Comments  
   

Response from William Sandler @ Feb. 16, 2017, 10:30 a.m.
Saker,
I would love to see a detailed post about your experience with those SM NVMe servers. I'm eyeing them pretty hard for ZFS but am curious to hear what you did with them!


William Sandler
All Things Media, LLC
Office: 201.818.1999 Ext. 158
william.sandler@allthingsmedia.com


0 Plus One's     0 Comments  
   

Response from Jeremy Lang @ Feb. 15, 2017, 9:20 p.m.
+11
Love the twin blades. Saker's note on power is *very important*, and even that might not be enough if you max out the CPU and RAM! Know that they're LOUD (there's an option to put the chassis in 'Office Mode' that helps a little); if your server room isn't acoustically contained it'll be a problem, and you should buy ear protection for working behind your racks once these are in. Also, their COOLING load is not insignificant! I just tried adding a second 10g switch to a chassis, which enables the second gigabit ethernet per machine, and with Win10 on the render nodes and recent OneFS on the Isilon it automagically started being able to pull 2Gbit at a time, no additional setup/LAG/trunk/etc. SMB3 for the win, very cool!
The 14-blade Micro I didn't love, but it is denser and easier to power. Not sure of the amperage draw, but you get to use standard C13-to-C14 cables, which are easier. (Not even positive they require 220V...) The 40g-to-10g is nice, though the switches can be an extra PITA to config. They stick out a bit; if your rails are not recessed and you haven't thrown out your rack doors yet, you might have an issue closing them (we did on our APC racks). They DO NOT have the built-in KVM mechanism that aggregates everything and gives you a single connection for a crash cart! Straight IPMI is getting pretty good, though. The RAM they use is ridiculously small (short) and hard to find; building these with 256GB isn't much of an option yet, and it'll always be more expensive than RAM for the twin blades.
Ghassan usually has both in stock at ACECA.
On both setups the switch OS varies pretty widely between versions, so it's probably a good idea to update the firmware on either if you're not just single-dumb connecting them. It's especially noticeable on rentals, some of which may be significantly older than others. Seriously, the commands change... grr.



______________
Jeremy M. Lang
it4vfx



0 Plus One's     0 Comments  
   

Response from Saker Klippsten @ Feb. 15, 2017, 9:05 p.m.
Short answer is: yes, we can. All sims easily max them out loading the cache files for rendering. Remember, each node connects to the backplane at 1 gig, and then you have that 3-port 10-gig switch, unless you buy the 10-gig add-on cards for each motherboard, which is expensive.
The new MicroBlade systems are all native dual-port 10 gig to the backplane, and then you have the option of 10-gig passthrough cards or a dual-port 40-gig switch.
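To put rough numbers on why the 1-gig links get maxed out (illustrative only; the 50 GB cache size and the efficiency factor are made-up examples):

    # Illustrative only: time to pull a sim cache across the node link.
    # The 50 GB cache size and 90% efficiency factor are made-up examples.
    cache_gb = 50
    efficiency = 0.9
    for link_gbps in (1, 10):
        seconds = cache_gb * 8 / (link_gbps * efficiency)
        print("%d GbE: ~%.0f s to load %d GB" % (link_gbps, seconds, cache_gb))
    # -> roughly 440 s at 1GbE vs ~44 s at 10GbE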

Maybe this should be another thread...

We are skipping 10 gig and doing 40/100 on the desktop/server side. Pictured here is the first batch that just showed up: (5) Dell Force10 6100 switches with 3x 16-port 40-gig cards and 1x 8-port 100GbE card in each 2U chassis; they also have 2 ports of 10GbE to the right...

[image: photo of the new switches]

Besides NVMe on the workstation side, we are also building our own NVMe servers and ditching the normal go-to storage vendors for our performance-based storage. We're using the 48x 2.5" bay Supermicro NVMe chassis with Intel P3700s and two 100GbE cards from Mellanox per chassis. In our tests we can easily saturate both cards. We unleashed our farm on it and it did not bat an eye, even when pulling drives out to simulate a failure. Here is a picture showing two 40-gig cards, each on their own PCIe 3.0 x8 slot. We ran out of available render nodes during this screen capture, but we have saturated two 100GbE NICs since.

[image: screen capture of the NIC throughput test]
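Rough numbers on why saturating two 100GbE cards is plausible with that chassis (back-of-envelope; the ~2.8 GB/s per-drive figure is an approximate P3700 sequential-read spec, everything else is illustrative):

    # Back-of-envelope: aggregate NVMe read bandwidth vs. two 100GbE NICs.
    # ~2.8 GB/s per Intel P3700 is an approximate spec figure, not a measurement.
    drives = 48
    per_drive_gbs = 2.8            # GB/s sequential read, approximate
    nic_gbs = 2 * 100 / 8          # two 100GbE cards ~= 25 GB/s
    print("drives: ~%.0f GB/s vs NICs: ~%.0f GB/s" % (drives * per_drive_gbs, nic_gbs))
    # Even a fraction of the 48 bays out-runs the network, so the NICs
    # (or PCIe lanes) become the bottleneck rather than the flash.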




0 Plus One's     0 Comments  
   

Response from Bryce Evans @ Feb. 15, 2017, 3:30 p.m.
Another +1, I deployed/managed dozens of these and never had a problem.
JFP, to manage them I used the (crappy) SuperMicro SMCIPMITool.jar, which is their command-line interface tool, and wrote bash wrappers around it to do stuff.
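In the same spirit, a sketch of that kind of wrapper, in Python rather than bash (node list and credentials are placeholders, and the power subcommands are from memory, so verify them against the SMCIPMITool docs for your version):

    #!/usr/bin/env python3
    # Sketch of a wrapper around Supermicro's SMCIPMITool.jar, similar in
    # spirit to the bash wrappers mentioned above. Paths, credentials and
    # subcommand names should be checked against your SMCIPMITool version.
    import subprocess
    import sys

    SMCIPMITOOL = "/opt/SMCIPMITool/SMCIPMITool.jar"   # placeholder path
    USER, PASSWORD = "ADMIN", "ADMIN"                   # placeholder credentials

    def node_power(host, action):
        """action: 'status', 'up', 'down' or 'reset'."""
        result = subprocess.run(
            ["java", "-jar", SMCIPMITOOL, host, USER, PASSWORD,
             "ipmi", "power", action],
            capture_output=True, text=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        # e.g.:  python power.py reset 10.0.40.12 10.0.40.13
        action = sys.argv[1] if len(sys.argv) > 1 else "status"
        for host in sys.argv[2:]:
            print("%s: %s" % (host, node_power(host, action)))

Handy for kicking a node whose front power button is stuck without walking to the rack.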



0 Plus One's     0 Comments  
   

Response from Jean-Francois Panisset @ Feb. 15, 2017, 3:20 p.m.
We also use mostly TwinBlade chassis for our render farm:

- as others have said, the PSUs are stupidly loud, probably much louder than anything else in your machine room
- built in KVM and network switch costs are essentially free compared to what you will spend on the Xeon CPUs and memory
- no silly licensing to enable "enterprise" features
- software is kind of amusingly bad and ugly but will still get you through: SuperMicro is definitely a hardware company
- minimal to non-existent documentation for the software side of things, I haven't really found tools or APIs to manage configuration. This is fine if you have a few chassis, you just set aside some time and click-click-click in the web GUI, but would love to know how the folks with larger deployments manage this

One of the main advantages of SuperMicro gear is that it allows smaller facilities to get pricing levels that the larger entities can get from the Dell/HP/Lenovos of the world. And for render nodes it's really all about "what's the cheapest possible way to stick Xeons and RAM in a box" with the bare minimum of manageability, and in that context the TwinBlades are very good bang for the buck.

JF





0 Plus One's     0 Comments  
   

Response from Dan Young @ Feb. 15, 2017, 12:55 p.m.
Having had a variety of mixed technologies, I'd suggest SM and Dell are the only things worth a moment of your time, and frankly the extra expense isn't worth it unless you have direct agreements with either vendor.
You'll find SM is thin on resources to run, built just well enough to be considered good, and is going to turn out the hits for a while.
I back SuperMicro, with specific respect to Quattro, twin1U, blades, and render applications of all kinds more than almost any other vendor I can think of.
Taiwan FTW
DY

--

Framestore
Dan Young Lead Systems Engineer
London New York Los Angeles Montréal
T+1 212 775 0600
135 Spring Street, New York NY 10012
Twitter Facebook framestore.com

0 Plus One's     0 Comments  
   

Response from Dave Young @ Feb. 15, 2017, 11:20 a.m.

Hey Saker,


Slightly OT, but are you finding you're maxing out a 10GigE link to your chassis, thus necessitating a move to 40, or are you just future-proofing?





0 Plus One's     0 Comments  
   

Response from Saker Klippsten @ Feb. 15, 2017, 11:10 a.m.
+1, we have well over 500 twin blades. Buttons do get stuck, but we use IPMI and the remote GUI to power on and off, so we're not affected.
Make sure you run 2x 30-amp breakers per chassis. They require L6-30P: http://aceca.com/blades-20nodes-power.html
Also, instead of using those PDUs pictured, we use Y-cables plugged directly into the chassis and into the L6-30P outlet. Cleaner cabling, and cheaper as well.

We are moving to the 14-blade Micro, as we can get 40Gbit uplinks and 10 gig per node: http://aceca.com/blades-14nodes.html
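For anyone sizing circuits, rough arithmetic on what 2x 30 A buys you per chassis (the 208 V figure and the 80% continuous-load derating are assumptions for illustration):

    # Rough usable power per TwinBlade chassis from 2x 30A circuits.
    # 208V and the 80% continuous-load derating are assumptions.
    circuits, amps, volts, derate = 2, 30, 208, 0.8
    usable_watts = circuits * amps * volts * derate
    print("~%.0f W usable per chassis" % usable_watts)               # ~9984 W
    print("~%.0f W per node across 20 nodes" % (usable_watts / 20))  # ~499 W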


0 Plus One's     0 Comments  
   

Response from Greg Dickie @ Feb. 15, 2017, 10:30 a.m.
We've used them along with Dell. Definitely not as polished as the comparable Dell, but I can't really say anything bad about them. Super convenient having the iKVM built in without paying the "enterprise" tax that Dell charges.
HTH,
Greg



--


Greg Dickie
just a guy
514-983-5400

0 Plus One's     0 Comments  
   

Response from Dave Young @ Feb. 15, 2017, 10:30 a.m.

We have 140 of them in NY, 80 in CHI, and an unfathomable number in LA -- they're great.


The power supplies sound like jet engines and the little buttons on the front have a nasty habit of getting stuck in a pressed state, but I still back 'em.






0 Plus One's     0 Comments