Supermicro SSD/NVMe Servers and MS Storage Spaces 2016
posted by Ali Poursamadi on Feb. 20, 2017, 3:35 p.m.
Thanks Saker for sharing this, it's very valuable. I was talking with the iXsystems guys about running FreeNAS on similar hardware (SSG-2028R-E1CR48L) and they mentioned it might have thermal issues; I was wondering what your experience is with that system in terms of heat/power.

-Ali
On Mon, Feb 20, 2017 at 11:24 AM, Saker Klippsten <sakerk@gmail.com> wrote:
This is a continuation of the post titled "SuperMicro TwinBlades, are they good?" I'll reply back with more data as we collect it. :)

While we are lovers of Linux and open source, we are mostly a Windows desktop shop, and for now we have opted to go with the Storage Spaces functionality built into Windows Server 2016. If you have not checked it out, I highly recommend watching the videos linked below, doing some testing, and forming your own opinion on its performance, simplicity, and ease of use, plus all the included benefits: snapshots, quotas, scaling, and storage tiers.
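
To make the "simplicity" claim concrete: the whole setup is a handful of PowerShell cmdlets. Here is a minimal sketch driving the stock Storage Spaces cmdlets from Python; the pool and volume names and the size are hypothetical placeholders, so treat it as an illustration rather than our exact build.

    # Minimal sketch: create a Storage Spaces pool and a ReFS volume by
    # shelling out to the stock cmdlets. Names and sizes are hypothetical.
    import subprocess

    def ps(command: str) -> str:
        """Run one PowerShell command, raising on failure, returning stdout."""
        result = subprocess.run(
            ["powershell.exe", "-NoProfile", "-Command", command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    # Claim every disk that is eligible for pooling into a new pool.
    ps("New-StoragePool -FriendlyName 'NVMePool' "
       "-StorageSubSystemFriendlyName 'Windows Storage*' "
       "-PhysicalDisks (Get-PhysicalDisk -CanPool $true)")

    # Carve a mirrored ReFS volume out of the pool.
    ps("New-Volume -StoragePoolFriendlyName 'NVMePool' -FriendlyName 'Cache' "
       "-FileSystem ReFS -ResiliencySettingName Mirror -Size 2TB")
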
As many know, we run a pretty lean shop here at Zoic: ~430+ users and 5 full-time IT folk plus me (someone has to change the light bulbs) across 3 locations. So while we could use a Unix setup, we opted for Server 2016 because, well, we are all Windows on the desktop/domain/email side. We will be 100% Windows 10 by the end of the 3rd quarter; as projects wrap up, we can roll artists to the new environment. More on those specs later, but think POC1 below with only a single 800GB NVMe drive, a 1TB EVO system drive, a single 40GbE port, and 256GB of RAM.
We had a requirement in the last month for lots of sims and rendering with Houdini. Our Isilon clusters and our Qumulo were having a tough time keeping up in our tests, and the cost to scale those out was going to be too much for our budget. We had been testing our POC1 (below) with great success for serving out apps, acting as an RV box, and running one-off sets of renders for Maya/V-Ray and Houdini, so we opted to invest in a larger setup.
These are all new systems for us, so more battle testing will ensue in the next few months as we hit them hard with 20 Houdini artists, 50+ dedicated sim nodes, and about 500 Mantra-based render nodes reading gigantic cache files.
POC2 and POC3 will be tiered together using Storage Spaces Direct in Server 2016. We will be collecting lots of valuable data to share back here when everything is up and running, or burning up :) hopefully not the latter.
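
For reference, the S2D side is similarly compact: per Microsoft's docs you validate, build a cluster with no shared storage, and run one enable cmdlet, which claims each node's local drives into a single clustered pool. A minimal sketch, with hypothetical node and cluster names:

    # Minimal sketch: form a two-node cluster and enable Storage Spaces
    # Direct via the stock cmdlets. Node/cluster names are placeholders.
    import subprocess

    def ps(command: str) -> None:
        subprocess.run(["powershell.exe", "-NoProfile", "-Command", command],
                       check=True)

    NODES = "POC2, POC3"  # hypothetical node names

    ps(f"Test-Cluster -Node {NODES} "
       "-Include 'Storage Spaces Direct', 'Inventory', 'Network'")
    ps(f"New-Cluster -Name S2DCluster -Node {NODES} -NoStorage")
    ps("Enable-ClusterStorageSpacesDirect")  # pools each node's local drives
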
We have another fun project following the delivery of this one: a 30-minute 360-degree 4K projection setup with an obnoxious resolution. We will then flip this system from the Houdini users over to Nuke/Nuke Studio, Maya/V-Ray rendering, and some Flame, all working off this storage.
If all goes well, we will scale this out and move off Isilon as our primary performance storage onto these Supermicro systems by the 3rd quarter of this year; we have a great opportunity to R&D this kit in production. We still love Isilon and Qumulo, but currently their performance in the area of NVMe is lacking, and while many bag on MS (myself included), so far Storage Spaces has impressed me with its performance and simplicity. Only time will tell on reliability.
Our next project after that is tiering to cold storage using 90-bay Supermicro chassis and 10TB helium drives, but that's Q4. (If anyone has played with those, speak up!)
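
For scale, the raw-vs-usable arithmetic on one of those shelves looks roughly like this; the resiliency schemes and efficiency figures are my assumptions using the nominal Storage Spaces numbers, and real usable capacity also loses reserve and metadata overhead.

    # Rough capacity sketch for a 90-bay shelf of 10TB helium drives.
    BAYS, DRIVE_TB = 90, 10
    raw_tb = BAYS * DRIVE_TB  # 900 TB raw
    # Nominal efficiencies: two-way mirror = 50%; dual parity climbs with
    # drive count (~72% assumed here as a mid-range figure).
    for scheme, eff in [("two-way mirror", 0.50), ("dual parity", 0.72)]:
        print(f"{scheme}: ~{raw_tb * eff:.0f} TB usable of {raw_tb} TB raw")
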
Here are some links and videos to check out on Storage Spaces Direct in Server 2016:
https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview
https://www.youtube.com/watch?v=raeUiNtMk0E
https://www.youtube.com/watch?v=l2QBvNwJx64
https://www.youtube.com/watch?v=5rmkS5kwijU


POC1 is a 1U Supermicro with two dual-port 40GbE Intel XL710 cards. Remember that a dual-port 40GbE card can outrun a PCIe 3.0 x8 slot, so we put the two cards in separate x16 slots, which also leaves headroom for single-port 100GbE cards (see the bandwidth sketch after this parts list).
(1) SuperServer 1028U-TN10RT+
https://www.supermicro.com/products/system/1U/1028/SYS-1028U-TN10RT_.cfm

(2) Intel Xeon E5-2640 v4
https://ark.intel.com/products/92984/Intel-Xeon-Processor-E5-2640-v4-25M-Cache-2_40-GHz

(8) 16GB DDR4-2400 ECC REG DIMM

(6) Intel P3600 400GB NVMe
http://www.intel.com/content/www/us/en/solid-state-drives/ssd-dc-p3600-spec.html

(2) Intel XL710-QDA2
http://www.intel.com/content/www/us/en/ethernet-products/converged-network-adapters/ethernet-xl710.html
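
A quick back-of-envelope check on that slot math (nominal figures, ignoring everything but PCIe encoding overhead):

    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
    lane_gbs = 8 * (128 / 130) / 8           # ~0.985 GB/s usable per lane
    x8, x16 = 8 * lane_gbs, 16 * lane_gbs    # ~7.9 GB/s and ~15.8 GB/s
    port_40gbe = 40 / 8                      # 5.0 GB/s per 40GbE port

    print(f"dual-port 40GbE needs {2 * port_40gbe:.1f} GB/s; "
          f"x8 supplies ~{x8:.1f} GB/s, x16 supplies ~{x16:.1f} GB/s")
    # -> 10.0 GB/s of NIC against ~7.9 GB/s of x8: the card outruns the
    #    slot, hence x16, which also covers a 100GbE port (12.5 GB/s).
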


POC2 is the 2U Supermicro with two Mellanox 100GbE cards.
(1) SuperStorage Server SSG-2028R-NR48N
https://www.supermicro.com/products/system/2U/2028/SSG-2028R-NR48N.cfm

(2) Intel Xeon E5-2680 v4
https://ark.intel.com/products/91754/Intel-Xeon-Processor-E5-2680-v4-35M-Cache-2_40-GHz

(8) 16GB DDR4-2400 ECC REG DIMM

(2) Mellanox ConnectX-4 MCX456A-ECAT
http://www.mellanox.com/related-docs/prod_adapter_cards/PB_ConnectX-4_VPI_Card.pdf

(24) Intel P3600 400GB NVMe, expandable to 48
http://www.intel.com/content/www/us/en/solid-state-drives/ssd-dc-p3600-spec.html

POC3 is a 2U Supermicro like the one above, but using Samsung 4TB EVO SATA SSDs.
(1) SuperStorage Server SSG-2028R-E1CR48L
http://www.supermicro.com/products/system/2U/2028/SSG-2028R-E1CR48L.cfm

(2) Intel Xeon E5-2680 v4
https://ark.intel.com/products/91754/Intel-Xeon-Processor-E5-2680-v4-35M-Cache-2_40-GHz

(8) 16GB DDR4-2400 ECC REG DIMM

(2) Mellanox ConnectX-4 MCX456A-ECAT
http://www.mellanox.com/related-docs/prod_adapter_cards/PB_ConnectX-4_VPI_Card.pdf

(24) Samsung 850 EVO 4TB, expandable to 48
http://www.samsung.com/us/computing/memory-storage/solid-state-drives/ssd-850-evo-25-sata-iii-4tb-mz-75e4t0b-am


Thread Tags:
  discuss-at-studiosysadmins 

Response from Jai Bhatia @ July 26, 2017, 12:39 p.m.

Hi Saker,

Just curious how POC3 is going?




Response from Philippe Chotard @ Feb. 21, 2017, 12:25 p.m.
Thanks for sharing, Saker. Are you doing dual parity? If so, how's the machine's load when hammered by your render farm?
Were you able to estimate the P3600 lifetime under your workload? Did you go for ReFS?
You have a great setup there, no "network-is-slow" for you any more ;) Oh wait.. I remember saying the same thing when we switched from 100Mb to 1Gb switches and got our first NetApp .. :(

Response from Jean-Francois Panisset @ Feb. 20, 2017, 7:20 p.m.
Very interesting stuff, and thanks for the detailed info.

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview

says "Storage Spaces Direct is included in Windows Server 2016 Datacenter."

So that presumably means you need a license of Server 2016 Datacenter on each node you join to the storage cluster, right?

I'm not finding anything about NFS support in the NAS scenario; are you planning to use SMB for your Flames accessing this storage?

One specific scenario where I could see this being very interesting is shared storage for a Hyper-V cluster, but it doesn't seem like Storage Spaces Direct is available in Hyper-V Server 2016, so you would need "full fat" Datacenter licenses for each node in your Hyper-V cluster.

Thanks again for sharing, lots of cool stuff to read / videos to watch.

JF





Response from Ali Poursamadi @ Feb. 20, 2017, 3:45 p.m.
If you could also comment on the heat dissipation of the NVMe version (2028R-NR48N), that would be great.

-Ali
