Interconnect technologies

InfiniBand

Mellanox has developed a new architecture, called Connect-IB, for high-performance InfiniBand adapters. The new adapter doubles the throughput of the company’s FDR InfiniBand gear, supporting speeds beyond 100 Gbps. With this adapter, Mellanox aims to re-sync the interconnect with the performance curve of large clusters, providing a balanced ratio of computational power to network bandwidth. Connect-IB was designed as a foundational technology for future exascale systems and ultra-scale datacenters.

Connect-IB increases performance for both MPI- and PGAS-based applications. The architecture also supports the latest GPUDirect RDMA technology, known as GPUDirect v3, which allows direct GPU-to-GPU communication over the network, bypassing the host CPU and operating system. The new adapters can process up to 130 million messages per second, compared with 33 million for the current generation. Latency is 0.7 µs, equal to that of the latest ConnectX hardware for FDR InfiniBand.
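
As an illustration of the application-level usage pattern that GPUDirect RDMA accelerates, the minimal sketch below passes GPU device memory directly to MPI calls. It assumes a CUDA-aware MPI implementation (for example, Open MPI built with CUDA support) and exactly two ranks; the buffer size and rank roles are arbitrary choices for the example.

 /* Minimal sketch: handing GPU device memory directly to MPI, the pattern
  * that GPUDirect RDMA accelerates.  Assumes a CUDA-aware MPI build and
  * two ranks; illustrative only. */
 #include <mpi.h>
 #include <cuda_runtime.h>
 #include <stdio.h>
 
 #define N (1 << 20)   /* 1 Mi floats per message */
 
 int main(int argc, char **argv)
 {
     int rank;
     float *d_buf;                      /* device (GPU) buffer */
 
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
 
     cudaMalloc((void **)&d_buf, N * sizeof(float));
 
     if (rank == 0) {
         /* The device pointer goes straight to MPI; with GPUDirect RDMA the
          * NIC reads GPU memory without staging through host memory. */
         MPI_Send(d_buf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
     } else if (rank == 1) {
         MPI_Recv(d_buf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
         printf("rank 1 received %d floats into GPU memory\n", N);
     }
 
     cudaFree(d_buf);
     MPI_Finalize();
     return 0;
 }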

Prototypes are currently running in Mellanox labs; samples will be sent to customers in Q3 2012, with general availability expected in early Q4 2012.

Ethernet

40GbE Server and Storage Clusters

The 40 Gigabit Ethernet interconnect solutions with RoCE (RDMA over Converged Ethernet) support have been optimized to deliver the highest performance for compute- and storage-intensive applications. These clusters deliver a more than 80 percent application performance increase compared to 10GbE-based clusters; CAE (Computer-Aided Engineering) applications, for example, demonstrated more than an 80 percent performance increase. For storage access, the solution delivers 4X higher storage throughput, enabling high storage density and significant savings in CAPEX and OPEX. ConnectX-3 PCI Express 3.0 40GbE NICs and SwitchX 40GbE switch systems are available.
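
Because RoCE exposes the same RDMA verbs API as InfiniBand, application code is largely unchanged when moving between the two fabrics. The sketch below, assuming libibverbs and an RDMA-capable NIC (whether an InfiniBand HCA or a 40GbE RoCE adapter), shows only the resource-setup portion: device open, memory registration and completion queue creation. Connection establishment and the actual send/receive work requests are omitted.

 /* Resource-setup sketch with the RDMA verbs API (libibverbs).
  * The same calls apply to InfiniBand HCAs and RoCE NICs. */
 #include <infiniband/verbs.h>
 #include <stdio.h>
 #include <stdlib.h>
 
 #define BUF_SIZE 4096
 
 int main(void)
 {
     int num_devices = 0;
     struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
     if (!dev_list || num_devices == 0) {
         fprintf(stderr, "no RDMA-capable devices found\n");
         return 1;
     }
 
     /* Open the first RDMA device found. */
     struct ibv_context *ctx = ibv_open_device(dev_list[0]);
     printf("using device: %s\n", ibv_get_device_name(dev_list[0]));
 
     /* Protection domain and a registered buffer the NIC may DMA into. */
     struct ibv_pd *pd = ibv_alloc_pd(ctx);
     char *buf = calloc(1, BUF_SIZE);
     struct ibv_mr *mr = ibv_reg_mr(pd, buf, BUF_SIZE,
                                    IBV_ACCESS_LOCAL_WRITE |
                                    IBV_ACCESS_REMOTE_READ |
                                    IBV_ACCESS_REMOTE_WRITE);
 
     /* Completion queue from which work completions would be polled. */
     struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
 
     printf("registered %d bytes, lkey=0x%x rkey=0x%x\n",
            BUF_SIZE, mr->lkey, mr->rkey);
 
     ibv_destroy_cq(cq);
     ibv_dereg_mr(mr);
     free(buf);
     ibv_dealloc_pd(pd);
     ibv_close_device(ctx);
     ibv_free_device_list(dev_list);
     return 0;
 }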
