Interconnect technologies

InfiniBand

Mellanox has developed a new architecture, called Connect-IB, for high-performance InfiniBand adapters. The new adapter doubles the throughput of the company’s FDR InfiniBand gear, supporting speeds beyond 100 Gbps (an FDR 4x link runs at a nominal 56 Gbps, so two ports together exceed 100 Gbps). With this adapter, Mellanox is attempting to re-sync the interconnect with the performance curve of large clusters, with the goal of providing a balanced ratio of computational power and network bandwidth. Connect-IB was designed as a foundational technology for future exascale systems and ultra-scale datacenters.

Connect-IB increases performance for both MPI- and PGAS-based applications. The architecture also features the latest GPUDirect RDMA technology, known as GPUDirect v3, which allows direct GPU-to-GPU communication that bypasses the OS and the CPU. The new adapters can process up to 130 million messages per second, while the current generation delivers 33 million messages per second. The new generation of adapters will have a latency of 0.7 microseconds, equal to that of the latest ConnectX hardware for FDR InfiniBand.
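To put such latency figures in context, point-to-point MPI latency is usually measured with a ping-pong test. Below is a minimal, illustrative sketch in C; it is not a vendor benchmark, and the constants WARMUP, ITERS and MSG are arbitrary choices. It reports the one-way latency as half of the average round-trip time between two ranks; with a CUDA-aware MPI library and GPUDirect RDMA the same exchange pattern can be driven from GPU-resident buffers.

/* Minimal MPI ping-pong latency sketch (illustrative only). */
#include <mpi.h>
#include <stdio.h>

#define WARMUP 1000   /* untimed warm-up exchanges        */
#define ITERS  10000  /* timed exchanges                  */
#define MSG    8      /* message size in bytes (small-message latency) */

static void exchange(int rank, char *buf)
{
    if (rank == 0) {
        MPI_Send(buf, MSG, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, MSG, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        MPI_Recv(buf, MSG, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(buf, MSG, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }
}

int main(int argc, char **argv)
{
    char buf[MSG] = {0};
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    /* Warm up so connection setup is not included in the timing. */
    for (int i = 0; i < WARMUP; i++)
        exchange(rank, buf);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++)
        exchange(rank, buf);
    double t1 = MPI_Wtime();

    if (rank == 0)
        /* one-way latency = half the average round-trip time */
        printf("%d-byte one-way latency: %.2f us\n",
               MSG, (t1 - t0) / ITERS / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}

Run with two ranks placed on different nodes (e.g. mpirun -np 2 -host node1,node2 ./pingpong) so that the network, rather than shared memory, is measured.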

Prototypes are currently running in Mellanox labs, and samples will be sent to customers in Q3, with general availability expected in early Q4 of 2012.

TOP500 news

The InfiniBand Trade Association (IBTA), a global organization dedicated to maintaining and furthering the InfiniBand specification, has announced that, for the first time, InfiniBand has surpassed all other interconnect technologies on the TOP500 list of the world’s fastest supercomputers. The latest list, available at top500.org, was released on June 18, 2012 and shows that InfiniBand is now used by 210 of the 500 systems on the TOP500.

Ethernet

40GbE Server and Storage Clusters

Mellanox’s 40 Gigabit Ethernet interconnect solutions with RoCE (RDMA over Converged Ethernet) support have been optimized to deliver the highest performance for compute- and storage-intensive applications. The clusters deliver a more than 80 percent application performance increase compared to 10GbE-based clusters; CAE (Computer-Aided Engineering) applications in particular demonstrated more than an 80 percent performance increase. For storage access, they deliver 4X faster storage throughput, enabling higher storage density and significant savings in CAPEX and OPEX. ConnectX-3 PCI Express 3.0 40GbE NICs and SwitchX 40GbE switch systems are available.
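Because RoCE exposes the same RDMA verbs interface as InfiniBand, applications and storage stacks written against libibverbs can run over 40GbE without modification. The sketch below is a minimal example assuming the standard libibverbs API (link with -libverbs): it enumerates the RDMA devices on a host and reports whether each device’s first port runs over an InfiniBand or an Ethernet (RoCE) link layer.

/* List RDMA devices and their link layer (InfiniBand vs. RoCE/Ethernet). */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            const char *ll =
                port.link_layer == IBV_LINK_LAYER_ETHERNET   ? "Ethernet (RoCE)" :
                port.link_layer == IBV_LINK_LAYER_INFINIBAND ? "InfiniBand"      :
                                                               "unspecified";
            printf("%-16s port 1: link layer %s\n",
                   ibv_get_device_name(devs[i]), ll);
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}

On a 40GbE node with RoCE-capable NICs such as ConnectX-3, the ports are reported with an Ethernet link layer, while the same verbs code path is used on FDR InfiniBand fabrics.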
