Interconnect technologies

== InfiniBand ==
Mellanox has developed a new architecture, called Connect-IB, for high-performance InfiniBand adapters. The new adapter doubles the throughput of the company's FDR InfiniBand gear, supporting speeds beyond 100 Gbps. With this adapter, Mellanox is attempting to re-align the interconnect with the performance curve of large clusters, with the goal of providing a balanced ratio of computational power to network bandwidth. Connect-IB was designed as a foundational technology for future exascale systems and ultra-scale datacenters.
Connect-IB increases performance for both MPI- and PGAS-based applications. The architecture also features the latest GPUDirect RDMA technology, known as GPUDirect v3, which allows direct GPU-to-GPU communication over the network, bypassing the OS and CPU. The new adapters can process up to 130 million messages per second, while the current generation delivers only 33 million messages per second. The new generation of adapters will have a latency of 0.7 µs, equal to that of the latest ConnectX hardware for FDR InfiniBand.
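
Message rate and latency figures of this kind are typically measured with a small-message ping-pong benchmark between two ranks. The following is a minimal MPI sketch of such a measurement; it is illustrative only, and the file name, message size and iteration count are assumptions rather than part of the Connect-IB announcement.

<syntaxhighlight lang="c">
/* Minimal MPI ping-pong latency sketch (illustrative only).
 * Build and run with two ranks, e.g.:
 *   mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

#define ITERATIONS 10000
#define MSG_SIZE   8            /* small message, latency-dominated */

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_SIZE] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* rank 1 echoes the message back */
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0) {
        /* one-way latency = half the round-trip time per iteration */
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / ITERATIONS / 2.0 * 1e6);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

Run between two nodes connected by the adapter under test, the reported one-way latency can be compared directly against the vendor's quoted figure.
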
Prototypes are currently running in Mellanox's labs, and samples will be sent to customers in Q3, with general availability expected in early Q4 of 2012.
