Interconnect technologies


Mellanox InfiniBand

Mellanox has developed a new architecture, called Connect-IB, for high-performance InfiniBand adapters. The new adapter doubles the throughput of the company’s FDR InfiniBand gear, supporting speeds beyond 100 Gbps. With this adapter, Mellanox is attempting to re-sync the interconnect with the performance curve of large clusters, with the goal of providing a balanced ratio of computational power to network bandwidth. Connect-IB was designed as a foundational technology for future exascale systems and ultra-scale datacenters.

Connect-IB increases performance for both MPI- and PGAS-based applications. The architecture also features the latest GPUDirect RDMA technology, known as GPUDirect v3, which allows direct GPU-to-GPU communication, bypassing the OS and CPU. The new adapters can process up to 130 million messages per second, while the current generation delivers only 33 million messages per second. The new generation of adapters will have a latency of 0.7 microseconds, equal to that of the latest ConnectX hardware for FDR InfiniBand.
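
To illustrate how this kind of GPU-direct data path is typically consumed from application code, here is a minimal sketch assuming a CUDA-aware MPI library built with GPUDirect RDMA support; the buffer size, rank layout and message tag are illustrative and not taken from Mellanox material.

/* Minimal sketch, assuming a CUDA-aware MPI: a device buffer is handed
 * directly to MPI, letting the library move data GPU-to-GPU without
 * staging through host memory. Buffer size and ranks are illustrative. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t n = 1 << 20;                 /* 1M doubles, illustrative */
    double *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(double));

    if (rank == 0)
        MPI_Send(d_buf, (int)n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, (int)n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}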

Prototypes are currently running in Mellanox labs, and samples will be sent to customers in Q3 2012, with general availability expected in early Q4 2012.

Connect-IB

Connect-IB adapter cards provide the highest-performing and most scalable interconnect solution for server and storage systems. Maximum bandwidth is delivered across PCI Express 3.0 x16 and two ports of FDR InfiniBand, supplying more than 100Gb/s of throughput together with consistently low latency across all CPU cores. Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions.

  • 100Gb/s interconnect throughput
  • Unlimited scaling with new transport service
  • 4X higher message rate
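
As an illustration of how a host application can query such an adapter, the sketch below uses standard libibverbs calls to open the first RDMA device found and report the attributes of its first port. It assumes a Linux host with libibverbs installed and at least one HCA present; error handling is trimmed for brevity.

/* Minimal device/port query sketch using standard libibverbs calls.
 * Assumes libibverbs is installed and at least one HCA is present. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(list[0]);
    struct ibv_port_attr port;
    ibv_query_port(ctx, 1, &port);            /* port numbering starts at 1 */

    printf("device: %s  state: %d  active_width: %u  active_speed: %u\n",
           ibv_get_device_name(list[0]), (int)port.state,
           (unsigned)port.active_width, (unsigned)port.active_speed);

    ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}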

Top500 news

The InfiniBand Trade Association (IBTA), a global organization dedicated to maintaining and furthering the InfiniBand specification, has announced that, for the first time, InfiniBand has surpassed all other interconnect technologies in the number of systems on the TOP500 list of the world’s fastest supercomputers. The latest list, available at top500.org, was released on June 18, 2012 and shows that InfiniBand is now used by 210 of the 500 systems on the list.

Intel interconnect solutions

Intel is planning to integrate fabric controllers with its server processors. The company plans to use the recently acquired interconnect technologies from Cray, QLogic and Fulcrum to deliver chips that put what is essentially a network interface card (NIC) onto the processor die. As with other types of processor integration, the idea is to deliver more capability: greater performance, scalability and energy efficiency.

Ethernet

40GbE Server and Storage Clusters

The 40 Gigabit Ethernet interconnect solutions with RoCE (RDMA over Converged Ethernet) support have been optimized to deliver the highest performance for compute- and storage-intensive applications. These clusters deliver a more than 80 percent application performance increase compared to 10GbE-based clusters. For storage access, they deliver 4X higher storage throughput, enabling high storage density and dramatic savings in CAPEX and OPEX. CAE (Computer-Aided Engineering) applications demonstrated a performance increase of more than 80 percent. ConnectX-3 PCI Express 3.0 40GbE NICs and SwitchX 40GbE switch systems are available.

RDMA on Ethernet

  • RoCE (RDMA over Converged Ethernet), promoted by Mellanox
  • iWARP, promoted by Intel
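
Both technologies expose the same verbs and RDMA connection manager (librdmacm) programming interface, so applications written against it can run over either transport. The sketch below shows a minimal client-side address and route resolution step, assuming librdmacm and an RDMA-capable NIC (RoCE or iWARP); the server address and port are placeholders, and a real client would also create a protection domain and queue pair before connecting.

/* Minimal RDMA CM sketch: resolve a server address and route.
 * The same calls work over RoCE and iWARP; the CM selects the
 * transport from the resolved device. Address/port are placeholders. */
#include <stdio.h>
#include <netdb.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct rdma_event_channel *ec = rdma_create_event_channel();
    struct rdma_cm_id *id;
    rdma_create_id(ec, &id, NULL, RDMA_PS_TCP);

    struct addrinfo *addr;
    getaddrinfo("192.0.2.10", "7471", NULL, &addr);   /* placeholder server */

    rdma_resolve_addr(id, NULL, addr->ai_addr, 2000); /* 2000 ms timeout */

    struct rdma_cm_event *event;
    while (rdma_get_cm_event(ec, &event) == 0) {
        enum rdma_cm_event_type type = event->event;
        rdma_ack_cm_event(event);

        if (type == RDMA_CM_EVENT_ADDR_RESOLVED) {
            rdma_resolve_route(id, 2000);
        } else if (type == RDMA_CM_EVENT_ROUTE_RESOLVED) {
            printf("route resolved; ready to create a QP and connect\n");
            break;
        } else {
            fprintf(stderr, "unexpected CM event: %d\n", (int)type);
            break;
        }
    }

    freeaddrinfo(addr);
    rdma_destroy_id(id);
    rdma_destroy_event_channel(ec);
    return 0;
}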
