[Beowulf] Infiniband adapter aims for space, power

Eugen Leitl eugen at leitl.org
Wed Sep 22 04:27:55 PDT 2004


http://www.internetnews.com/infra/article.php/3404111

September 7, 2004
InfiniBand Adapter Aims For Space, Power
By Clint Boulton

Armed with new technology that pares cost, power draw, and board space in
server clusters, Mellanox Technologies has launched the first single-chip
InfiniBand Host Channel Adapters (HCAs) running at 10 gigabits per second.

Introduced Tuesday at the Intel Developer Forum in San Francisco, the
InfiniHost III Ex HCA devices are geared to work with InfiniBand switches
from customers such as Topspin, Infinicon and Voltaire.

These companies add the adapters to their switches and sell them to systems
vendors such as IBM, HP and Sun Microsystems, who use the speedy
interconnects to bolster the performance of their server blades for
communications, storage and clustering.

Kevin Deierling, vice president of product marketing at Santa Clara,
Calif.-based Mellanox, said the devices use the company's "MemFree"
technology, which reduces the cost, power, and board space required for
10Gb/s nodes in data center and technical computing server clusters.

There are key differences between Mellanox's traditional HCAs and those built
with MemFree, Deierling said.

Regular HCAs provide 10Gb/s server-to-server and server-to-storage I/O
interconnect and require up to 256 megabytes of DDR memory, which is
controlled by the HCA device. The memory consists of several chips soldered
directly onto adapter cards and motherboards, or a pluggable Dual In-Line
Memory Module (DIMM).

This local HCA memory is used to store information about the connections each
node has with the rest of the nodes in the cluster. Information on hundreds
of thousands or even millions of connections may need to be stored in local
memory, and it must be closely coupled to the HCA so it can be accessed fast
enough to maintain performance.
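
To make the scale concrete, here is a back-of-the-envelope sketch in C. The
256-byte per-connection record is an illustrative assumption (real queue-pair
context layouts are vendor-specific), chosen to show how "millions of
connections" approach the 256 MB figure above:

  #include <stdio.h>

  /* Hypothetical per-connection (queue pair) state: sequence numbers,
   * addresses, keys, etc. The 256-byte size is an assumption for
   * illustration, not a documented Mellanox figure. */
  struct qp_context {
      unsigned char state[256];
  };

  int main(void)
  {
      unsigned long connections = 1000000UL;  /* "millions of connections" */
      unsigned long bytes = connections * sizeof(struct qp_context);
      printf("%lu connections x %lu B = %lu MB of HCA-local memory\n",
             connections, (unsigned long)sizeof(struct qp_context),
             bytes / (1024UL * 1024UL));
      /* prints ~244 MB -- in line with "up to 256 megabytes of DDR" */
      return 0;
  }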

"You have to add additional DRAM, and this is typical of network adapters,"
including fibre channel (define) adapters said Deierling. "InfiniBand enables
us to eliminate that memory, as well as timing and power components that are
the support infrastructure that normally goes on to the board."

However, Deierling said InfiniHost III Ex HCA devices with MemFree remove the
requirement for local memory on both PCI Express adapter cards and Landed on
Motherboard (LOM) designs, in which the chips sit directly on the server
motherboard rather than on an add-in card, making them ideal for server blade
architectures.
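
As a rough illustration of the idea described above, the sketch below models
connection context living in ordinary host memory, with the adapter pulling
entries into a small on-chip cache on demand (a memcpy stands in for a PCI
Express DMA read). All names, sizes, and the cache policy are assumptions for
illustration, not Mellanox's actual design:

  #include <stdint.h>
  #include <string.h>

  #define CACHE_SLOTS 64          /* assumed on-chip cache size */
  #define MAX_CONNECTIONS 1000000 /* "millions of connections" */

  struct qp_context { uint8_t state[256]; }; /* hypothetical layout */

  /* With MemFree, the full context table lives in host RAM ... */
  static struct qp_context host_context_table[MAX_CONNECTIONS];

  /* ... and the adapter keeps only a small direct-mapped cache on chip. */
  static struct qp_context cache[CACHE_SLOTS];
  static uint32_t cache_tag[CACHE_SLOTS];
  static int      cache_valid[CACHE_SLOTS];

  /* Fetch a connection's context: serve it from the on-chip cache when
   * possible, otherwise read it in from host memory (the PCI Express DMA
   * is modeled here as a memcpy). */
  struct qp_context *lookup_context(uint32_t qp_num)
  {
      uint32_t slot = qp_num % CACHE_SLOTS;
      if (!cache_valid[slot] || cache_tag[slot] != qp_num) {
          memcpy(&cache[slot], &host_context_table[qp_num],
                 sizeof(cache[slot]));
          cache_tag[slot] = qp_num;
          cache_valid[slot] = 1;
      }
      return &cache[slot];
  }

The trade is per-connection DRAM on the adapter for occasional fetches across
PCI Express, which is what lets the memory's timing and power components come
off the board.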

The advantages? The executive said the HCAs would cost anywhere from 10 to 35
percent less than traditional HCAs, consume 20 percent less power, and take
up 40 percent less board space.

The new HCA, which Deierling said is bolstered by advances such as 64-bit
processing, DDR2 memory and PCI Express, will help Mellanox provide OEM
customers with 10Gb/s server adapter products for under $100 by the first
quarter of 2005, with lower price points soon thereafter.

The 10Gb/s HCA drew praise from analyst Jag Bolaria of The Linley Group, who
said the adapter should increase the adoption rate of 10Gb/s server and
storage clustering.

"The company's MemFree technology reduces system cost, power, and space
requirements," Bolaria said. "These factors combined with Mellanox's
performance roadmap should accelerate adoption of InfiniBand technology on
servers and storage platforms."

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a>
______________________________________________________________
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net