Mix Magazine

This installment of The Bitstream column appeared in the September 2000 issue of Mix Magazine.

The Bitstream

This column discusses the emerging InfiniBand standard…

Infini-what?

Ah, September. A time when latent color emerges from behind the green of summer. When the latest retro tube gear struts its stuff at AES, and a young man’s fancy turns to improving local bus architecture. InfiniBand, to be precise. For those of you who rely on computers, speed is of the essence. Unfortunately, technology marches forward while the gear you’ve got doesn’t magically morph into the latest model. It just sits there and feels slower every day. So, when it’s time to upgrade, choose wisely, my son. Today’s CPUs are hampered by local bus limitations, and even with PCI-X around the corner, compute- and I/O-intensive processes, like real-time encoding, serving and streaming, require as much speed as possible. InfiniBand equals speed. Lots of it.

At the recent Applied Computing Conference and Expo in sunny Santa Clara, a stone’s throw away from one of my favorite roller coasters and IMAX theaters, I met with the InfiniBand Trade Association (ITA) to discuss their new religion. Comprising seven founding members, “the association is dedicated to developing a new common I/O specification to deliver a channel-based, switched fabric technology that the entire industry can adopt.” The ITA’s top-tier steering committee has signed up more than 140 implementer companies, all eager to move local bus science forward.

InfiniBand is a switch-fabric architecture, sort of like Fibre Channel. A switch-fabric architecture decouples I/O operations from memory by using channel-based, point-to-point connections rather than the shared-bus, load-and-store configuration of older technologies. The predicted benefits of the InfiniBand specification are improved performance and ease of use, lower latency, built-in security and better quality of service, or QoS. Sounds like a plan.

As clock rates spiral ever upward, serial communication eliminates the skew and other difficulties of getting a bunch of parallel signals to all work harmoniously. Designed specifically to address interserver I/O, InfiniBand’s physical implementation is a two-pair serial connection rather than PCI’s parallel approach. Basic “X1” links (see Glossary) operate at 2.5 Gigabits per second, and can be aggregated into larger pipes or “link widths” of X4 or X12, equivalent to 10 or 30 Gbps. This results in usable, bi-directional bandwidth of 0.5, 2 and 6 GB/second respectively. Feel the power!
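For the curious, here’s a quick back-of-the-envelope sketch, in Python, showing how those figures hang together. It assumes the customary 8b/10b line coding, where every 8 bits of data ride on 10 signal bits, so a 2.5 Gbps lane carries 2 Gbps of payload; the constants and function name are mine, purely for illustration.

    # Rough InfiniBand link bandwidth math, assuming 8b/10b line coding.
    SIGNAL_RATE_GBPS = 2.5        # raw signaling rate of one lane, gigabits per second
    ENCODING_EFFICIENCY = 8 / 10  # 8b/10b coding: 8 data bits per 10 signal bits

    def usable_gbytes_per_sec(lanes: int, bidirectional: bool = True) -> float:
        """Usable bandwidth, in gigabytes per second, for a link of the given width."""
        data_gbps = lanes * SIGNAL_RATE_GBPS * ENCODING_EFFICIENCY
        directions = 2 if bidirectional else 1
        return data_gbps * directions / 8   # 8 bits to the byte

    for lanes in (1, 4, 12):
        print(f"{lanes:2d}-lane link: {usable_gbytes_per_sec(lanes):.1f} GB/s bi-directional")
    # Prints 0.5, 2.0 and 6.0 GB/s, matching the numbers above.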

Once you have a handful of servers and some peripheral devices, you’d want to hook ’em up. This is accomplished through a switch. InfiniBand switch architecture has been designed to accommodate more than 48,000 simultaneous connections, allowing complex meshes of connections to be made. And since IPv6 addressing is used, there’s no lack of valid address space. These multiple, autonomous point-to-point links can span 17 meters via copper and over 100 meters on glass. Any link can be assigned to one of 16 “virtual lanes” or priority levels to fulfill QoS requirements. And redundant parallel links can be established to ease availability worries. Multiple switches and subnets can be interconnected via routers, carrying both IP and InfiniBand traffic hither and yon.

When first rolled out, InfiniBand will ratchet up the speed at which servers communicate, making high-performance cluster configurations more practical. In the long term, this should make multiprocessors with more than two CPUs less attractive. Once mature, we may also see InfiniBand appearing in consumer products to replace local buses of lesser prowess. Think commodity Intel boxes loaded with FreeBSD doing all your heavy lifting for the Web (or native-hosted audio editor applications - OM). High performance, combined with ease of installation, is tough to beat. There’s a good deal more work to be done, but IB-equipped über-servers should appear next year.

Glossary of Terms

Networking continues to weasel its way into every facet of business, including audio. So get hip to IP and IB networking jargon…

InfiniBand Trade Association

Microsoft, Intel, Hewlett-Packard, Sun Microsystems, IBM, Dell and Compaq. Put that many manufacturers in a room and you’re bound to come up with something interesting. Here’s a glossary of InfiniBand (IB) terms that have been slightly redefined by the ITA. Just when you thought it was safe to put down the network primer, along came:

Fabric

A fabric is a collection of host or target channel adapters, links and switches that are cross-connected in a many-to-many scheme rather than individual, isolated point-to-point or loop topologies. In the world of Fibre Channel networks, fabrics describe the physical interconnection of multiple devices interwoven or connected via hubs, switches or HBAs (host bus adapters).

HCA

Host Channel Adapter, the IB component that connects a processor to other IB devices. An HCA is really a bridge and must communicate with both other HCAs and TCAs. HCAs hook up, somehow, to your PC. Just how isn’t yet established, though I expect they’ll be integrated onto the motherboard, presenting an external physical connector for linking. It should allow I/O subsystems, like Ethernet, Fibre Channel, SCSI and interprocessor communication, to converse through the InfiniBand switch fabric, without complex hardware or software interfaces.

Host

A host can be thought of as a server. By the time IB is implemented, servers will mostly be 1U rackmounted boxes rather than the floor-standing tower configurations that are now common.

Link

A link is a dual simplex (simultaneous bi-directional) transmission path between a pair of network elements such as nodes (HCAs or TCAs) or switches. Link hardware is spec’d as dual simplex, which means that send and receive wires each have their own grounds and transmit data unidirectionally and independently. The more common simultaneous bi-directional method is “full duplex.” In full duplex hardware, both paths share a ground wire. Telephones are full duplex, while two-way radios (“walkie talkies”) are wireless half-duplex. Both Ethernet and PCI are half-duplex, with one “talker” at a time.

Packet

A packet is a unit of data encapsulated by a physical network protocol header and/or trailer. In general, the header provides control and routing information for directing the packet through the fabric, while the trailer contains data for ensuring packets are not delivered with corrupted contents. Other “packetized” transport mechanisms include IP, the Internet Protocol, and the new VXA tape format.
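To make the header/payload/trailer idea concrete, here’s a purely schematic model in Python. It is not the actual InfiniBand packet layout (the spec defines that in far more detail), just an illustration of routing data up front and an integrity check at the tail; the field names are made up.

    # Schematic packet: header for routing, payload, trailer for integrity checking.
    # An illustration only, not the real InfiniBand wire format.
    import zlib
    from dataclasses import dataclass

    @dataclass
    class Packet:
        dest_addr: int   # header: where the fabric should deliver this packet
        src_addr: int    # header: who sent it
        payload: bytes   # the data being carried
        crc: int = 0     # trailer: checksum used to detect corruption in transit

        def _covered_bytes(self) -> bytes:
            return self.dest_addr.to_bytes(8, "big") + self.src_addr.to_bytes(8, "big") + self.payload

        def seal(self) -> "Packet":
            """Sender computes the trailer CRC over the header fields and payload."""
            self.crc = zlib.crc32(self._covered_bytes())
            return self

        def is_intact(self) -> bool:
            """Receiver recomputes the CRC and compares it against the trailer."""
            return zlib.crc32(self._covered_bytes()) == self.crc

    pkt = Packet(dest_addr=0x2A, src_addr=0x01, payload=b"a frame of audio").seal()
    assert pkt.is_intact()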

Router

A router connects multiple links and “routes” or directs packets from one link to one or more other links. The forwarding mechanism “looks” within each packet for address data. A router specializes in exchanging packets between subnets. In a similar vein, a TCP/IP router is a basic Layer 3 or Network Layer device that provides media-independent, dynamic packet forwarding.

Subnet

A subnet is a set of host or target channel adapters, links and switches that are interconnected and agree on a common set of device addresses. In the TCP/IP world, subnets encompass all devices whose IP addresses have the same prefix. For example, the IP subnet for my intranet is 192.168.0, and each device on the net has a unique address octet appended onto the end of the subnet address.
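For the TCP/IP side of that example, the same prefix idea can be checked with Python’s standard ipaddress module; the specific host addresses below are made up for illustration.

    # Devices belong to the same subnet when their addresses share the network prefix.
    import ipaddress

    subnet = ipaddress.ip_network("192.168.0.0/24")   # the 192.168.0 prefix from the text

    for host in ("192.168.0.12", "192.168.0.200", "192.168.1.5"):
        addr = ipaddress.ip_address(host)
        print(f"{host} on {subnet}? {addr in subnet}")
    # The first two share the 192.168.0 prefix; 192.168.1.5 sits on a different subnet.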

Switch

A switch connects multiple links together and forwards packets from one link to one or more of the other links. The forwarding mechanism “looks” within each packet for address data. A switch specializes in exchanging packets within a subnet. In the TCP/IP world of Ethernet, a switch is typically a Layer 2 or Data Link Layer device that provides filtering and forwarding of packets. Layer 3 switches are also manufactured, providing routing via hardware at “wire speeds.”

Target

A target can be thought of as a device such as a disk array or network adapter.

TCA

A Target Channel Adapter, the IB component that connects an input/output device to other IB devices. TCAs only require support for capabilities appropriate to the particular input/output device. TCAs live inside of (on a backplane) or are attached to a device, such as a solid state memory cache, or device group, such as a tape library.

Bio

Oliver Masciarotte is a tech dweeb and consultant on content creation infrastructure.