Mix Magazine

This installment of The Bitstream column appeared in the January 2003 issue of Mix Magazine.

The Bitstream

This column discusses TCP Offload Engines, hardware accelerators for TCP…

Toe To Toe

This month, I’m going to revisit a technology that I think will eventually replace Fibre Channel-based networks and save us all money in the bargain! Now, disk storage is something most of us need in this digital world, and networked storage is the way to go if you have more than one computer in your place. Imagine working on a project where you have to move files from one workstation to another. Rather than waiting for a file copy from one machine’s drive to another to finish or physically sneakernetting the drive, you can hang the drives themselves on your network. So, rather than working off direct–attached drives, you can make your hefty investment in disks available on the network, to your whole place, all without a huge cash outlay. Less time twiddling your thumbs, more time getting stuff done.

Cast your mind back to September 2001, when I last talked about iSCSI, the scheme that allows SCSI commands to travel via IP protocols. Sixteen months have passed, and vendors are beginning to provide board-level products that fill some of the gaps in the needed equipment roster. One specific item that most every installation requires is an HBA, or Host Bus Adapter. HBAs are hardware devices, usually PCI boards, that provide an interface between the local host bus and some communication standard. A good example would be a $30 network interface card (NIC) that you’d plug into your computer to add Ethernet ports. The reason Ethernet HBAs are so cheap these days is that they provide the minimum amount of hardware to get the job done. What it doesn’t say on the box is that there are absolutely no smarts on board to increase efficiency. Indeed, a server burdened with heavy IP traffic will find most all of its CPU cycles taken up by the task of processing those network packets. And that’s one of the fundamental problems of IP storage.
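
To put some rough numbers on that burden, here’s a quick back-of-the-envelope sketch in Python. The per-packet CPU cost is a made-up assumption for illustration (real figures depend on the OS, driver and NIC), but it shows why a dumb NIC can eat a whole processor at gigabit rates:

    # Rough, illustrative arithmetic only: the 10-microsecond per-packet cost
    # is an assumption, not a measurement.
    LINK_BITS_PER_SEC = 1_000_000_000   # Gigabit Ethernet line rate
    MTU_BYTES = 1500                    # typical full-size Ethernet payload

    packets_per_sec = LINK_BITS_PER_SEC / (MTU_BYTES * 8)
    print(f"~{packets_per_sec:,.0f} full-size packets per second")   # ~83,000

    cpu_sec_per_packet = 10e-6          # assumed interrupt/copy/TCP bookkeeping
    cpu_load = packets_per_sec * cpu_sec_per_packet
    print(f"~{cpu_load:.0%} of one CPU spent just shuffling packets")  # ~83%

Swap in your own per-packet cost and the conclusion barely changes: somebody has to do that bookkeeping, and with a plain NIC that somebody is your host CPU.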

You see, if one of your computers is busy digesting a flurry of network traffic, it can hardly be called upon to pay sufficient attention to your host-based application trying to record an overdub in the foreground. Remember I said you could configure your disks on different machines to appear on the network for everyone to use? Easy to do in either Win or Mac, but when you try to record to that network “volume” or disk, you may find the data throughput really sucks, with dropouts or worse as a result. This is especially true if you’re doing higher sample rate or multichannel work. Here’s the thing — most Ethernet hardware isn’t up to the task of doing more than out–of–real–time transfers, like file copying and web surfing, and that’s where TOEs come in.
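
To see why higher sample rates and channel counts push garden-variety Ethernet so hard, it helps to run the numbers. The Python sketch below uses example session sizes and a rough 50 percent “usable bandwidth” assumption of my own, not measured figures, to compare what a session needs against what the wire can realistically deliver:

    # Bandwidth a multichannel session needs, versus a rough guess at usable
    # Ethernet throughput. Session sizes and the 50% figure are assumptions.
    def audio_mbps(channels, sample_rate_hz, bits_per_sample):
        return channels * sample_rate_hz * bits_per_sample / 1e6

    sessions = {
        "24 ch, 48 kHz, 24-bit": audio_mbps(24, 48_000, 24),
        "48 ch, 96 kHz, 24-bit": audio_mbps(48, 96_000, 24),
    }
    links = {"100Base-T": 100 * 0.5, "1000Base-T": 1000 * 0.5}

    for name, need in sessions.items():
        for link, usable in links.items():
            verdict = "fine" if need < usable else "dropout territory"
            print(f"{name}: needs {need:5.1f} Mb/s over {link} -> {verdict}")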

TOEs, or TCP Offload Engines, are chip–level hardware solutions that address the problem of interpreting the TCP/IP stack in software. Whoa, a what stack? TCP/IP, the Transmission Control Protocol/Internet Protocol, is the language that computers use to communicate over the Internet. TCP is responsible for setting up and maintaining the end points of a network data transaction, while IP handles the task of routing the data and delivering it to its destination; on a local network, both usually travel over Ethernet. The Internet’s architects brewed up the complete scheme and decided that, rather than using a monolithic, all–inclusive approach to the complex task of communicating over a network, portions of the job would be given out to separate processes in a modular fashion. These processes are conceptualized as “layers” in a hierarchical “stack” that cooperatively get the job done.
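
As a loose illustration of that layering, here’s a toy Python sketch in which each layer simply wraps whatever the layer above hands it. The field layouts are simplified stand-ins, not the real header formats (though port 3260 really is the iSCSI port):

    # Toy encapsulation: each "layer" wraps the payload handed down from above.
    def tcp_segment(payload: bytes, src_port: int, dst_port: int) -> bytes:
        return f"TCP {src_port}->{dst_port} len={len(payload)}|".encode() + payload

    def ip_packet(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
        return f"IP {src_ip}->{dst_ip}|".encode() + segment

    def ethernet_frame(packet: bytes, src_mac: str, dst_mac: str) -> bytes:
        return f"ETH {src_mac}->{dst_mac}|".encode() + packet

    app_data = b"one chunk of an audio file"
    wire_bytes = ethernet_frame(
        ip_packet(
            tcp_segment(app_data, src_port=49152, dst_port=3260),  # 3260 = iSCSI
            src_ip="192.168.1.10", dst_ip="192.168.1.20"),
        src_mac="00:11:22:33:44:55", dst_mac="66:77:88:99:aa:bb")
    print(wire_bytes)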

Unfortunately, that job usually requires a good bit of heavy lifting on the part of the CPU. At the very least, data packet headers have to be read to glean the destination address. So, enterprising companies have baked the brains of a TCP software processor, that stack I mentioned earlier, into silicon, where it can sweat the gory details at “wire speed” [See Pedant In A Box below…] while the host’s CPU runs wild and free, so to speak.
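
For a feel of what that heavy lifting looks like, here’s a minimal Python sketch that plucks the destination address out of an IPv4 header with the standard struct module. The example header is hand-built (checksum left at zero) just to exercise the parser; a TOE does this sort of field extraction, plus checksums, reassembly and TCP state tracking, in silicon for every packet:

    import socket
    import struct

    def ipv4_destination(packet: bytes) -> str:
        # The fixed IPv4 header is 20 bytes; bytes 16..19 hold the destination.
        version_ihl, = struct.unpack_from("!B", packet, 0)
        assert version_ihl >> 4 == 4, "not an IPv4 packet"
        return socket.inet_ntoa(packet[16:20])

    # Hand-built example header: version/IHL, TOS, length, ID, flags/fragment,
    # TTL, protocol (6 = TCP), checksum (zeroed), source, destination.
    header = struct.pack("!BBHHHBBH4s4s",
                         0x45, 0, 20, 0, 0, 64, 6, 0,
                         socket.inet_aton("192.168.1.10"),
                         socket.inet_aton("192.168.1.20"))
    print(ipv4_destination(header))   # -> 192.168.1.20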

Earlier, I mentioned cost savings, and I’ve identified several areas where they show up. First, skilled TDs (technical dweebs) are in short supply, and many more TDs are fluent in TCP/IP than are knowledgeable about Fibre Channel, the de facto choice for networked storage. In addition, IP infrastructure, both hardware and services like metropolitan network connectivity, is inexpensive relative to FC, and IP networks can be scaled without interrupting the network. All these factors taken together translate into lower overall support costs.

Fibre Channel never will be cheap, but, if you’ve got the need and the bucks to feed that need, then FC slakes the thirst for high-performance, networked storage. On the other hand, Ethernet and IP are scalable, universal technologies. Ethernet is a commodity technology these days even at gigabit speeds. So, building a storage network with switched 1000Base–T and iSCSI is way cheaper than with Fibre Channel. (By the way, the no–nonsense performance of Gigabit Ethernet provides darn good throughput when viewed against the highly tailored architecture of Fibre Channel.) This doesn’t mean, however, that never the twain shall meet. In an early proof-of-performance demo, a server with an Alacritech Gigabit Ethernet HBA was connected to a Nishan IP Storage switch via a single Gigabit Ethernet link. The Nishan switch was connected, in turn, to a Hitachi Freedom storage system, an enterprise–class (that translates into “wicked big” – OMas) FC product. The Alacritech accelerator sustained iSCSI data rates of over 219 megabytes per second with less than 8 percent CPU utilization, while the Nishan switch provided wire–speed conversion from iSCSI to Fibre Channel for the storage system.

An important caveat is in order here: To many applications, different types of storage are not equivalent. This has a great deal to do with the way that developers implement their applications. If an application makes “low-level calls,” whereby the software communicates directly with hardware, an internal ATA drive for instance, then NAS and SAN become second-class citizens as far as that application is concerned. That method of programming was sometimes required in the Stone Age, when computers were slow. On the other hand, if an application communicates via appropriate abstractions provided by the operating system, then any storage supported by the OS should be equivalent. A modern, well-behaved DAW shouldn’t care what flavor of storage it’s using: DAS, NAS or SAN. This is especially true of host-based DAWs, since many hardware–based products haven’t quite caught up with the state of the art in storage or networking. The upshot is, the more modern an application, the more likely it is to work seamlessly with iSCSI storage.
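
In code, the difference between the two approaches looks roughly like this. It’s a Python sketch with made-up paths, not any particular DAW’s internals; the point is the shape of the call, not the details:

    # Well-behaved: go through the OS file abstraction. The same call works
    # whether /sessions lives on a local drive (DAS), a network share (NAS)
    # or an iSCSI/FC volume the OS has mounted (SAN). Paths are made up.
    def read_take_portable(path="/sessions/mix01/take07.wav"):
        with open(path, "rb") as f:
            return f.read()

    # Stone Age: talk to a specific raw device at a hard-coded offset. This
    # ties the application to one locally attached disk and leaves NAS and
    # SAN out in the cold.
    def read_take_lowlevel(device="/dev/hda", offset=4096 * 1024, length=65536):
        with open(device, "rb") as dev:
            dev.seek(offset)
            return dev.read(length)

The first function neither knows nor cares where the bits actually live; the second has married one specific local device.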

A quick digression is in order here. DAS, or Direct Attached Storage, is the garden variety we all know and love, hard-wired to a computer. The DAS label applies regardless of the attach method, whether it’s IDE/ATA, SCSI or FireWire. NAS, or Network Attached Storage, is storage hanging on a LAN, most always using Ethernet and TCP/IP, and it provides only file–level access. SANs (Storage Area Networks) most always use Fibre Channel protocols and provide block–level access, letting a read or write request address the individual logical “blocks” on a disk that make up part of a file. For more gory details, check the Bitstream for May 2000, where I first got into the subjects of SAN and NAS.
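
The practical difference between file-level and block-level access can be sketched the same way. This is illustrative Python with arbitrary block sizes and addresses, not a real NFS or iSCSI exchange:

    # File-level access (NAS): the request names a file and a byte range;
    # the file server owns the filesystem and finds the blocks for you.
    nas_request = {"op": "read", "file": "//server/audio/mix01.wav",
                   "offset": 1_048_576, "length": 65_536}

    # Block-level access (SAN, whether iSCSI or FC): the request names raw
    # disk blocks; the host's own filesystem decides which blocks belong to
    # which file.
    BLOCK_SIZE = 512                        # bytes per logical block (typical)
    first_block = 1_048_576 // BLOCK_SIZE   # arbitrary example address
    san_request = {"op": "read", "lun": 0,
                   "lba": first_block, "blocks": 65_536 // BLOCK_SIZE}

    print(nas_request)
    print(san_request)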

Late last year, SNIA, the Storage Networking Industry Association, submitted the iSCSI spec to the IETF, the Internet Engineering Task Force, which should freeze-dry it into an RFC, their version of a standard. Once the standard comes down, the vendors who are shipping product may have to adjust their firmware to accommodate any changes. The first company to wade into iSCSI waters, Alacritech, has been shipping a variety of TOE–equipped 100 and 1000Base-T HBAs and is still the leader. Alacritech was started in 1997 by industry visionary and groovy guy Larry Boucher, who serves as president and CEO. In a prior life, he was founder and CEO of Adaptec. Before that, he was director of design services at Shugart Associates, where he conceived the idea of the SCSI interface and authored its initial spec. Strangely enough, Adaptec has also been prepping product, and Intel has the PRO/1000 T, a transitional HBA that substitutes software running on a general-purpose processor for a hard-wired TOE. While the PRO/1000 allows skeptics to experiment on the cheap, it doesn’t have the wherewithal to do the job in a production environment.

So, will iSCSI be the savior of dweebkind? As if, but it will lead to a blurring of network and storage functions, all the while contributing to that seemingly inevitable decline in computing costs we’ve all come to expect.

Bio

This column was written while under the influence of Charlie Mingus’ exuberant Moanin’, which was recorded by the late, great Tom Dowd. His exceptional talent and amicable demeanor will be sorely missed.

Pedant In A Box

Wire Speed

This month’s jargon, “wire–speed,” means that a process or algorithm runs très rapidement, very fast. The implication is that it is, in fact, running in a hardware implementation, with the process designed into a chip–level device rather than left to some general-purpose CPU, DSP or FPLA.

A Central Processing Unit is the brains inside most computer–based devices. CPUs come in two basic varieties. CISC, or Complex Instruction Set Computers, are old-school, general-purpose devices that are broadly capable in a brute-force way, sort of like a Camaro. The other approach to CPU design, RISC or Reduced Instruction Set Computers, yields chips capable of only a streamlined set of tasks, but they perform those select tasks with great alacrity. This is akin to BMW’s Mini against that Chevy Camaro. Intel and AMD make CISC CPUs typically clocked close to 2 GHz, while Motorola, Sun and IBM make more efficient RISC CPUs clocked at around 1 GHz.

Digital Signal Processors take RISC one step further and limit their computational skills to only those needed to transform a digitized signal, whether audio, video, radar, whatever. Analog Devices’ SHARC, Texas Instruments’ TMS320 and Motorola’s 56k families are all DSPs.

Field Programmable Logic Arrays, and their brother FPGAs or Field Programmable Gate Arrays, are chips that are so general purpose they have no personality at all. FPLAs are chip–level collections of logic functions that can be electronically wired together in almost any combination, all in an instant. FPLAs are used to provide hardware versatility when a designer doesn’t want to commit to a specific chip or when some esoteric function cannot be realized with an off–the–shelf part. Xilinx and Altera are two FPLA vendors whose products show up all the time in digital audio gear.