The bandwidth challenge - How much is enough?

How much bandwidth is enough? This article examines advancements in bandwidth technology and how they will impact grids and clusters.

How much bandwidth is enough? Forget the desktop for now. Very little processing happens there anyway. The real power is in servers and clusters. With clusters becoming more mainstream, companies have significant options for processing and crunching raw data. Increased bandwidth will make technologies like clusters and grids even more attractive. Let's look at the latest advancements in bandwidth technology.

At the recent SuperComputing show in Pittsburgh, Force10's TeraScale E-Series processed the traffic from all the participants in the bandwidth challenge, sustaining 136.2 gigabits per second of throughput and effectively doubling last year's record. It did so with 44 line-rate 10 Gigabit Ethernet ports in a single chassis.
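As a back-of-the-envelope check on those numbers, 44 line-rate 10G ports give the chassis far more aggregate capacity than even that record-setting traffic consumed. The sketch below is simple arithmetic on the figures quoted above, not vendor data:

```python
# Back-of-the-envelope arithmetic on the figures quoted above.
PORTS = 44                 # line-rate 10G ports in the chassis
PORT_RATE_GBPS = 10.0      # gigabits per second per port
SUSTAINED_GBPS = 136.2     # throughput sustained during the challenge

aggregate_gbps = PORTS * PORT_RATE_GBPS          # 440 Gbps of raw capacity
utilization = SUSTAINED_GBPS / aggregate_gbps    # fraction actually used

print(f"Aggregate line rate: {aggregate_gbps:.0f} Gbps")
print(f"Sustained share of capacity: {utilization:.0%}")  # ~31%
```

In other words, the record run used less than a third of the chassis' raw port capacity, which hints at the headroom left for growth.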

The University of Tokyo sustained a single 7.2 gigabit-per-second data stream across a 32,000-kilometer network, earning it the land speed record. The stream traveled across North America, Europe and Asia, over a path supported by Force10 equipment in Amsterdam, Chicago and the conference hall in Pittsburgh. Sustained throughput awards went to the California Institute of Technology, the Stanford Linear Accelerator Center and the Fermi National Accelerator Laboratory for 101.2 gigabits per second of throughput, roughly four times last year's mark.
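To appreciate what sustaining 7.2 Gbps over 32,000 km entails, consider the bandwidth-delay product: the amount of data that must be "in flight" to keep such a long, fat pipe full. The sketch below is illustrative arithmetic based only on the distance and rate quoted above, assuming signals propagate through fiber at roughly two-thirds the speed of light:

```python
# Illustrative bandwidth-delay product for the Tokyo record path.
# Assumption: signal propagation in fiber at roughly 200,000 km/s (~2/3 c).
DISTANCE_KM = 32_000
FIBER_SPEED_KM_PER_S = 200_000
RATE_GBPS = 7.2

one_way_delay_s = DISTANCE_KM / FIBER_SPEED_KM_PER_S   # ~0.16 s
rtt_s = 2 * one_way_delay_s                            # ~0.32 s round trip

# Bits that must remain unacknowledged to keep the pipe full.
bdp_gigabits = RATE_GBPS * rtt_s
bdp_megabytes = bdp_gigabits * 1e9 / 8 / 1e6

print(f"Round-trip time: {rtt_s * 1000:.0f} ms")
print(f"Bandwidth-delay product: {bdp_megabytes:.0f} MB")  # ~288 MB in flight
```

Keeping on the order of 288 MB of data in flight on a single stream is what makes records like this as much a transport-tuning feat as a raw bandwidth feat.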

I had the pleasure of speaking with Force10 to find out where they believe this technology is of benefit and why their technology differs from others. First, Force10 had the foresight to build a switch fabric that will be capable of supporting 100G -- yes, that's 100! -- without a forklift upgrade to the chassis. Add to that the fact that the chassis supports zero-packet-loss failover, and it remains a solid choice for critical applications whether configured in a grid or not.

Another nice advantage of the chassis is that it has three processors. Unlike many other boxes that run every function on a single CPU (if you have ever watched a box's CPU top out, this will make perfect sense), the E-Series gives each function its own CPU: one handles routing, one handles switching and a third handles management. Moving management onto its own CPU effectively provides out-of-band management inside the box. If any one CPU fails, its code is moved to another for execution.
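That failover behavior can be pictured as functions being reassigned when their host CPU dies. The sketch below is purely illustrative; the names and logic are hypothetical and do not reflect Force10's actual implementation:

```python
# Hypothetical illustration of per-function CPUs with failover.
class ControlPlane:
    def __init__(self):
        # Each control-plane function starts on its own dedicated CPU.
        self.assignments = {"cpu0": "routing", "cpu1": "switching", "cpu2": "management"}

    def fail(self, cpu):
        """Move the failed CPU's function to a surviving CPU."""
        function = self.assignments.pop(cpu)
        survivor = next(iter(self.assignments))       # pick any healthy CPU
        self.assignments[survivor] += f"+{function}"  # it now runs both
        return survivor

plane = ControlPlane()
plane.fail("cpu2")         # management CPU dies...
print(plane.assignments)   # ...and another CPU picks up management as well
```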

Perhaps the strongest statement for this technology is one billion packets per second. That is not a guess or an extrapolated number; it is real. When I asked the folks at Force10 where they envision this technology moving, they pointed to the rising ability to provide grid services to corporations. If you check out the website www.top500.org, you will see this trend borne out: many more companies and enterprises are entering the cluster and grid worlds. The ability to move, back up and crunch data is on the rise. Resources wasted on inaccurate results and retransmissions cannot be recovered.
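For a sense of scale, one billion packets per second translates into enormous bandwidth even at the minimum Ethernet frame size. The arithmetic below is a generic Ethernet calculation, not a vendor figure; it also counts the preamble and inter-frame gap each frame carries on the wire:

```python
# Generic Ethernet arithmetic: what one billion packets/second implies.
PPS = 1_000_000_000
FRAME_BYTES = 64       # minimum Ethernet frame
OVERHEAD_BYTES = 20    # preamble (8 bytes) + inter-frame gap (12 bytes)

frame_gbps = PPS * FRAME_BYTES * 8 / 1e9                    # 512 Gbps of frames
wire_gbps = PPS * (FRAME_BYTES + OVERHEAD_BYTES) * 8 / 1e9  # 672 Gbps on the wire

print(f"Frame data: {frame_gbps:.0f} Gbps")
print(f"On the wire: {wire_gbps:.0f} Gbps")
```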

Areas adopting this technology, or planning to in the immediate future, include of course the clusters and grids already mentioned. Portals and search engines that process millions and billions of packets are certainly next. With Google's recent announcement that it will start digitizing libraries, it will need the throughput. That task would probably not have been possible, or at least not practical, even a short time ago.

Changing economics and higher-density applications cost companies less money, both in initial capital outlay and in operating expenses down the road. Watch for the bandwidth challenge next year, and for newer applications that will eventually eat up the available bandwidth and cry for more. What's the old adage -- the more you give them, the more they want?


Carrie Higbie has been involved in the computing and networking industries for nearly 20 years. She has worked with manufacturing firms, medical institutions, casinos, healthcare providers, cable and wireless providers and a wide variety of other industries in networking design and implementation, project management and software development, for privately held consulting firms and, most recently, Network and Software Solutions.

Carrie currently works with The Siemon Company, where her responsibilities include acting as a liaison to electronics manufacturers to ensure harmony between active electronics and existing and future cabling infrastructures. She participates in the IEEE, TIA and various consortiums for standards acceptance and works to further educate the end-user community on the importance of a quality infrastructure. Carrie holds an RCDD/LAN Specialist designation from BICSI, an MCNE from Novell and several other certifications.


This was first published in January 2005
