TECH Keynote West: Nat’l LambdaRail To Go Above and Beyond


Kicking off the technical portion of the SC2004 program on Tuesday, November 9, was Tom West, President and CEO of National LambdaRail, Inc., a national effort comprising members and associates from across the country focused on implementing and operating a national network infrastructure to serve the needs of the advanced research community. HPCwire caught up with Tom to discuss what will be happening with NLR in 2005.

HPCwire: Please describe National LambdaRail and its initiatives for our readers.

Tom West: National LambdaRail is deploying the first national-scale, fiber optic network infrastructure that is owned by the U.S. research community.

There are several aspects of what we’re doing that distinguish NLR from other networking efforts. First, the optical dense wave division multiplexing technology we’re using allows us to deploy multiple, distinct networks over all or particular segments of the national fiber optic footprint. This means we can accommodate many different kinds of networks, including both production networks–that is, networks used to support projects–and experimental networks–networks used for research on networking itself.

Since these networks are operationally separate, they can provide different levels of reliability–or breakability–as needed. Initially, we’re deploying four separate national network services on four separate wavelengths of light, or lambdas. We’re also adding wavelengths on particular segments for particular projects.

Second, because the research community–through the members of NLR–owns the infrastructure, we have direct control over what we do with it. We have never been in this position before at a national scale, so we’re able to do things in ways we couldn’t before. For example, we can respond to requests for new services and capabilities much more flexibly.

Finally, NLR is uniquely dedicated to network research. In fact, in our bylaws, we are committed to providing at least half of the capacity on the infrastructure for network research. This capacity is critical to the kind of work that, we believe, is fundamental to taking the next big steps in networking development. And it is becoming increasingly clear that we need to take those steps to meet the requirements of cutting-edge science.

HPCwire: Last year, NLR announced that it successfully lit the initial segment on its national footprint between Chicago and Pittsburgh – connecting the Pittsburgh Supercomputing Center (PSC) to the Extensible Terascale Facility (ETF), the backplane network for the National Science Foundation’s TeraGrid project, through the StarLight facility in Chicago. Can you update our readers on what’s happened since then? And with the conference back in Pittsburgh, is that connection helping to bring NLR’s initial steps to reality?

TW: The link between Chicago and Pittsburgh for the ETF is a perfect example of how the NLR infrastructure is able to respond to the requirements of specific projects. As you mention, it was the first production use of the NLR infrastructure, and it continues today.

Since then, and even before we completed the first phase of our deployment last month, we’ve received an incredible amount of additional interest. For example, last week we announced the CAVEWave that the Electronic Visualization Laboratory at the University of Illinois at Chicago is deploying for the NSF-supported OptIPuter project. This is a dedicated 10 gigabit per second wavelength on the NLR infrastructure that the OptIPuter project is using to link facilities in Chicago and California.

Right here in Pittsburgh at SC2004 this week, eight demonstrations on the exhibit floor are taking advantage of wavelengths on the NLR infrastructure. We’re also working with a number of other projects, so stay tuned!

HPCwire: NLR recently announced that PSC and the University of Pittsburgh have joined the NLR consortium. How has the involvement of these two organizations affected your progress?

TW: We’re extremely pleased that PSC and the University of Pittsburgh have joined NLR. There’s obviously a significant history of networking innovation here in Pittsburgh, so it’s good to have those institutions on board. It’s important to note that the commitment required to become an NLR member is significant–$5 million–so this is a fairly serious decision on their part.

Of course, NLR is also fortunate to count companies like Cisco Systems as active participants. Their involvement really has been critical, especially as we have deployed the infrastructure, but also in helping to form NLR as it came together over the last few years.

With PSC and the University of Pittsburgh joining, the total financial commitment by members and corporate participants now stands at over $100 million. Members are very active, and are providing ongoing support for getting the work of NLR done.

HPCwire: Last year, your schedule was listed as: the Seattle to Portland, Ore., path, scheduled for completion by mid-January 2004, and Portland to Sunnyvale, Calif., scheduled to be ready by mid-April 2004. Other segments on the national footprint included Pittsburgh to Washington D.C., mid-March 2004; Washington D.C. to Atlanta, mid-April 2004; Denver to Seattle, early June 2004; Atlanta to Jacksonville, Fla., mid-July 2004; and Chicago to Denver, mid-July 2004. Implementation of Atlanta to Dallas, Dallas to San Diego, and Washington D.C. to New York City was scheduled for July to December 2004. I know the Washington-Atlanta connection was launched in May, but are the other segments still on schedule? What, if anything, is hindering progress?

TW: What you’ve described is what we call ‘Phase 1’, and we completed it ahead of schedule and under budget–a good start. In the next six months, we expect to have the complete network infrastructure–more than 10,000 miles of it–operational. We expect to continue executing, and schedule-wise, we’re in better-than-expected shape.

We haven’t really run into any serious speed bumps in terms of deployment plans. I think that’s a testament to the way NLR members are working together.

We are running into a lot of interest for the capabilities NLR can provide; we’re talking with a lot of folks that would like to take advantage of the infrastructure.

HPCwire: Where do you position NLR and its plans within 2005?

TW: First, we’ll complete the build-out of the national footprint. That will be a very significant milestone. We’ll continue to add to the number of projects the NLR infrastructure supports, and we’ll also complete the deployment of the IP and Ethernet services that are planned for the NLR infrastructure.

Second, and this is very significant considering NLR’s commitment to network research, we expect to begin supporting networking research projects. This, actually, might start before next year, but it’s something that we expect will gain serious momentum in 2005.

HPCwire: How will all of your advancements affect the HPC community? How will they affect the country as a whole? What specific purposes will the network be used for?

TW: The way that PSC has been able to leverage the NLR infrastructure for its ETF participation, and the fact that the OptIPuter project has been able to take advantage of the NLR infrastructure, demonstrate the potential value NLR provides for the HPC community.

As Larry Smarr and Tom DeFanti, the folks working on the OptIPuter, have said, the capabilities NLR offers allow them to implement the ‘meta-computer’ vision very cost-effectively. Tom DeFanti has said that the CAVEWave costs less than deploying 32-node clusters at each site would. So one of the things NLR offers is cost-effective, very high-capacity networking among distributed sites. I think capabilities like those offered by NLR are going to play a key role in realizing a high-performance Grid computing infrastructure on a national scale.

We’re in the very early stages of NLR, but I think we’ve already demonstrated we can respond to the requirements of the HPC community, as well as to those of a variety of scientific disciplines. The research community is facing a fundamental challenge in networking. Even with the best of our current technology, there’s a looming collision between the needs of the leading edge applications and what the networks can provide, especially as the applications are deployed more widely.

Not only can the NLR infrastructure be leveraged to meet the immediate requirements of specific projects, but because of its support of network research it will also play a role in the larger picture of shaping the future of the networking we all use. The HPC community, especially the academic HPC community, played a crucial part in this kind of development in the past, and I expect it will do so again.

HPCwire: I’m afraid we’re out of time, Tom! But many thanks for answering in such detail. The HPC community will be watching these developments closely, I’m sure.

Catch Tom’s keynote presentation, “NLR: Providing the Nationwide Network Infrastructure for Network and ‘Big Science’ Research” on November 9th at SC2004.