Let the HECN Team at NASA Goddard Space Flight Center Explain
What could you do with a fast Internet connection? I’ve come to view that as a dangerous question when it involves the High-Performance Computing (HPC) world and specifically the Supercomputing Conference. In a previous post, I mentioned the SC17 conference is projected to have twenty-five 100G connections to the public Internet. All that bandwidth will be provided to exhibitors & attendees through SCinet, the high-performance network built each year for the conference. I also discussed the hardware Extreme Networks is providing this year: an SLX 9850 router and a ridiculously large quantity of 100G optics to use with it. For one week every year, SCinet could very well be the only place on earth where you can stream Netflix in 4K without buffering (only slightly joking here).
Seriously Though…Why All the Bandwidth?
Part of SCinet’s mission is supporting the SC17 Network Research Exhibition (NRE) demonstrations. These demos highlight new HPC concepts that are only possible over high-bandwidth connections and take place in exhibitor booths throughout the expo floor. Many of these demonstrations require multiple 100G links from SCinet.
What’s Possible with 100G?
A good example of what’s possible with SCinet’s network comes from the High-End Computer Networking (HECN) team at NASA Goddard Space Flight Center. HECN’s primary mission is building & supporting Goddard’s Science and Engineering Network, and the team also conducts network R&D and evaluations of new technologies.
One of HECN’s long-running R&D efforts is testing long-distance data transfer over high-speed networks. This is a critical use case today because of the sheer volume of data high-performance computing applications generate. It’s not unusual for HPC data (for example, the output of a climate simulation) to be copied to disk and shipped overnight to a user, because downloading the same data over a typical connection could take days or weeks.
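To see why shipping disks overnight can beat the network, a little back-of-envelope arithmetic helps. The dataset size and link rates below are illustrative assumptions, not figures from the HECN team:

```python
# Rough transfer times for a large HPC dataset (ignores protocol overhead).
# The 500 TB dataset size and link rates are illustrative assumptions.

def transfer_days(size_tb: float, rate_gbps: float) -> float:
    """Days needed to move size_tb terabytes at rate_gbps gigabits per second."""
    bits = size_tb * 1e12 * 8            # terabytes -> bits
    seconds = bits / (rate_gbps * 1e9)   # bits / (bits per second)
    return seconds / 86400               # seconds -> days

for rate in (1, 10, 100):
    print(f"500 TB at {rate:>3} Gbps: {transfer_days(500, rate):5.1f} days")
```

At 1 Gbps a 500 TB dataset takes over six weeks to move; at 100 Gbps the same transfer fits inside half a day, which is why multi-100G links change what researchers can do with their data.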
Each year at the Supercomputing Conference, HECN uses SCinet’s bandwidth, along with the physical distance between the conference venue and the team’s Greenbelt, MD data center, to test & refine disk-to-disk file transfers over commodity Ethernet networks. In 2016, the team achieved 178 Gbps file transfers over two 100G Ethernet connections.
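Distance is a key part of this test because a long, fast link needs a huge amount of data "in flight" to stay full: the bandwidth-delay product. A quick sketch of that calculation, using an assumed coast-to-coast round-trip time (the RTT below is illustrative, not a measured HECN figure):

```python
# The bandwidth-delay product (BDP) is how much data a sender must keep
# in flight (and buffer) to fill a link of a given rate over a given RTT.
# The 60 ms RTT is an assumed coast-to-coast round trip, for illustration.

def bdp_megabytes(rate_gbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product in megabytes."""
    bits_in_flight = rate_gbps * 1e9 * (rtt_ms / 1e3)  # rate * delay
    return bits_in_flight / 8 / 1e6                    # bits -> megabytes

# A single 100G flow over a 60 ms round trip needs roughly 750 MB of
# window/buffer -- far beyond default TCP settings on most servers.
print(f"{bdp_megabytes(100, 60):.0f} MB")
```

Keeping buffers that large filled, end to end from disk to disk, is exactly the kind of tuning problem these Supercomputing tests are designed to exercise.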
What’s impressive about this feat is that it was accomplished with servers costing only $20,000 on each end. Moving data this fast is easy when you can “throw money at the problem”; it’s much harder when your budget is limited, a constraint academic research organizations face constantly.
Extreme Networks (previously Brocade) has supported the HECN team’s testing at Supercomputing for the past seven years by providing routers & switches for their effort. In past years, we supplied our venerable MLXe router to deliver 100G connections in NASA’s booth, but for SC17 we’re upgrading to our new SLX 9240 switch. This 1U ToR switch supports 32 line-rate 100G ports and is part of Extreme’s new SLX platform.
For SC17, the HECN team will pair this switch with a new RDMA over IP architecture to try to break 200 Gbps disk-to-disk.
Want to Know More?
Learn more about the HECN team at Goddard Space Flight Center here or stop by their booth at SC17! Their demo will be in the Open Commons Consortium booth #1653.
Did you miss our other SC17 blogs? Read them now:
About the Author
Wilbur is a Senior Systems Engineer at Extreme Networks. He began his networking career in the United States Air Force in 2001. He joined Foundry Networks as a Federal Systems Engineer in 2007, and continued under Brocade. Today he continues to support Federal customers with focuses in Data Center Networks and Ground Systems Architectures for the DOD community.