Corporations might generate the profit in IT, but education and research institutions lead the way in technical innovation. That's certainly the case in networking, where Internet2, a shared 100 Gigabit Ethernet optical global network, connects hundreds of research institutions with the kind of high-performance networking that many financial institutions strive for.
The 450 engineers attending the conference will grapple with cutting-edge technologies, including software-defined networking (SDN), networking for big data, and e-medicine. The goal is to help research institutions better collaborate across regions using technology. Rob Vietzke, vice president of networking at Washington, D.C.-based Internet2, shared a little more about what technologies engineers will be discussing this week.
Rob Vietzke: There is a history in the research and education community around creating platforms and space for innovation in networking. It's easier to name the technologies that didn't evolve from innovation in the campus environment than it is to name the ones that did. You think about the early era of networking when we had open protocols and an era of investment in platforms for innovation -- where you introduced Ethernet at broad scale and things like search and social networking. You think about the early days of the protocol stack when you had thousands of people collaborating on what TCP should really be. That was an innovative, exciting time for networking. I think what you see in our efforts in WAN SDN is a belief and a hope that concepts of software-defined control in networking … can reopen that same area for massive innovation. For us this isn't just about efficiency in the data center or managing a small data center; it's about the opportunity to make the network controllable again through open software standards, which we haven't had in 30 years as a community.
Can that kind of innovation only happen in a space where there are researchers and academics as opposed to commercial presence?
Vietzke: The best innovations have happened at times when we had capability sets exposed to a large community in advance of what was commercially viable. Ethernet … is probably a classic one. When the rest of the world was doing dial-up, we had hundreds of thousands of Ethernet drops with native IP addressing. You saw application innovation like never before with things like file sharing and video over IP, search and social networking. This combination of ubiquitous large-scale deployment ahead of commercial viability, along with industry partnership, is where some of the most interesting things have happened. This is the kind of environment that is going to be critical to big changes in the future of networking.
Big data plays a big role at TIP 2013. Why is this so important to the networking community at this point?
Vietzke: Big data is important to the stakeholders in the Internet2 community. [Consider] genetics researchers, who are no longer just looking at genomes and genes. They are researching at their local hospitals, but also collaborating with others around the world who are working on the same thing. And they want to do so in real time. The same is true for [those who are] modeling financial data out of the federal agencies in collaboration with economists on campuses to make sure that the next crash doesn't happen; to study how liquidity works in our financial markets and how to predict a liquidity lock-up in the future. All of those are important big data experiments where the best minds in the world are on our campuses working with the data that is coming out of large national science projects or government agencies or the medical industry. It's our belief that creating the network technologies to enable them to do their work will foster the next set of innovations.
What does it mean to create the network that supports that kind of big data? Does it require huge amounts of capacity and throughput, or is it more complicated than that?
Vietzke: One of the things we hope happens in our large-scale SDN environment is a much tighter merging of the network with the application. That's one of the promises of software-defined networking: that over time the network folds into the virtual compute environment. And we think we have to do a better job not just at providing big pipes, but at making sure that data-intensive science works. It's making sure that you not only have a large pipe, but that you have a network that puts the bandwidth where it needs to be, or overprovisions [when necessary]. It's making sure [the data] gets all the way through the security regime to the researcher so they can do their work.
Why will e-medicine or e-health play a central role at this year's Techs in Paradise?
Vietzke: Some of the big gene research is happening in Asia -- there's a lot of collaboration between the U.S. genome community and facilities in China, for instance. We did a project with the Beijing Genome Institute where they had been transferring a gene sequence to the U.S., and transfers between BGI and the U.S. collaborators were taking 26 hours. With some work on networking, we got that down to 30 seconds. When you think about a researcher who's got a stage-four cancer patient, and they want to sequence the patient's genes and look for similarities with hundreds of other patients, 30 seconds vs. 26 hours really matters. You start to see an immediate impact from the data that is being collected.