How has the role of the storage track changed at Interop over the years?
I definitely see it as a more central piece now. When you expand beyond just network aspects to encompass the data center, storage plays a critical role. I got involved with the storage track about five years ago, and I've seen large growth in the number of people attending the sessions, from both a technology point of view and an infrastructure point of view. And the current program reflects that.

Virtualization and cloud networks have become central at Interop, and that affects storage. How have these technologies changed the direction of storage, and what will networking users be looking for in terms of answers?
As we look at virtualization, there really are three things that get virtualized. A lot of people tend to think about computing cycles -- server virtualization and the ability to have a generic utility where applications can move as availability comes into question. But you have virtualization in the network as well, where you're virtualizing the paths between servers and getting networking out to end users. And storage also becomes a platform or utility, if you will. Storage virtualization -- the ability of an application to have access to capacity wherever it exists -- plays an equal role in this whole move toward virtualization and in leveraging economies of scale for the computing environment.
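The storage-virtualization idea described here -- an application asking for capacity without knowing which physical device backs it -- can be sketched as a thin mapping layer. This is a minimal illustration; the class and device names are hypothetical, not any particular vendor's API:

```python
# Minimal sketch of a storage virtualization layer: logical volumes are
# carved out of whatever physical capacity exists, and the application
# addresses the logical volume without knowing which backend holds it.
# All names here are hypothetical, for illustration only.

class PhysicalDevice:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

class VirtualPool:
    """Aggregates physical devices and hands out logical volumes."""
    def __init__(self, devices):
        self.devices = devices
        self.volumes = {}          # volume name -> (device, size_gb)

    def create_volume(self, name, size_gb):
        # Place the volume on any device with enough free capacity;
        # the caller never sees which one was chosen.
        for dev in self.devices:
            if dev.free_gb >= size_gb:
                dev.free_gb -= size_gb
                self.volumes[name] = (dev, size_gb)
                return name
        raise RuntimeError("pool exhausted")

    def backing_device(self, name):
        return self.volumes[name][0].name

pool = VirtualPool([PhysicalDevice("array-a", 100),
                    PhysicalDevice("array-b", 500)])
pool.create_volume("app-data", 80)
pool.create_volume("app-logs", 50)   # lands on array-b: array-a is nearly full
```

The application only ever refers to "app-data"; migrating the volume to another array is just an update to the mapping, which is what lets capacity be treated as a utility.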
I think virtualized I/O relates to applications and OSs (operating systems) running in virtual machines that don't necessarily have a tightly coupled relationship with storage but can generate an I/O request and have the infrastructure determine how to get to that storage resource. And there are aspects of the underlying technologies, like PCI Express and Fibre Channel, that support this. One of the things that a couple of sessions will talk about is N-Port ID virtualization (NPIV) and the ability to direct I/O, with an underlying infrastructure flexible enough to connect any server computing platform to any storage resource.

Convergence is a popular debate on our site, as is the battle over FCoE and iSCSI. I am assuming this debate will populate the sessions in your track. How do you address this battle and help users make decisions?
Really, the objective is always going to be the same: to connect servers to storage, and there have been a number of places where convergence can occur. With the current strategy, it's not so much a battle between iSCSI and Fibre Channel as it is just another convergence opportunity -- a chance to leverage underlying technologies and the large investments that already exist in Fibre Channel infrastructure. One of the things that FCoE provides is an opportunity to leverage that Ethernet backbone infrastructure, and to continue to invest heavily in it as an underlying physical transport, while still being able to utilize the Fibre Channel SAN devices that have been a part of the data center for a long time.
iSCSI is another option, traditionally targeted not so much at super-high-performance enterprise data center environments but at the small and medium environments that could never really afford to get involved with Fibre Channel solutions. And there are some very well-performing environments based on iSCSI as well.
We're going to continue to have investments in both technologies, and it's less often that people are starting from scratch. [Most] people are coming from a place where they have already made heavy investments in technology and would like to conserve that while still being able to converge, and that's what Ethernet provides.
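The point about conserving the Fibre Channel investment while converging on Ethernet comes down to encapsulation: FCoE carries the Fibre Channel frame unchanged inside an Ethernet frame. A much-simplified sketch of that wrapping follows (the EtherType 0x8906 is the real IEEE assignment for FCoE; everything else here omits the FC-BB-5 details):

```python
import struct

FCOE_ETHERTYPE = 0x8906   # IEEE-assigned EtherType for FCoE

def fcoe_frame(dst_mac, src_mac, fc_frame):
    """Wrap an unmodified Fibre Channel frame in an Ethernet frame.

    Simplified for illustration: the real encapsulation (FC-BB-5)
    also carries a version field, SOF/EOF delimiters and padding,
    and relies on lossless Data Center Bridging Ethernet underneath.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

# A placeholder FC frame; a real one begins with an FC header carrying
# the source/destination FC IDs, exactly as it would on a native SAN.
fc = bytes(36)
frame = fcoe_frame(b"\x01\x02\x03\x04\x05\x06",
                   b"\xaa\xbb\xcc\xdd\xee\xff", fc)
assert frame[12:14] == b"\x89\x06"   # EtherType says "Fibre Channel inside"
```

Because the FC frame rides through untouched, the zoning, management tools, and SAN devices already in the data center keep working; only the physical transport underneath changes.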
We will look at techniques that are used within a storage environment -- for example, spindle aggregation, the use of solid-state disks, and various quality-of-service strategies that help accelerate performance. Where storage has traditionally been a bottleneck, in terms of the hard disk and the need to position data on it, [we're] going to talk about different ways to get around that … and also, with the advent of solid state, ways applications can gain higher performance.

You will hold a "deep-dive session" where you will address aspects that are and aren't critical to storage network performance. What aspects aren't critical?
What I am going to focus on in my session is paths to performance. We're going to take a look at different techniques that are used to accelerate performance and take advantage of underlying bit rates. There are some situations where you can turn all the knobs and optimize bit rate and bandwidth aggregation, but that may or may not work, depending on the kind of data access the application is performing. We'll take a look at what gives the best bang for the buck in accelerating performance and what really might be more myth.

How prepared are networking folks to deal with the storage issue as a whole?
That's one of the challenges in this whole idea of convergence. Traditionally, the server-storage I/O relationship didn't really include networking. Now we're taking those technologies and asking for a technical convergence, and there's an operational convergence that comes into play as well. That's a major area we need to take a look at -- sensitizing network personnel to storage performance requirements.