Without question, managing a large, active network is difficult. As network technology evolves and networks grow, new tools must be developed to manage these networks. Fortunately, commercial and academic efforts continue to develop new network management tool options, and work continues as large multi-tenant public clouds gain traction.
Much of the difficulty associated with network management systems stems from the fact that they must meet multiple goals, such as:
- Configure each component so packets from application code executing on one host successfully reach code running on a different host;
- Reconfigure components every time components or interconnects are added or removed, and do so while continuing to achieve the first goal; and
- Operate the network in the most efficient and cost-effective manner.
Simply meeting the first goal is not easy. Configuring each network component requires a long series of commands. A single error can disrupt packet traffic through that component, and an error in one component can cause problems throughout the network.
Adding, removing or upgrading network components may require configuration changes throughout the network, which must be done in a coordinated manner. It's sometimes necessary to shut down the network while carrying out the change. Errors are more likely to occur when operating under time constraints.
The third goal -- efficiency -- has been greatly aided by the recent development of software-defined networking (SDN). Traditional protocols, such as spanning tree and Open Shortest Path First, determine routes by exchanging packets between components. A link or component failure often requires an additional round of packet exchanges before routes converge, so recovery can take time.
SDN optimizes path selection by centralizing forwarding decisions in a controller with visibility across all network components. The OpenFlow protocol, widely adopted by those deploying SDN networks, specifies switch architecture and provides a set of commands to direct individual packet flows. Link or component failures are quickly detected, enabling the controller to update paths without a lengthy packet exchange.
OpenFlow was originally developed at Stanford and was then adopted by the Open Networking Foundation. Academic research has continued with a goal of further improving network efficiency and simplifying network management tasks.
Merlin: Part of the Frenetic Project
One of those efforts is the Frenetic Project, which develops network management tool options to build and manage efficient networks. Faculty and graduate students at Princeton, Cornell and several other universities have contributed work on tools such as the Frenetic and Pyretic languages, which speed network policy development by replacing a large number of individual OpenFlow commands with a few high-level commands.
The Merlin effort extends Frenetic's goals by developing a network management tool and framework that views the network as a whole. While OpenFlow directs flows through switches, Merlin addresses the complete data path, including switches plus additional middlebox functions such as deep-packet inspection (DPI), network address translation (NAT), packet shaping and other tasks. It generates OpenFlow commands to manage switches and the appropriate command formats for other network devices and services.
The Merlin network management tool defines a specification language and a compiler. Each language statement specifies a policy for an individual packet flow or a group of flows between sources and destinations. Flows are identified by IP address and can be further identified by port number and protocol ID. Operations such as DPI or NAT can be specified for each path or paths. The language also supports a method for delegating sub-policy development to individual tenants in an environment such as a public cloud.
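A policy statement along these lines might look like the following sketch. It is written in the spirit of Merlin's published syntax, but the exact keywords and addresses are illustrative, not authoritative:

```
(* Hypothetical sketch: web traffic from 10.0.1.5 to 10.0.2.9 must
   traverse a DPI function and then a NAT function before delivery. *)
[ x : (ip.src = 10.0.1.5 and ip.dst = 10.0.2.9 and tcp.dst = 80)
      -> .* dpi .* nat .* ]
```

The flow is identified by source and destination IP address plus destination port, and the right-hand side names the middlebox functions the path must visit, in order.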
Maximum, minimum bandwidth allocated
Additionally, the Merlin project provides a way to specify maximum or minimum bandwidth allocation. An individual flow or a set of flows can be limited to a maximum or guaranteed a minimum. For example, a specific video stream could be identified by source and destination IP address, or by port and protocol identifier, and then assigned a minimum and maximum bandwidth allocation. Alternatively, all video streams to or from a specific end node could be limited. Allocations can apply to all traffic to or from an end node, or to traffic between a specific pair of end nodes.
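A bandwidth constraint attaches to a named flow. Again, this is a sketch modeled on Merlin's published examples; the exact function names and rate syntax are illustrative:

```
(* Hypothetical sketch: cap one video stream at 10 Mb/s while
   guaranteeing it at least 2 Mb/s. *)
[ v : (ip.src = 10.0.1.5 and ip.dst = 10.0.2.9 and udp.dst = 5004)
      -> .* ],
max(v, 10Mbps) and min(v, 2Mbps)
```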
The Merlin compiler computes a complete mapping of packet flows to paths through the network. The map is based on input describing capabilities and locations of each network component and the interconnects between them. Components may be physical devices or may be virtual and implemented within a host system.
The compiler uses linear programming techniques to determine an optimal allocation of paths through the network that satisfies all of the constraints required by the set of input statements. To do that, the compiler must select paths that meet two sets of constraints.
First, each path must pass through any required middlebox functions in the order specified. Second, paths must be chosen that satisfy all bandwidth constraints within the available capacity. The compiler can also incorporate criteria such as minimizing hop count or balancing unallocated capacity across the network while still meeting the constraints.
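The two constraint sets can be illustrated with a toy path search. This is a deliberate simplification: Merlin's actual compiler formulates the problem as a linear program, and every node name and number below is hypothetical.

```python
# Toy illustration of Merlin's two constraint sets (NOT the real LP
# compiler): find a path that visits the required middlebox functions
# in order and has enough spare bandwidth on every link it uses.

def find_path(graph, capacity, src, dst, waypoints, demand):
    """graph: node -> set of neighbor nodes.
    capacity: (u, v) -> spare bandwidth on that link, in Mb/s.
    waypoints: middlebox nodes the path must visit, in order.
    demand: bandwidth the flow needs on every link of its path."""
    def dfs(node, remaining, path):
        if node == dst and not remaining:
            return path                      # reached dst, all waypoints visited
        for nxt in graph.get(node, ()):
            if nxt in path or capacity[(node, nxt)] < demand:
                continue                     # avoid loops and over-full links
            left = remaining[1:] if remaining and remaining[0] == nxt else remaining
            found = dfs(nxt, left, path + [nxt])
            if found:
                return found
        return None
    return dfs(src, list(waypoints), [src])

# Tiny topology: host h1 -> switch s1 -> (DPI box, or a direct low-capacity
# link) -> switch s2 -> host h2.
graph = {'h1': {'s1'}, 's1': {'dpi', 's2'}, 'dpi': {'s2'}, 's2': {'h2'}}
capacity = {('h1', 's1'): 100, ('s1', 'dpi'): 50, ('dpi', 's2'): 50,
            ('s1', 's2'): 10, ('s2', 'h2'): 100}

print(find_path(graph, capacity, 'h1', 'h2', ['dpi'], demand=20))
# ['h1', 's1', 'dpi', 's2', 'h2'] -- routed through the DPI box
print(find_path(graph, capacity, 'h1', 'h2', ['dpi'], demand=60))
# None -- the links through the DPI box cannot carry 60 Mb/s
```

The real compiler solves this jointly for all flows at once, which is why linear programming is needed rather than a per-flow search like this one.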
Network management tool policies
Policy creation for portions of the network can be delegated in multi-tenant networks. Tenants can define sub-policies that govern flows within the tenant's portion of the network. The compiler verifies that no sub-policy violates a system-wide policy, so sub-policies must be at least as restrictive as the system-wide policy they refine. For example, if total bandwidth between two end nodes is specified, a sub-policy can specify a lesser allocation for a specific flow or group of flows between those end nodes, but not a greater allocation. Similarly, if system-wide policies do not require flows between nodes to pass through DPI, a sub-policy could add that requirement for specific flows.
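The verification rule -- a sub-policy may tighten but never loosen its parent -- can be sketched as a simple check. The field names and the reduction of a policy to three fields are hypothetical simplifications, not Merlin's actual representation:

```python
# Hypothetical sketch of the sub-policy check: accept a tenant's
# sub-policy only if it is at least as restrictive as the system-wide
# policy it refines. Field names are illustrative.

def subpolicy_ok(parent, sub):
    """Each policy: {'max_bw': Mb/s cap, 'min_bw': guaranteed Mb/s,
    'require_dpi': bool}."""
    return (sub['max_bw'] <= parent['max_bw']      # may not claim more bandwidth
            and sub['min_bw'] <= parent['max_bw']  # guarantee must fit under the cap
            # may add a DPI requirement, but may not drop an existing one:
            and (sub['require_dpi'] or not parent['require_dpi']))

system = {'max_bw': 100, 'min_bw': 0, 'require_dpi': False}
print(subpolicy_ok(system, {'max_bw': 40, 'min_bw': 10, 'require_dpi': True}))
# True -- tighter cap, modest guarantee, DPI added
print(subpolicy_ok(system, {'max_bw': 200, 'min_bw': 0, 'require_dpi': False}))
# False -- asks for more bandwidth than the system-wide policy allows
```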
Policies are enforced by code embedded in end nodes or network components. The code in each location is responsible only for enforcing policies for streams that will pass through that device. This code may also collect information, such as packet counts and other information valuable for network monitoring and problem resolution.
Merlin developers have tested a prototype network management tool in a small network, but much work remains to move the concepts to a commercial environment. The cloud administration component that allocates applications to hosts and virtual machines must generate Merlin input statements specifying IP addresses, bandwidth requirements and intermediate processing. Work also must be done to interface Merlin to the variety of middleboxes, but by demonstrating a way to compute and optimize complete data paths, Merlin has already advanced the development of tools to increase network efficiency.