You're presented with the challenge of integrating OpenStack into your existing network. Unfortunately, you don't know the first thing about OpenStack -- let alone OpenStack networking. Let's walk through the basic concepts involved in integrating a private OpenStack implementation into an existing network. First, a brief introduction.
OpenStack consists of three core components: compute (Nova), storage (Cinder) and networking (Neutron). We will focus on Neutron, and it's important to understand its purpose. Neutron provides a tenant-facing application programming interface (API) for the consumption of OpenStack network resources. Tenants leverage Neutron's APIs to provision networks and associated IP addressing for compute components. In addition, plug-ins may provide higher-level services such as intrusion detection, firewalls and load balancing on the tenant network.
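As a rough illustration of what "provisioning via Neutron's APIs" looks like on the wire, the sketch below builds the JSON request bodies that the Neutron v2.0 networking API expects for creating a network and a subnet. The endpoint URL and the `"NET-ID"` placeholder are assumptions for illustration; a real client would POST these bodies with an authentication token.

```python
import json

# Hypothetical Neutron v2.0 endpoint -- replace with your cloud's URL.
NEUTRON_URL = "http://controller:9696/v2.0"

def network_payload(name):
    """Request body for POST /v2.0/networks (Neutron networking API)."""
    return {"network": {"name": name, "admin_state_up": True}}

def subnet_payload(network_id, cidr):
    """Request body for POST /v2.0/subnets: attaches IPv4 addressing
    to an existing network identified by its UUID."""
    return {"subnet": {"network_id": network_id,
                       "ip_version": 4,
                       "cidr": cidr}}

# A tenant creating a "dev" network and a /24 subnet for its VMs.
print(json.dumps(network_payload("dev")))
print(json.dumps(subnet_payload("NET-ID", "10.10.0.0/24")))
```

This is only the payload-building step; sending the requests and handling the returned UUIDs is left out for brevity.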
Networks to consider when integrating OpenStack
There are four local networks within OpenStack that must be considered when integrating with an existing data center network. According to the OpenStack training documentation, the networks are:
- Management network: Used for internal communication between OpenStack components. The IP addresses on this network should be reachable only within the data center.
- Data network: Used for virtual machine (VM) data communication within the cloud deployment. The IP addressing requirements of this network depend on the OpenStack networking plug-in in use.
- External network: Used to provide VMs with Internet access in some deployment scenarios. The IP addresses on this network should be reachable by anyone on the Internet.
- API network: Exposes all OpenStack APIs, including the OpenStack networking API, to tenants. The IP addresses on this network should be reachable by anyone on the Internet. This may be the same network as the external network.
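The reachability rules above can be condensed into a small lookup, useful when auditing firewall policy for an OpenStack deployment. The scope strings below are simply a restatement of the list, not OpenStack terminology.

```python
# Reachability scope of OpenStack's four local networks, per the list above.
NETWORK_SCOPE = {
    "management": "data center only",
    "data":       "plug-in dependent",
    "external":   "Internet-reachable",
    "api":        "Internet-reachable",  # may share the external network
}

def internet_facing(networks=NETWORK_SCOPE):
    """Return the networks whose addresses must be publicly reachable."""
    return sorted(n for n, scope in networks.items()
                  if scope == "Internet-reachable")

print(internet_facing())  # -> ['api', 'external']
```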
The external network, API network and management network are straightforward concepts that are very similar to existing data center applications. These networks are visible and managed, just as traditional data center network segments would be managed. The potential black hole in OpenStack networking is the data network. Network virtualization is the default mechanism for creating multitenant networks, which can include duplicate network environments -- think dev and test. Each tenant's data network is virtualized and requires an OpenStack network gateway node to reach networks outside of its environment.
Leveraging distributed virtual switches to create an overlay
Most network engineers are accustomed to interacting with virtual networks via trunk ports that have been connected to a hypervisor such as Hyper-V or vSphere. These networks are normally extensions of the enterprise data network. It's common for a VM to communicate with a physical host connected to a switch port on a completely different physical switch. This would be uncommon in a standard OpenStack deployment. The virtual networks associated with the OpenStack-hosted VMs are normally overlays that are invisible to the underlying data center network.
OpenStack leverages distributed virtual switches across multiple hypervisors to create an overlay network. Tenants can leverage network functions virtualization (NFV) devices to create complex, multilayer virtual networks.
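These overlays are commonly built from VXLAN (or GRE) tunnels between the distributed virtual switches. As a sketch of why duplicate tenant networks can coexist on one underlay, the following builds the 8-byte VXLAN header defined in RFC 7348: each tenant's traffic carries a distinct 24-bit VXLAN Network Identifier (VNI), so identical IP subnets never collide. The VNI values are arbitrary examples, and this models only the wire format, not Neutron's own code.

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348): the valid-VNI flag,
    reserved bytes, and the 24-bit VXLAN Network Identifier that keeps
    each tenant's overlay traffic separate on the shared underlay."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # First word: flags byte 0x08 (I bit set) followed by 3 reserved bytes.
    # Second word: VNI in the top 24 bits, final byte reserved.
    return struct.pack("!II", 0x08 << 24, vni << 8)

# Two tenants with identical 10.0.0.0/24 subnets stay isolated because
# their encapsulated frames carry different VNIs.
dev_hdr = vxlan_header(5001)
test_hdr = vxlan_header(5002)
assert dev_hdr != test_hdr and len(dev_hdr) == 8
```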
Just as in the physical world, the only method for these two logically separated networks to communicate would be via some type of gateway. In OpenStack networking, this would be a gateway node, which has a connection to both the virtual network via a virtual switch port and connection to the physical data center network. In a provider use case, this connectivity would typically be leveraged to provide access to the Internet. In the enterprise, these gateways may provide access to the actual private data network in order to connect legacy applications to new cloud-based applications.
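The gateway's role can be modeled as a node with one leg on the tenant overlay and one leg on the physical data center network, forwarding only traffic that must leave the overlay. This is a toy model with made-up subnets, not an OpenStack component; it just makes the "only path out is the gateway" rule concrete.

```python
import ipaddress

class GatewayNode:
    """Toy model of an OpenStack network gateway node: one leg on a
    tenant's overlay subnet, one leg on the physical data center
    network. Traffic leaves the overlay only through this node."""

    def __init__(self, overlay_cidr, physical_cidr):
        self.overlay = ipaddress.ip_network(overlay_cidr)
        self.physical = ipaddress.ip_network(physical_cidr)  # second leg

    def forwards(self, src, dst):
        """Forward only overlay-sourced traffic bound off the overlay;
        intra-overlay traffic never touches the gateway."""
        src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
        return src in self.overlay and dst not in self.overlay

gw = GatewayNode("10.10.0.0/24", "192.168.50.0/24")
print(gw.forwards("10.10.0.5", "192.168.50.9"))  # True: legacy app reached via gateway
print(gw.forwards("10.10.0.5", "10.10.0.6"))     # False: stays inside the overlay
```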
There are a number of vendors looking to help pull this entire picture together. For example, both VMware (NSX) and Juniper (OpenContrail) offer virtualized network products designed to integrate with an enterprise OpenStack implementation. Another option is simply to leverage the native Open vSwitch (OVS) without any additional management plug-ins.