
WAN technologies summarized

This excerpt from 'The Shortcut Guide to Optimizing WAN Application Delivery' introduces current and existing WAN environments and technologies. Topics include point-to-point links, circuit and packet switching methodologies, virtual circuits, dial-up services, and WAN endpoint devices.


WAN data communications span broad, geographically dispersed areas and are facilitated by the transmission services of common carriers, as depicted in Figure 3.5. WAN technologies generally function at Layers 1 through 3 of the OSI reference model, which is also where most WAN engineering work takes place. However, WAN appliances can also be "application aware," in that they are capable of analyzing application protocol streams and manipulating their behavior according to policy.


Figure 3.5: A key benefit of the "Managed WAN Cloud" is its ability to accommodate different kinds of WAN links for ingress and egress.

Point-to-point links

An established, individual communications path from subscriber to provider is referred to as a point-to-point link. In this arrangement, a carrier network (such as a local telephone company) provides a direct connection via leased lines (which may include copper wiring and other necessary hardware, such as CSU/DSU units) to the customer's premises. Accordingly, the links at both endpoints generally use the same service provider's network arrangements.

Circuits are normally priced according to bandwidth requirements and the distance between the two connection points. Point-to-point links are typically priced higher than Frame Relay links but also provide permanently established, exclusive connectivity between provider and subscriber regardless of the extent to which allocated bandwidth may be utilized. Another common term for such a link is leased line (which refers to the ongoing reservation of the connection between the two endpoints).

Circuit switching

Using circuit-switched communications, data paths are formed as needed and terminated when such use ceases. This setup operates much like a typical telephone network in that "conversations" are dynamically created and terminated, existing only for the duration of the "call" (which is actually an active data connection between at least two parties).

ISDN is a primary example of this kind of technology: a switched circuit is initiated whenever a router possesses data for a remote site, which essentially places a direct-dial call into the remote site's circuit. Once the two parties are authenticated and connected, they begin the transfer of data from source to destination. Upon completion, the call terminates.

Packet switching

WAN packet-switching technology uses a shared carrier infrastructure, unlike the private, one-to-one pairings used in a circuit-switched network arrangement. This scenario enables the carrier to make more efficient use of its infrastructure, often resulting in lower subscriber costs for similar levels of service. In a packet-switched environment, a shared WAN medium is distributed among a broad subscriber base, and virtual connections are created between subscriber sites for packet delivery.

Such a topology is called a cloud and includes protocols such as Asynchronous Transfer Mode (ATM), Frame Relay, Switched Multimegabit Data Services (SMDS), and -- less commonly in the US -- X.25. Packet-switched connectivity is ideal for organizations whose WAN traffic is "bursty" or variable in nature and does not require strictly dedicated bandwidth or always-on WAN links.

WAN virtual circuits

So-called virtual circuits may be defined as any logical circuit created between two network devices across a shared network medium. There are two types: switched and permanent virtual circuits.

Switched virtual circuits are dynamically established on-demand and terminated once interaction between the two linked parties ceases. Three phases define a switched virtual circuit's operation: circuit establishment (connect parties), data transfer (exchange information), and circuit termination (end conversation). This setup is analogous to making a telephone call, which follows this sequence: dial the other party and establish the call (connect parties), conduct a conversation (exchange information), then disconnect the call (end conversation).
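As a toy illustration of those three phases (the class and method names here are invented for demonstration, not any vendor's API), a switched virtual circuit can be sketched as a simple state machine:

```python
from enum import Enum, auto

class Phase(Enum):
    IDLE = auto()
    ESTABLISHED = auto()

class SwitchedVirtualCircuit:
    """Toy model of an SVC's lifecycle; names are illustrative only."""
    def __init__(self):
        self.phase = Phase.IDLE
        self.log = []

    def establish(self):
        # Phase 1: circuit establishment (connect parties)
        self.phase = Phase.ESTABLISHED
        self.log.append("circuit establishment")

    def transfer(self, data):
        # Phase 2: data transfer (exchange information)
        if self.phase is not Phase.ESTABLISHED:
            raise RuntimeError("circuit not established")
        self.log.append(f"data transfer: {data!r}")

    def terminate(self):
        # Phase 3: circuit termination (end conversation)
        self.phase = Phase.IDLE
        self.log.append("circuit termination")

svc = SwitchedVirtualCircuit()
svc.establish()
svc.transfer("hello")
svc.terminate()
print(svc.log)
```

As with a telephone call, attempting a transfer before establishment (or after termination) fails, because the circuit exists only between those two phases.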

Once initially established, permanent virtual circuits (PVCs) remain locked into data transfer mode thereafter for an indefinite period of time. (Though such circuits can and do go down occasionally for various reasons, they are considered "always up" by virtue of their operation.) PVCs are utilized in situations in which data exchanges are constant and ongoing between two points. These circuits involve increased costs for usable bandwidth owing to their constant availability, but that bandwidth also comes with availability and reliability guarantees.

WAN dial-up services

Dial-on-demand routing and dial backup are two popular and cost-effective dial-up services for WAN connectivity. Dial-on-demand dynamically initiates calls when data is ready to be sent and uses an idle timer that disconnects the call after a set period of inactivity.
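The idle-timer behavior can be sketched in a few lines. This is an illustrative simulation with invented names, not a router configuration or any real device's API:

```python
class DialOnDemandLink:
    """Toy simulation of dial-on-demand routing with an idle timer."""
    def __init__(self, idle_timeout):
        self.idle_timeout = idle_timeout   # seconds of inactivity before hangup
        self.connected = False
        self.last_activity = None

    def send(self, now, packet):
        # Outbound traffic places the call if the link is down,
        # and resets the idle timer either way.
        self.connected = True
        self.last_activity = now

    def tick(self, now):
        # Periodic check: hang up once the idle timer expires.
        if self.connected and now - self.last_activity >= self.idle_timeout:
            self.connected = False

link = DialOnDemandLink(idle_timeout=120)
link.send(0, "interesting traffic")   # call is placed
link.tick(60)                         # 60 s idle: still connected
link.send(90, "more traffic")         # activity resets the timer
link.tick(300)                        # 210 s idle: call is torn down
print(link.connected)                 # False
```

The key point is that the subscriber pays for connect time only while traffic is actually flowing (plus the idle window), which is what makes the service cost effective.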

Dial backup uses switched circuits to provide backup services for a primary circuit, such as a point-to-point or packet-switched link. Routers are configured to detect failure and fail over automatically until a disrupted primary connection is restored.

WAN devices

A typical WAN comprises numerous networking devices, most of which are not unique to the WAN environment itself. Modems, switches, and servers are general-purpose elements in every business computing landscape. These devices bridge network connectivity among LAN and WAN segments, and each type offers its own advantages and drawbacks. Let's examine each representative category in turn.

WAN switches

Typical LAN-based Ethernet switches are multiport networking devices used in localized environments. Similarly, WAN switches perform identical functions for distributed networking contexts. They operate at the data-link layer (OSI Layer 2) and switch traffic for services such as Frame Relay and SMDS.

Access servers

Central dial-in/dial-out gateways for dial-up connections are called access servers. These devices provide asynchronous devices with access to LAN and WAN networking equipment. Network access servers function as control points for roaming and remote users so that they can access internal resources (or connect to an ISP) from external locations.

Analog modems

An analog modem translates between analog and digital signaling. This enables data-bearing communications to transmit via voice-based telephony. Digital signals are converted into an analog format suitable for transmission through analog carriers and then restored to digital format on the receiving end.

Channel service unit/data service unit

A channel service unit (CSU) connects digital phone line subscribers to the telephone service provider's network equipment. End-user equipment (for example, routers and computers) interfaces with this modem-like device to access network resources through the provider's local digital telephone loop, such as a T1, E1, or DS-3.

The device that connects the CSU to data terminal equipment (DTE) is called the data service unit (DSU). It adapts the physical interface on the DTE to the provider's transmission facility (for example, E1, T1, or DS-3) and converts between subscriber and provider protocol formats.

ISDN terminal adapter

An ISDN terminal adapter is like a modem in that it joins Basic Rate Interface (BRI) connections to different physical interfaces on a router. Unlike a modem, it does not convert between analog and digital signaling.

Understanding the WAN optimization landscape

Because of the many types of WAN links in use, there are likewise many challenges to and best practices for making the most of WAN optimization, where no two vendor solutions and strategies are exactly alike (though most share numerous elements in common). Some optimization techniques are incremental and provide only marginal performance improvements for application-specific traffic flows. Other optimization solutions are instrumental to sustaining long-term performance-enhancing goals and can fundamentally change the way certain network protocols operate -- most of which occurs transparently to the application and its end users. In addition, some vendors provide different classes of accelerated performance in their optimization products, including such major enterprise applications as Microsoft Office SharePoint, SAP, and Oracle Financials.

One key aspect of accelerating fundamental WAN performance comes from changing undesirable behaviors, primarily by eliminating excessive, repeated, and wasteful transmissions. The most efficient strategy for achieving this goal is to avoid unnecessary data transmissions altogether. Data caching, data compression, and data reduction are three techniques that have proven able to provide measurable benefits in this regard. These strategies have been discussed in preceding chapters, so they need not be redefined here. Extensive acceleration strategies go well beyond these concepts, but WAN optimization absolutely involves multiple forms of data caching, compression, and reduction.

Actual data reduction implementations and methods vary widely among vendors and product platforms. For the purposes of this chapter, it suffices simply to distinguish between data caching and data reduction approaches (data compression is a separate, independent technique).

Data reduction and data caching compared and contrasted

Data reduction has the following advantages over data caching:

  • Application breadth -- Reduction strategies detect patterns across diverse types of traffic, whereas caching takes an application-specific, object-level orientation. Data reduction is an endpoint-oriented technique that reduces traffic volume by sending placeholders in place of duplicate data (which is restored and reissued on the other side). Byte-level granularity provides higher resolution by indexing blocks of patterns in network traffic, even when an application protocol (such as backup or replication) performs similar functions at another level (especially the file, block, or object level). When repeated sequences are replaced in data flows, corresponding index or dictionary references -- not the repeated data elements -- are sent across the WAN link. This approach offers sometimes extraordinary reductions in data volume.
  • Application transparency -- No client-server modification is necessary to employ data reduction methods, but some caching environments require individual endpoint configurations for all participating clients.
  • Coherency -- Preservation of client-server communications eliminates the chances of delivering stale or inconsistent information when employing data reduction strategies. Maintaining cache currency and coherence can involve significant amounts of processing activity and WAN communications.
  • Effectiveness -- Data reduction operates at the byte level instead of the object level used by data caching techniques. This offers a more granular, higher-resolution hit rate when detecting duplicate information, including information contained within apparently different objects.
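To make the dictionary-reference idea above concrete, here is a deliberately simplified sketch. It uses tiny fixed-size chunks and invented helper names; real products use far more sophisticated, variable-length indexing:

```python
# Byte-level data reduction, toy version: repeated chunks are replaced
# by short dictionary references before crossing the WAN.
CHUNK = 8  # tiny fixed-size chunks for demonstration only

def reduce_stream(data: bytes, dictionary: dict):
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        if chunk in dictionary:
            out.append(("ref", dictionary[chunk]))    # send an index, not data
        else:
            dictionary[chunk] = len(dictionary)
            out.append(("raw", chunk))                # first sight: send the bytes
    return out

def restore_stream(tokens, dictionary: dict):
    by_index = {v: k for k, v in dictionary.items()}
    return b"".join(t[1] if t[0] == "raw" else by_index[t[1]] for t in tokens)

payload = b"HEADERAAHEADERAAHEADERAA"   # highly repetitive traffic
shared_dict = {}                        # both appliances build the same index
tokens = reduce_stream(payload, shared_dict)
# Only the first copy of the chunk travels as raw bytes; the rest are
# small references, and the restored stream is byte-identical.
assert restore_stream(tokens, shared_dict) == payload
```

In practice the sending and receiving appliances each maintain a synchronized long-term dictionary, which is what allows references rather than data to traverse the link.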

Both data caching and data reduction employ a common data management strategy: that is, both depend upon a central point of access that also acts as the controlling authority for endpoint-to-endpoint transactions. Despite vast differences among vendor platforms and products, there are several noteworthy aspects common to all such implementations:

  • Long-term timeline storage -- Highly effective data reduction strategies leverage native drive storage housed in network appliances to maintain historical snapshots of application and protocol behavior. Access to several months of traffic patterns and data efficiently eliminates duplicate data delivery over the long term by recognizing extensive recurring patterns that may otherwise be missed in shorter-term trend analyses.
  • Effective capacity -- Vendors differ greatly in the methods they employ to parse and store data as part of a data reduction strategy. Some are more efficient than others, and make more effective use of available storage. Bidirectional solutions operate using two-way communications and can optimize flows in both directions, whereas unidirectional strategies must manage data separately for every individual traffic flow.
  • Application breadth -- Data reduction solutions operate at the network layer of the TCP/IP network stack to support any transport protocol including UDP. Solutions that specifically target TCP flows are designed to footprint and store bulk TCP application data (such as file transfers and email messages). Support for UDP streams expands the breadth of supported applications (including VoIP as used for IP Telephony and related services, and the Real Time Streaming Protocol -- RTSP, as used for streaming media playback over the Internet, primarily for entertainment videos).
  • Data protection -- Data reduction solutions take protective measures to safeguard end-user information, which usually involves the application of encryption mechanisms. Compression and reduction strategies work well on repetitive data elements, but effective encryption randomizes such data and renders those strategies ineffective. SSL acceleration that originates and terminates on the WAN optimizer expedites overall traffic by permitting optimization mechanisms to operate even within encrypted (and therefore otherwise unintelligible) transmission streams. Essentially, this involves sharing keys or certificates, decrypting data streams in the device to seek out repetition, applying data reduction and caching techniques, then re-encrypting the reduced output for transmission across the WAN. The benefits of WAN optimization usually outweigh the associated overhead, making this approach entirely cost effective.
  • Granular matching -- Each solution also differs in how it seeks matching data patterns both in the granularity of the search employed and the resulting long-term database fingerprints. Some solutions work well for duplicate data strings or streams sent in rapid succession but may be ineffective when working with derived data or duplicates sent after older data ages out of the cache.
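The granularity point above can be illustrated with a sliding-window index. This is a hypothetical sketch; production systems use rolling hashes and content-defined chunking rather than literal byte windows, but the principle is the same: index every position so a repeated pattern is found even when it reappears at a different offset:

```python
# Byte-granular matching, toy version: index every sliding window of an
# earlier stream, then look up windows of a later stream to spot repeats.
def fingerprints(data: bytes, window: int = 4):
    index = {}
    for i in range(len(data) - window + 1):
        index.setdefault(data[i:i + window], []).append(i)
    return index

def matches(new: bytes, index: dict, window: int = 4):
    """Offsets in `new` whose window was already seen in the earlier stream."""
    return [i for i in range(len(new) - window + 1)
            if new[i:i + window] in index]

history = fingerprints(b"user=alice;quota=100")
hits = matches(b"xxquota=100yy", history)
assert hits   # 'quota=100' recurs despite the different surrounding bytes
```

An object-level cache would treat these two payloads as entirely different objects and find nothing to reuse; byte-level matching finds the embedded repetition.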

Finally, data compression seeks to reduce traffic traversing the WAN topology. Simple algorithms identify repetitive byte sequences within a single packet, whereas more sophisticated implementations go beyond the packet level to match packet sequences and entire protocol streams. Header compression provides further bandwidth gains through specialized algorithms designed for protocol-specific properties. Payload compression algorithms identify relatively short byte-pattern sequences in data-bearing protocols that recur over a measured duration, which are replaced with shorter references. Compression across various flows of traffic is called crossflow compression and works even on UDP-based traffic.
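Python's standard zlib module can illustrate why payload compression thrives on repetitive protocol data and fails on random data (which is also what encrypted traffic looks like to a compressor):

```python
import os
import zlib

# Chatty protocols repeat headers and field names, so they compress well.
repetitive = b"GET /status HTTP/1.1\r\nHost: example.com\r\n" * 100
random_ish = os.urandom(len(repetitive))   # stands in for encrypted data

small = zlib.compress(repetitive)
large = zlib.compress(random_ish)

print(len(repetitive), len(small))   # repeated sequences collapse dramatically
assert len(small) < len(repetitive) // 10
assert len(large) > len(small)       # random bytes barely compress at all
```

This asymmetry is exactly why optimizers that handle SSL traffic must decrypt it before applying reduction and compression, as noted earlier.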

In each of these strategies, a centralized analysis and control point is required to monitor and modify entire network transactions through individual conversations. The proxy appliance or proxy server fills this role and proves a highly effective tool for enhancing performance and security for given client-server needs.

WAN optimization delivers the goods

Through a consistent notion of where and how data travels, what is repeated (and thus may be copied), and which applications (and identities, services, and so forth) are utilizing WAN links, WAN optimization devices create a model for networked communications that lets them maximize the traffic that WAN links can handle and make the most of the WAN resources and bandwidth at their disposal. Enterprises and organizations also quickly realize that these devices provide a "golden opportunity" to impose and enforce policy and police network usage at the very chokepoints that unwanted traffic might otherwise overwhelm. It's rare that technology solutions pay off more handsomely than they're expected to, but WAN optimization enables savvy buyers to put their money right where the pain and pressure usually reside, and realize a veritable bonanza as a result.

In the fourth and final chapter of this e-book, we will dig into the challenges most likely to be encountered when implementing WAN optimization, and the best practices that organizations and enterprises should heed when deploying WAN optimization solutions. By the time you finish that material, you should have some good ideas about how to respond to those challenges, and which best practices you'll want to implement as well.

For more information view Ed Tittel's breakdown of WAN bandwidth optimization tools and techniques.



About the author:
Ed Tittel is a 24-year computing industry veteran who has worked as a software developer, systems engineer, trainer, and manager. He has also contributed to more than 100 computer trade books, including several college textbooks, and writes regularly for TechTarget and Tom's Hardware. Perhaps best known for creating the Exam Cram series of IT certification prep books in 1997, Ed also received the Best Networking Professional Career Achievement Award from the NPA in 2004, and has been a finalist in the "Favorite Study Guide Author" category in the Annual Reader's Choice Awards every year since those awards were launched in 2002.

This was last published in December 2008
