Ivan Pepelnjak, writing in ipSpace, received a question from a reader asking about the need for session stickiness...
and its effect on load-balancer performance. Load balancing is a workaround for the lack of a session layer in TCP: it allows a farm of servers to appear to outside devices as a single IP address, often through network address translation.
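The mechanism described above can be sketched in a few lines. This is an illustrative simulation only, not a real packet-processing implementation; the virtual IP, server addresses and round-robin choice are assumptions for the example.

```python
# Sketch of NAT-style load balancing: clients see one virtual IP, and the
# load balancer rewrites each new connection's destination to one of the
# real servers in the farm, here chosen round-robin.
import itertools

VIRTUAL_IP = "203.0.113.10"                      # address clients connect to
REAL_SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # the server farm
_rotation = itertools.cycle(REAL_SERVERS)

def translate(dest_ip):
    """Map a connection aimed at the virtual IP onto a real server."""
    if dest_ip == VIRTUAL_IP:
        return next(_rotation)
    return dest_ip  # traffic to other addresses passes through unchanged

picks = [translate(VIRTUAL_IP) for _ in range(4)]
print(picks)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```

Real load balancers also track existing connections so that packets belonging to one TCP flow keep going to the same server; only new connections are spread across the farm.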
When it comes to load-balancer performance, HTTP sessions are often sticky -- session data is saved in temporary files on one web server, tying a client's first request and all subsequent requests to that same server. But, as Pepelnjak noted, load-balancer performance and session stickiness are independent concerns.
Every web server scripting system can store client session data in a common data store shared among servers, which makes session stickiness unnecessary. Nevertheless, PHP, Python and other scripting environments often keep session data in the local file system by default, because it is always available and needs no special configuration. And even though reconfiguring the web servers to use a shared store is usually quite easy, many networking teams instead buy expensive load balancers to provide stickiness, driving up costs.
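The argument above can be made concrete with a short sketch. The shared store is simulated here with an in-memory dict; in a real deployment it would be Redis, Memcached or a database reachable by every web server. The `SharedStore` and `WebServer` names are illustrative, not from any particular framework.

```python
# Why a shared session store removes the need for sticky sessions:
# any server in the farm can pick up where another left off.

class SharedStore:
    """Stand-in for an external session store shared by all servers."""
    def __init__(self):
        self._data = {}

    def get(self, session_id):
        return self._data.get(session_id, {})

    def put(self, session_id, session):
        self._data[session_id] = session

class WebServer:
    """One member of a load-balanced farm."""
    def __init__(self, name, store):
        self.name = name
        self.store = store

    def handle(self, session_id):
        session = self.store.get(session_id)          # read shared state
        session["hits"] = session.get("hits", 0) + 1  # update it
        session["last_server"] = self.name
        self.store.put(session_id, session)           # write it back
        return session["hits"]

store = SharedStore()
farm = [WebServer("web1", store), WebServer("web2", store)]

# The load balancer alternates servers on every request, yet the session
# survives intact because the store is shared -- no stickiness required.
hits = [farm[i % 2].handle("sess-42") for i in range(4)]
print(hits)  # [1, 2, 3, 4]
```

With session state in local files instead, the second request would land on `web2` with an empty session, which is exactly the failure stickiness papers over.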
Explore more of Pepelnjak's thoughts on load balancing.
New ISP rules under fire in Congress
Drew Conry-Murray, writing in Packet Pushers, discussed a U.S. Senate vote on March 24, 2017, to repeal existing Federal Communications Commission rules that required internet service providers to gain explicit permission from customers before collecting -- and, ultimately, selling -- data on their internet usage. The vote is the first step toward removing obstacles that keep ISPs from boosting revenue through the sale of user data.
In the discussion, senators repeated statements from industry groups, citing the elimination of regulatory confusion as a justification for overturning the rules. According to Conry-Murray, the Senate's logic was that because consumers have little expectation of privacy on websites, ISPs should be treated no differently.
Conry-Murray criticized the vote because, unlike websites, which are often free to access, ISPs are "double-dipping" on customers: charging them for access and then selling their data on top.
Dig deeper into Conry-Murray's thoughts on new ISP rules.
An architectural view of software-defined perimeters
Daniel Conde, an analyst at Enterprise Strategy Group Inc., in Milford, Mass., wrote an assessment of software-defined perimeters for the IEEE Softwarization newsletter.
An SDP is a newly emerging architecture that limits network access between network elements, drawing on systems developed by the Defense Information Systems Agency, which maintains a network that restricts connections to employees on a need-to-know basis. Traditional perimeters protect only against external threats. An SDP, by contrast, can connect public clouds, data centers, internet-of-things devices or PCs, authenticating and validating hosts against security policies before arbitrating connections.
According to Conde, SDP is like microsegmentation for network security. However, setting up an SDP may disrupt the existing network and make networks more reliant on a controller. Existing security tools may not need to be changed, though, because SDP does not alter the Layer 2 (data link layer) through Layer 7 (application layer) architecture.
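The need-to-know model described above can be sketched as a default-deny policy check. This is a toy illustration, not the SDP protocol itself; the `Controller`, `POLICY` table and device names are all assumptions for the example.

```python
# Sketch of the SDP idea: a controller authenticates each device and
# checks it against policy before any connection is arbitrated.
# Unknown devices and unlisted services are denied by default.

POLICY = {
    # device_id -> set of services that device may reach (need-to-know)
    "laptop-1": {"payroll"},
    "sensor-7": {"telemetry"},
}

class Controller:
    """Arbitrates connections between hosts; default-deny."""
    def authorize(self, device_id, service):
        allowed = POLICY.get(device_id, set())
        return service in allowed

ctrl = Controller()
print(ctrl.authorize("laptop-1", "payroll"))    # True: policy permits it
print(ctrl.authorize("laptop-1", "telemetry"))  # False: not need-to-know
print(ctrl.authorize("unknown-pc", "payroll"))  # False: unknown device
```

The key design choice, and what distinguishes an SDP from a traditional perimeter, is that nothing is reachable until the controller says so, rather than everything inside the perimeter being reachable by default.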
"The architecture leverages existing technologies, such as VPN tunnels, combines it with modern concepts from SDN and microsegmentation to provide an architecture that shows promise for solving many security problems. It has demonstrated success in military security and may be extended for commercial and enterprise deployments," Conde said. "It shows promise in cloud-based applications that fundamentally exhibit distributed deployment models and do not fit into a traditional perimeter security model," he added.
Read more of Conde's thoughts on SDP.
How to get the most out of a server load balancer
Looking into the 2016 FCC ISP rules update
Using a software-defined perimeter