A large website operator is using the scripting capabilities of Riverbed Stingray application delivery controllers to enable its DevOps team to automate application delivery policies as it deploys new applications and services.
DM Services Inc. runs Stingray application delivery controllers (ADCs) on two Solaris servers to handle 25 million daily page views and a peak flow of 80 Mbps from the public Internet to Vivastreet, a popular classified ads site serving Western Europe and South America. W3 Ltd., which owns Vivastreet, uses DM Services to provide both infrastructure and services from its New York-based data center.
"Stingray Traffic Manager balances the load of all our global Web requests to the website," said Ben Block, chief technology officer of DM Services. "We are in 19 countries. The load balancers will make various decisions based on different criteria and send that off to different application servers."
Prior to deploying Riverbed Stingray, DM Services load-balanced Vivastreet with the open source HAProxy load balancer, which was fully capable of handling the site's traffic but lacked the scripting capabilities Block's team needed to build complex load-balancing rules.
DM Services needed to write scripts that stripped out header or cookie information and redirected traffic based on cookies or IP addresses. The contractor also wanted to strip out debugging information used by the development team, which the infrastructure team didn't want users to see when the website loaded in their browsers. Stingray's TrafficScript capabilities fulfilled this need, Block said.
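TrafficScript itself is Riverbed's proprietary language and runs inside the ADC, but the kind of rule DM Services describes can be sketched in plain Python. The header names, cookie values and pool names below are invented for illustration, not taken from Vivastreet's configuration:

```python
# Hypothetical sketch of the rule logic described above: strip internal
# debug headers before the response reaches the user, and pick a redirect
# target based on a cookie or the client's IP address.

DEBUG_HEADERS = {"x-debug-info", "x-internal-trace"}  # invented names

def process_request(headers, client_ip):
    """Return (cleaned_headers, redirect_target_or_None)."""
    # Strip the debugging headers the development team uses internally.
    cleaned = {k: v for k, v in headers.items()
               if k.lower() not in DEBUG_HEADERS}

    # Redirect based on a cookie value, e.g. a country-selection cookie.
    cookie = cleaned.get("Cookie", "")
    if "country=FR" in cookie:
        return cleaned, "fr-pool"

    # Redirect based on the client IP (a hypothetical internal range).
    if client_ip.startswith("10."):
        return cleaned, "staging-pool"

    return cleaned, None
```

In a real TrafficScript rule, the equivalent logic would use the ADC's own request and response objects; the sketch only shows the decision flow.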
In addition, the ADC's scripting features allowed DM Services' DevOps organization to automate the rollout and pooling of services to support Vivastreet's core application.
"It gives the operations team and the sysadmin team the ability to really help the [software] engineers to segment the services," Block said. "Let's say we have a new service we want to provide -- for example, we redeveloped our search API [application programming interface] that is used by our mobile app. That re-architected app or service was deployed to one of our Web servers when it went into a beta release. The DevOps guys were able to create a pool [of servers] just for that service, saying any request for this domain name [should] route to just this one application server. [A pool of one server was enough], because we knew it was in beta. Then they were very easily able to say, 'We're going to deploy a second application server and add it to the pool.' In that respect, the DevOps team is able to very easily control the routing of requests and where different services and applications live within our infrastructure to manage stress tests and new loads for applications or services that we aren't quite sure how they're going to behave."
The DevOps team hasn't integrated its Stingray ADCs with a configuration management platform such as Chef or Puppet, although Block said that is something the unit is considering. Instead, Block's team is focused on using Jenkins, the open source continuous integration server, to pull servers out of application pools via APIs, deploy applications on them and return them to those pools, Block said.
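The drain-deploy-restore cycle Block describes can be sketched as a rolling deploy. In practice the Jenkins job would call the ADC's management API over HTTP; here the pool is a plain set so the control flow is visible, and the deploy step is passed in as a callback:

```python
# Hedged sketch of the Jenkins-driven flow: pull each server out of its
# pool, run the deployment on it, then put it back into rotation.

def rolling_deploy(pool, deploy):
    """Deploy to each server in turn, draining one at a time.

    pool   -- a mutable set of server names currently in rotation
    deploy -- a callable that runs the deployment job on one server
    """
    deployed = []
    for server in sorted(pool):   # snapshot the pool before mutating it
        pool.discard(server)      # take the server out of rotation
        deploy(server)            # run the deployment job on it
        pool.add(server)          # return it to the pool
        deployed.append(server)
    return deployed
```

Because only one server leaves the pool at a time, the site keeps serving traffic throughout the deploy, which is the point of driving the ADC from Jenkins rather than deploying to live servers.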
"The [software] engineering team is building applications and enhancing the features and executing the roadmap day in and day out. They understand our internal application better than the DevOps guys. Sometimes they will help the DevOps guys, say this is how we should route traffic or we need to worry about this cookie. But generally, it's the DevOps guys that are reconfiguring Stingray and managing it."