Few things move faster on land than a Formula 1 race car, and few IT departments can match the blink-or-you'll-miss-it pace at which an F1 team's IT pros are required to operate.
Williams Martini Racing, a British F1 team and constructor, will compete in Grand Prix races this year in 21 countries across the globe, including Abu Dhabi, Australia, Azerbaijan, Brazil, Canada, China, Italy, Japan, Monaco and Russia. Upon arrival, each team gets access to an empty garage at the circuit -- often supplied with only a power feed -- to house the cars, parts and technology supporting the vehicles. Two of Williams' IT pros follow the team on the road, setting up Cisco wired and wireless networking gear, plus two racks of Dell's converged servers and storage, at a pace that rivals the mechanics in the pit stop.
The IT support staff typically has one-and-a-half to two days to set up the infrastructure, complete any software updates, run the cabling and test all the equipment, according to Graeme Hackland, the team's IT director. They have only four to six hours after the race to tear it down and pack it up carefully enough to survive the drive or flight to the next race.
Technology plays a huge part in F1 racing. Cars are outfitted with 200 to 300 sensors transmitting data via radio waves while they're on the track, Hackland explains. Once in the garage, the car connects back to the IP network to transmit additional metrics. At the same time, auto engineers stay glued to real-time data analytics applications to track the car's condition and performance.
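The kind of real-time condition tracking described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the sensor name, window size and alert threshold are hypothetical, not Williams' actual telemetry pipeline.

```python
from collections import deque
from statistics import mean

class RollingSensorMonitor:
    """Track a rolling average for one car sensor and flag excursions.

    Sensor names and limits below are hypothetical examples, not
    real F1 telemetry parameters.
    """

    def __init__(self, name, window, limit):
        self.name = name                      # e.g. a brake-temperature channel
        self.limit = limit                    # hypothetical alert threshold
        self.readings = deque(maxlen=window)  # keep only the newest samples

    def add(self, value):
        self.readings.append(value)

    def rolling_avg(self):
        return mean(self.readings) if self.readings else 0.0

    def over_limit(self):
        return self.rolling_avg() > self.limit

# Feed in a short stream of samples from one (made-up) sensor
monitor = RollingSensorMonitor("brake_temp_c", window=5, limit=450.0)
for sample in [430.0, 445.0, 455.0, 470.0, 480.0]:
    monitor.add(sample)
print(monitor.rolling_avg())  # 456.0
print(monitor.over_limit())   # True
```

With hundreds of channels streaming at once, the engineers' analytics applications are effectively running many such rolling computations in parallel, which is why any network downtime immediately blinds the strategy calls Hackland describes.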
As a result, network reliability, simplicity and resiliency are the team's biggest technology priorities, Hackland says.
Graeme Hackland, IT director, Williams Martini Racing
"Downtime just hurts the performance of the car. If the engineers can't see the data, they can't make aggressive calls and strategize. They can't call the car in at the right time. They can't see what their competitors are doing or react to changes they are making," he says. "We'd rather have a slightly slower network that's reliable."
The team's network is made up of a 10 Gigabit Ethernet backbone supported by two core switches, with 100 Mbps running in the access layer. Its wireless network, built with Cisco's Meraki gear, provides Wi-Fi to the garage as well as any temporary offices used by marketing staff. Cisco firewalls and strong encryption keep competitors' eyes away from the team's data. And wherever Williams competes, it always sets up a 100 Mbps MPLS link via British Telecom to the team's factory in England.
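Before a race weekend, a crew standing up a temporary network like this would typically verify that the wide-area link back to base is reachable and responsive. The sketch below shows one generic way to time a TCP handshake; the endpoint hostname is a placeholder, and this is not Williams' actual tooling.

```python
import socket
import time

def probe_link(host, port=443, timeout=3.0):
    """Time a TCP handshake to a remote endpoint.

    Returns the round-trip setup time in milliseconds, or None if the
    link is down or the host is unreachable within the timeout.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

# Hypothetical pre-session check against the factory endpoint
latency_ms = probe_link("example.com")
if latency_ms is None:
    print("link check failed")
else:
    print(f"handshake completed in {latency_ms:.1f} ms")
```

A check like this is deliberately dumb and fast, which matches Hackland's preference for simple, diagnosable mechanisms over clever ones.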
Races in desert or tropical climates pose another challenge: the garages have no traditional cooling systems, which makes it especially hard to keep systems running smoothly and maintain network reliability.
"It's not a data center anyone would recognize," Hackland says. "In the hotter countries like Malaysia, Bahrain and Abu Dhabi, we will put fans [in] just to move the air around. But in these garages, space is at a premium, so heat can be an issue. We sometimes get very close to the upper [limit of] operating temperatures of the equipment."
When it comes to exploring emerging or potentially complex technologies, Hackland says Williams takes a very conservative approach with its trackside network, because network reliability is paramount.
"We're very suspicious of things like spanning tree and all of these protocols that you can use for high availability, because they do tend to add complexity to the network. And when they go wrong, it tends to be very difficult to diagnose," Hackland says. "We can't afford to have a degradation of performance because we’re trying to be too clever."