
How leaf-spine architecture differs from fabric extenders

This week, bloggers compare leaf-spine architecture and fabric extenders, look into HPE's acquisition of SimpliVity and discuss the need for AI in SDDCs.

Ivan Pepelnjak, writing in ipSpace, tackled the difference between fabric extenders and leaf-spine architecture in a network anchored by Cisco Nexus switches.

According to Pepelnjak, fabric extenders differ from leaf-spine architecture in a few key respects. Spine nodes in a well-designed leaf-spine architecture are entirely independent, whereas Nexus switches set up as a vPC pair share configuration -- and risk. Leaf nodes, meanwhile, are independent devices in traditional leaf-spine designs, while fabric extenders act as line cards of the spine switches. As a result, all traffic in a fabric extender architecture must travel over the spine layer, in contrast to independent leaf nodes, which can switch packets locally. Above all, Pepelnjak said, because leaf-spine architectures can have more than two spines, they are more resilient, and a single failure causes fewer cascading problems. He cautioned that these characteristics apply whether the fabric is built at Layer 2 or Layer 3, using Transparent Interconnection of Lots of Links (TRILL), shortest path bridging, Virtual Extensible LAN or various vendor-specific technologies.
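The resilience argument can be illustrated with a minimal sketch. The function below (an illustrative example, not from Pepelnjak's post) assumes equal-cost multipath spreads traffic evenly across spine nodes and computes how much spine capacity survives a single node failure:

```python
# Illustrative sketch: remaining inter-leaf capacity after spine failures,
# assuming equal-cost multipath spreads traffic evenly across spines.
# Function name and model are hypothetical, chosen for this example.

def remaining_capacity_fraction(total_spines: int, failed_spines: int) -> float:
    """Fraction of spine capacity left when `failed_spines` nodes go down."""
    if failed_spines > total_spines:
        raise ValueError("cannot fail more spines than exist")
    return (total_spines - failed_spines) / total_spines

# A four-spine leaf-spine fabric loses only a quarter of its capacity
# when one spine fails.
four_spine = remaining_capacity_fraction(4, 1)   # 0.75

# A two-switch vPC pair acting as the spine loses half -- and because the
# pair shares configuration state, one bad config change can affect both.
vpc_pair = remaining_capacity_fraction(2, 1)     # 0.5
```

This simple model also hints at why shared state matters: in the vPC case, the two "spines" are not independent failure domains, so the real-world risk is worse than the capacity arithmetic alone suggests.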

Explore more of Pepelnjak's thoughts on leaf-spine architecture.

HPE seals the deal with SimpliVity acquisition

Terri McClure, an analyst at Enterprise Strategy Group in Milford, Mass., sees Hewlett Packard Enterprise in a hurry to enter the hyper-converged infrastructure market. Although HPE announced its acquisition of hyper-convergence vendor SimpliVity in January, the deal was finalized rapidly, closing on Feb. 17. ESG research indicates that hyper-converged offerings are figuring prominently in IT managers' plans. In fact, among managers responsible for cloud infrastructure, 85% indicated that they are planning to adopt hyper-converged technologies. However, in the same survey, 57% of respondents said that they still plan to deploy a more traditional three-tier infrastructure in five years' time.

"Hyper-converged adoption is on fire," McClure said. "HPE needed SimpliVity. It had hyper-converged solutions, the HC 250 and 380 -- but these were really pretty basic 'VDI vending machines' -- as HPE themselves called them. SimpliVity opens up HPE to address a much broader set of use cases," she added.

Currently, Dell EMC has a substantial lead in the hyper-converged market, along with Nutanix, which enjoys good name recognition and strong channel relationships. However, McClure sees challenges for Nutanix, which is hardware-agnostic and heavily reliant on Dell EMC as a sales channel. According to McClure, HPE's acquisition of SimpliVity will be much smoother than the Dell-EMC combination. That, along with HPE's strong relationships with VMware and Microsoft, as well as its ability to leverage a new data management layer, may give HPE a strong position in the market, McClure said.

Dig deeper into McClure's thoughts on HPE and SimpliVity.

If data centers only had a brain

Torsten Volk, an analyst at Enterprise Management Associates based in Boulder, Colo., sees a key factor lacking in today's software-defined data center (SDDC). The SDDC needs a "brain" capable of taking all the inputs it receives and delivering the information needed to support an organization's security needs. Artificial intelligence and machine learning will be part of this brain, but as Volk has said, these tools today only leverage a fraction of what's possible. The new generation of security tools must be able to go far beyond figuring out whether an AWS bill is too high or spotting metrics that might indicate a breach.

"Today, IT operations and security [can only deliver a fraction of] what's technically possible in AI and machine learning, simply as there are not enough software architects and developers who are able to plan and implement this type of solution," Volk said. Volk suggests that the SDDC "brain" should access data from all relevant business software, including accounting, email, ERP, CRM, public and private clouds as well as hypervisors. "The 'brain' will constantly contextualize this internal data with a much larger body of external information obtained from publicly accessible sources that will inject additional knowledge into the mix," he said.

Read more of Volk's thoughts on AI and machine learning for data centers. 
