HP ProLiant Network Adapter Teaming Explained

Yuri breaks down how HP ProLiant Network Adapter Teaming works, covering the different types of teaming and when each is appropriate.

HP ProLiant Network Adapter Teaming is a software-based technology used to increase a server’s network availability and performance. It provides network adapter, network port, network cable, switch, and communication path fault recovery, in addition to transmit and receive load balancing.

The objective of HP ProLiant Network Adapter Teaming is to provide network fault tolerance and load balancing. These two objectives are accomplished by teaming together two or more server network adapter ports. The term “team” refers to the concept of multiple server network adapters from the same server working together as a single server network adapter.

Types of Teaming

When deciding which teaming configuration best suits your needs, consider both your technical requirements and the available network infrastructure. There are four basic types of teaming configurations:

  • Network Fault Tolerance
  • Transmit Load Balancing with Fault Tolerance
  • Switch-assisted Load Balancing with Fault Tolerance
  • Switch-assisted Dual Channel Load Balancing

Network Fault Tolerance
When using network fault tolerance (NFT), between two and eight physical NICs are teamed together to operate as a single virtual network adapter. Only one teamed port (the Primary teamed port) is used for both transmit and receive communication with the server. The remaining adapters are considered to be stand-by (or secondary adapters) and are referred to as Non-Primary teamed ports. Non-Primary teamed ports remain idle unless the Primary teamed port fails. All teamed ports may transmit and receive heartbeats, including Non-Primary adapters.
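The NFT behavior described above can be sketched in code. This is an illustrative model only, not HP driver code; the port names are hypothetical, and real heartbeat and link detection are far more involved:

```python
# Illustrative sketch of NFT: one Primary port carries all traffic,
# Non-Primary ports sit idle but may still exchange heartbeats, and a
# Primary failure promotes a standby. Not HP's actual driver logic.

class NftTeam:
    def __init__(self, ports):
        assert 2 <= len(ports) <= 8, "NFT teams span two to eight ports"
        self.state = {p: "up" for p in ports}
        self.primary = ports[0]

    def active_port(self):
        # All transmit and receive traffic uses the Primary only.
        return self.primary

    def heartbeat_ports(self):
        # Every operational member, Primary or Non-Primary, may
        # transmit and receive heartbeat frames.
        return [p for p, s in self.state.items() if s == "up"]

    def fail(self, port):
        self.state[port] = "down"
        if port == self.primary:
            standbys = [p for p, s in self.state.items() if s == "up"]
            if standbys:
                self.primary = standbys[0]  # promote a Non-Primary port

team = NftTeam(["nic0", "nic1", "nic2"])
team.fail("nic0")          # Primary fails...
print(team.active_port())  # ...and a former Non-Primary takes over
```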

Transmit Load Balancing with Fault Tolerance
Transmit Load Balancing mode incorporates all the features of NFT, plus Transmit Load Balancing. In this mode, two to eight adapters may be teamed together to function as a single virtual network adapter. The load-balancing algorithm used in TLB allows the server to load balance traffic transmitted from the server. However, traffic received by the server is not load balanced, meaning the Primary Adapter is responsible for receiving all traffic destined for the server. In addition, only IP traffic is load balanced.
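The transmit-only nature of TLB can be sketched as follows. This is a simplified illustration, not HP's actual algorithm; the exact hash the driver uses is implementation-specific, and the port names are hypothetical:

```python
# Illustrative sketch of TLB-style transmit load balancing: outbound
# frames are spread across teamed ports by hashing the destination IP,
# while ALL inbound traffic stays on the Primary Adapter.

import ipaddress

class TlbTeam:
    def __init__(self, ports):
        assert 2 <= len(ports) <= 8, "TLB teams span two to eight ports"
        self.ports = list(ports)
        self.primary = ports[0]

    def transmit_port(self, dest_ip: str) -> str:
        # Hypothetical hash: last octet modulo team size. The point is
        # only that each destination maps deterministically to one port,
        # and that only IP traffic participates in the balancing.
        last_octet = int(ipaddress.ip_address(dest_ip)) & 0xFF
        return self.ports[last_octet % len(self.ports)]

    def receive_port(self) -> str:
        # Received traffic is never balanced under TLB.
        return self.primary

team = TlbTeam(["nic0", "nic1"])
print(team.transmit_port("10.0.0.4"))  # even last octet -> nic0
print(team.transmit_port("10.0.0.5"))  # odd last octet  -> nic1
print(team.receive_port())             # always the Primary (nic0)
```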

Switch-assisted Load Balancing with Fault Tolerance
Switch-assisted Load Balancing mode incorporates all the features of NFT and TLB modes, and adds load balancing of incoming traffic. In this mode, two to eight adapters may be teamed together as a single virtual network adapter. The load-balancing algorithm used in SLB allows for the load balancing of both the server’s transmit and receive traffic. Unlike TLB, which only load balances IP traffic, SLB load balances all traffic regardless of protocol. Switch-assisted Load Balancing (SLB) is an HP term for an industry-standard technology that groups multiple network adapters into one virtual network adapter and multiple switch ports into one virtual switch port. HP’s SLB technology works with multiple switch vendors’ implementations. Note that Switch-assisted Load Balancing (SLB) is not the same thing as Server Load Balancing (SLB) as used by some switch vendors; Switch-assisted Load Balancing operates independently of, and in conjunction with, Server Load Balancing.
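A minimal sketch of what distinguishes SLB from the other modes: every member shares one team MAC address, the teaming driver hashes outbound traffic of any protocol, and the switch's own trunk hash (not shown) spreads inbound traffic. Again, this is an illustration with hypothetical names, not vendor code:

```python
# Illustrative sketch of SLB: no Primary/Non-Primary distinction, one
# shared MAC for the whole team, and driver-side hashing of transmit
# traffic for ALL protocols. The switch balances the receive side.

class SlbTeam:
    TEAM_MAC = "00:11:22:33:44:55"  # hypothetical shared team MAC

    def __init__(self, ports):
        assert 2 <= len(ports) <= 8, "SLB teams span two to eight ports"
        self.ports = list(ports)

    def transmit_port(self, conversation_id: int) -> str:
        # Driver-side hash; unlike TLB this applies to any protocol,
        # not just IP. The real hash is implementation-specific.
        return self.ports[conversation_id % len(self.ports)]

    def mac_of(self, port: str) -> str:
        # All members answer to the same team MAC, so the switch's
        # trunking algorithm may deliver a frame to any of them.
        return self.TEAM_MAC

team = SlbTeam(["nic0", "nic1"])
print({team.mac_of(p) for p in team.ports})  # one MAC for the team
```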

Let’s take a closer look at these various modes.

Network Fault Tolerance (NFT)

There are three operating modes available for NFT Teams: Manual, Fail On Fault, and Preferred Primary.

Manual Mode
This mode is used for user-initiated failovers. When set, Manual mode still allows an NFT Team to fail over automatically during events that would normally cause a failover, but it also allows the Team to be failed over manually with the click of a button. Manual mode is normally used for troubleshooting.

Fail On Fault Mode
The second mode available for NFT is Fail On Fault. In this mode, an NFT Team will initiate a failover from the Primary Adapter to an operational Non-Primary Adapter whenever a failover event occurs on the Primary Adapter. When the failover occurs, the two adapters swap MAC addresses so the Team remains known to the network by the same MAC address. The new Primary Adapter is considered just as functional as the old Primary Adapter. If the old Primary Adapter is restored, it becomes a Non-Primary Adapter for the Team but no MAC address changes are made unless there is another failover event on the Primary Adapter.
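The MAC address swap at the heart of a Fail On Fault failover can be illustrated with a small sketch. This is a conceptual model with hypothetical adapter names and addresses, not HP driver code:

```python
# Illustrative sketch of a Fail On Fault failover: the failed Primary
# and the promoted Non-Primary swap MAC addresses, so the Team stays
# known to the network by the same MAC address throughout.

def fail_on_fault(macs: dict, primary: str, standby: str) -> str:
    """Swap the two adapters' MAC addresses; return the new Primary."""
    macs[primary], macs[standby] = macs[standby], macs[primary]
    return standby

macs = {"nic0": "AA:AA:AA:AA:AA:00", "nic1": "AA:AA:AA:AA:AA:01"}
new_primary = fail_on_fault(macs, "nic0", "nic1")
print(macs[new_primary])  # the Team MAC now lives on the new Primary
```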

Preferred Primary Mode
The last mode available for NFT is Preferred Primary mode. When choosing Preferred Primary mode, the operator is presented with a drop down box to select the “Preferred Primary Adapter”. The operator should choose the adapter that, for a particular reason, is best suited to be the Primary Adapter. When an adapter is chosen as the Preferred Primary Adapter, it will be used as the Primary Adapter whenever it is in an operational state. If the Preferred Primary Adapter experiences a failover event, the NFT Team fails over to a Non-Primary Adapter. If the Preferred Primary Adapter is restored, the Team will then initiate a failback to the Preferred Primary Adapter.
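The failover/failback behavior of Preferred Primary mode can be sketched as follows; this is an illustrative model with hypothetical names, not the actual teaming driver:

```python
# Illustrative sketch of Preferred Primary mode: the chosen port is
# Primary whenever operational; a failure triggers failover to a
# standby, and recovery of the preferred port triggers failback.

class PreferredPrimaryTeam:
    def __init__(self, ports, preferred):
        self.up = {p: True for p in ports}
        self.preferred = preferred
        self.primary = preferred

    def fail(self, port):
        self.up[port] = False
        if port == self.primary:
            # Fail over to the first operational Non-Primary port.
            self.primary = next(p for p, ok in self.up.items() if ok)

    def restore(self, port):
        self.up[port] = True
        if port == self.preferred:
            self.primary = self.preferred  # automatic failback

team = PreferredPrimaryTeam(["nic0", "nic1"], preferred="nic0")
team.fail("nic0")
print(team.primary)   # failover to a standby
team.restore("nic0")
print(team.primary)   # failback to the preferred port
```

Note that plain Fail On Fault mode does not fail back like this: the restored adapter simply becomes a Non-Primary member until the next failover event.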

Transmit Load Balancing (TLB)

With TLB, the recovery mechanism is very similar to the NFT failover behavior discussed above in the section titled “Fail On Fault.” In a two-port TLB Team, the Primary Adapter receives all data frames, while the Non-Primary Adapter receives only heartbeat frames. Both adapters are capable of transmitting data frames.

In the event of a failover, the Non-Primary Adapter becomes the Primary Adapter and assumes the MAC address of the Team; in effect, the two adapters swap MAC addresses. As noted earlier, only IP traffic is load balanced, and traffic received by the server is not load balanced.

Switch-assisted Load Balancing (SLB)

All members of the SLB Team transmit and receive frames with the same MAC Address. Also, there is no concept of a Primary or Non-Primary Adapter as there is in NFT and TLB. With SLB, there are no heartbeat frames, and consequently no heartbeat failovers. In a two-port SLB Team, all members are capable of receiving data frames (based on the switch’s load balancing algorithm), and transmitting data frames (based on the Teaming Driver’s load balancing algorithm).

After a failover event in a two-port Team, only one adapter is currently working, so all transmit traffic is sent using it. All receive traffic is determined by the switch, which should detect that only one adapter is working. If a failed adapter is restored, all transmit and receive traffic is once again load balanced among all adapters. All receive traffic is still determined by the switch algorithm which should detect that both adapters are functional. If the switch sends a frame destined for the Team MAC address to any of the "operational" adapters in the Team, the adapter will receive it.

The HP Network Adapter Teaming driver does not control frames received; it only load balances the transmit traffic. With SLB, all protocols are load balanced, not just IP. To use SLB, you’ll need a switch that supports some form of port trunking. SLB does not support switch redundancy, since all ports in a team must be connected to the same switch. Also, SLB does not support any kind of port trunking auto-configuration protocol. If automatic port trunking is required, the 802.3ad Dynamic team type should be used with an IEEE 802.3ad Dynamic-capable switch.

What does this mean for your SBC environment?

In an ideal world, your terminal servers would be equipped with at least two gigabit network adapters, and the network infrastructure design should be well thought through. Teaming seems easy enough, but as you’ve read, there are quite a few things to take into consideration. However, sometimes teaming isn’t the best solution for your terminal servers. All this teaming seems really cool, but why use it in the first place? For network fault tolerance and load balancing? Isn’t that also what your terminal servers or Citrix Presentation Servers were meant to provide?

First of all, realize that your server-based computing environment should probably be viewed as a single large, fault-tolerant, redundant, uniform user front office. Microsoft offers basic NLB (with some drawbacks, I must admit), and Citrix offers true load balancing. This covers load balancing your users in a rudimentary form with no switch configuration required. The challenge now is to use the networking architecture in the most effective and efficient way.

Why not configure both network adapters separately? Use one network adapter to carry your users’ RDP or ICA traffic and the other to carry back office traffic. This way you can configure security settings for RDP or ICA on a per-network adapter basis. The second network adapter could be configured with proper DNS suffixes to handle your name resolution and back office traffic. All file transfers, user profile loading and unloading, policy deployments, and so on would use only one network adapter. By functionally splitting your terminal server in half, you now have two network adapters to use to the fullest. This means protocol security per network adapter and protocol, performance tuning on a per-network adapter and protocol basis, and performance monitoring on a per-network adapter basis. No more “why is my session sometimes freezing up?” Pinpoint the cause of your problem more accurately and work more efficiently and effectively.

This was last published in April 2006

Join the conversation



decent article.
I can't say I agree with your conclusion; surely the best solution is to have your network cards teamed and set up to fail on fault. This would mean that your server continues to work and gives you time to plan downtime and replace the faulty part. If you set your cards up so that one listens for ICA and one for RDP, then if your ICA card fails you're basically stuffed, and all your users are affected because the other servers have to host the extra load while you try to fix the issue. Not using teaming is a bit like saying "I don't see the point of redundant power supplies" or "why bother mirroring your disks."
I was working with HP on this. Teaming is actually not about sharing bandwidth or providing load balancing across both network cards. The only plus to teaming is high availability. We are actually dropping the teaming project at our site. Thanks.
I agree, it is a poor conclusion. The KISS principle should be applied here. Not to mention the suggested solution makes automated server provisioning much more difficult.

You must know, the last paragraph is not a conclusion!
It's merely an option to consider. As you can see, there are many factors to take into consideration: switch redundancy, 802.3ad Link Aggregation, path redundancy and many more. The sum of all these parameters, along with your network infrastructure and architecture design, determines what would be the best solution for you and your customers.
When installing a server using OpenManage or SmartStart, or any other managed installation method you are often confronted with some sort of NIC teaming software... I'm just trying to alter the mindset concerning networking your terminal servers.
I'm not trying to make a statement about redundancy either. YOU choose what is best.
Teaming for fault tolerance sounds nice, but are there any big Citrix Farms using it across the board??
Also, if you ever have certain issues with your Farm that could/might be caused by NIC Teaming, Citrix will ask you to remove the NIC Teaming. Which is a manual process (except with Compaq/HP Servers), and in a large farm removing the NIC Teaming would take lots and lots of time.
We've implemented teaming at one of our customers' sites for their Citrix farm. It has allowed us to keep the servers up and running in the event of a switch, patch cable, or interface failure. The drawback is you can't do any form of load balancing unless you have managed switches. We originally had the teaming configured to support TLB with auto configuration. We were getting frequent disconnects that were eventually traced back to the clients losing connectivity with the server. We changed to NFT with a primary and haven't had an unintentional disconnect yet.
Rather than the author not seeing the point he is proposing a Citrix farm is already a "single large, fault tolerant, redundant" system. Just like redundant power supplies and mirrored disks. I believe this to be correct also. In a server with two NIC's you can divide and conquer your traffic by having the low volume but latency sensitive ICA stuff flow freely out of one NIC and the bulky file/print stuff fight it out on the other.
We currently implement teaming on all our Citrix servers (100+) using fault tolerance only. The only problem has come because of backdated Nortel switch firmware not being able to perform dead uplink detection. I will feel very confident in the teamed approach once all the firmware on the switches has been upgraded to support dead uplink detection. When an uplink failure occurred on a switch without the code update, the NICs did not fail over, because they still maintained link to the switches. They needed to be manually failed over to pick up the secondary NIC.
There are two very simple rules for teaming.

1. Keep it simple.
2. Test it.

So you configure your 2 NICs for redundancy / failover (Network Fault Tolerance), and you test this by removing the cables and seeing what happens. That is the correct configuration for 98% of all servers.

Anything else is complex, and involves things like switch configuration, network performance bottlenecks, and other things. If you need to consider that you'll need to go way beyond the scope of this article.

Don't try and be clever and configure anything else.
Don't confuse Network and server load balancing.
Don't say I didn't tell you when it goes wrong.
Hello community,
we're building a new Citrix release for a customer, where we want to implement automatic teaming during installation using enteo netinstall and the common HP tools (cqniccmd.exe). Teaming works fine, but after the reboot a problem arises: we experience DCOM error 10016, saying that the NetworkService account isn't able to launch a DCOM server (in this case netman) due to missing or wrong permissions.
Has anyone heard of this or have ideas how to solve it? At the moment we are trying to solve the problem by giving the NetworkService account the needed permissions through dcomcnfg; we'll see if this helps.
Any ideas will be greatly appreciated.
Thanks, Joern 


Hi, were the disconnects you saw under TLB with web access users?
Dual homing (two separate NICs/IPs) for Citrix servers is not something I could recommend. It creates various issues around which NIC registers in DNS/WINS etc., and which traffic you want to route in/out of each NIC. You can overcome these issues, but it is complicated to control these factors and ensure the server behaves exactly as you require.
I have deployed 200+ HP ProLiants over the last 5 years into Cisco switched environments using Transmit Load Balancing or Network Fault Tolerance. The benefits of high availability are proven when each server connects one NIC into each core switch. It mainly avoids issues with dodgy NIC/switch ports, patch cables, someone unplugging or accidentally knocking the wrong cable, etc.
Are you imaging these servers with the teaming configured? If so, then that's your problem. I have had issues when using Ghost to image servers with HP teaming configured.
From experience, teaming is only effective when deployed in managed environments using appliances that support teaming protocols such as LACP, with speeds forced on the network cards. Cisco Catalysts and HP ProCurve units do a more than adequate job.
No imaging. We fixed it by adding the local NetworkService account to the DCOM config for netman, allowing it to launch and activate netman.
Don't know why, but it works...
I have a CPQ DL360 server with dual 1Gb NICs. This server is the backup server and is running NetBackup. I am thinking I want to team the NICs so I can get 2Gb of throughput. Any thoughts on this?
What's "Switch-assisted Dual Channel Load Balancing"? Do you know how to load balance to different switches or switch stacks using teaming? Thanks in advance!
I have a problem with the HP NCU; it's giving a C++ error. When I tried to install the NCU again, the server boots only in safe mode. Safe mode with networking is also not working, and last known good configuration is also not working. Is the only option to repair the OS?
Regarding NFT failover: does the Non-Primary adapter's MAC address have an entry in the CAM table of the switch?
I have tried all types of load balancing to get a higher-speed network, but unfortunately I did not get more than 1Gb as a maximum speed. Although the load-balanced team shows that I'm connected at 2Gb or 4Gb, when I tried to receive a bulk of network traffic the total across all the team members didn't exceed 1Gb.
I noticed that all the team members carry traffic, but the total is not more than 1Gb.
Which configuration allows the "team" to have a single static IP? In other words, if my primary fails I want to maintain the same IP.
A DHCP reservation for the team's MAC or a static address for the team should work.

Thanks for the article.
In our environment we have increasing numbers of servers using NIC teaming. Part of our network management involves scripts that interrogate hardware on the network using SNMP. With NIC teaming and our current scripts we only see one interface for the server device; however, I have been tasked with improving the scripts so that we can tell, for instance, how many physical adapters are in the server, and their MAC addresses. Is this even possible? Any ideas on what might be the best approach?

Try using WMI scripts for this purpose, they will note all physical adapters for you.

Windows Network Load Balancing and NIC teaming don't go well together. I would recommend you remove it, or if you face any issues in WLBS, then remove the teaming, update the drivers and then check. If that works, check with teaming also. If teaming makes it fail, then teaming is creating the problems.




I have a query regarding teaming on an HP DL380. I teamed 2 NICs and gave the team an IP address. Now when I check the MAC address of the teamed IP address, it shows the MAC address of the second LAN card, but if I disable LAN card 2 it switches over to the first LAN card's MAC address.

My question is: how do I set the team's MAC address to that of NIC card 1?


Any help will be appreciated...

I have an HP ProLiant ML350 running Windows 2000 Server with SP4, and I have installed 2 Intel Pro/1000 MT server adapters for the purpose of creating a team. After installing the latest software (PRO2KXP.exe) I ran the wizard to create the team, but the wizard hangs on "configuring team, please wait". If I cancel, it adds a virtual adapter but does not create the team. I have uninstalled and reinstalled a few times; each time the network connection number increases (Local Connection #9, Local Connection #13). I decided to remove the Intel adapters and software and add NC7770 Broadcom gigabit NICs. Again I installed the Network Management Utility and updated the drivers to the current version (9.52), but again when I tried to create the team it failed, creating a virtual adapter without teaming.

Any help I can get on this would be greatly appreciated!!
I would look at the Broadcom utility for teaming if you are using mixed NICs. The Broadcom software will be a little more forgiving. I hope you have solved your problem by now, but just in case, this should help you out.


I tried to get the physical adapters via WMI. I used Win32_NetworkAdapter and Win32_NetworkAdapterConfiguration. All I could get is the virtual adapter, i.e. the adapter created by NIC teaming. I couldn't get the real MAC addresses of the physical adapters.

Any ideas how I can get those?



If you have HP servers you've answered the question yourself. If you look at the cpqnic.mib in SNMP (it's on the SmartStart CD) you'll see you can query nearly everything about the physical adapters and the teams they make up.

I have an HP DL380 G4 server with HP NIC teaming (2 NICs connecting to the same switch) with the teaming type set to automatic. I am getting the following two events every 5 seconds, with one of the NICs dropping the link every 5 seconds. Please help me identify the problem. I have confirmed that it is not because of the network cable, patch panel, switch or switch configuration. The issue still exists after breaking and reconfiguring the team.

Event Type: Warning
Event Source: CPQTeamMP
Event Category: None
Event ID: 434
Date: 5/28/2008
Time: 10:25:51 AM
User: N/A
Computer: LDLSFNP10001
Description: HP Network Team #1: PROBLEM: A non-Primary Network Link is not receiving. Receive-path validation has been enabled for this Team by selecting the Enable receive-path validation Heartbeat Setting. ACTION: Please check your cabling to the link partner. Check the switch port status, including verifying that the switch port is not configured as a Switch-assist Channel. Generate broadcast traffic on the network to test whether these are being received. Also make sure all teamed NICs are on the same broadcast domain. Run diagnostics to test the card. Drop the NIC from the team and determine whether it is receiving broadcast traffic in that configuration.

Event Type: Information
Event Source: CPQTeamMP
Event Category: None
Event ID: 436
Date: 5/28/2008
Time: 10:25:57 AM
User: N/A
Computer: LDLSFNP10001
Description: HP Network Team #1: A previously failed Network Link's receive status has been restored.

Please advise.


Hi Robbie, we've got the same issue as well. We even replaced the mainboard and all the other components, changed the switch port, network cables, etc. We still get the "PROBLEM: A non-Primary Network Link is not receiving" error in the event log.


When you say "I broke the teaming and reconfigured it, still the issue persists", does that mean the issue started occurring only after re-configuring the teaming? If you break the teaming and everything works fine, I would strongly recommend you approach HP support. Merely looking at event logs, or breaking and reconfiguring, won't help.


We're having an internal debate here: if we have each NIC plugged into a different core switch, will TLB work? I'm assuming it does, because it's transmit-only, and I would assume HP wouldn't devise a system that could only be used when both NICs are plugged into the same switch (transferring the risk of failure to that single device instead). Can anyone confirm? Also, what setting should we use, MAC/IP or auto?


re: Robbie — Launch the HP Network Configuration Utility, and in the team properties window select the "Settings" tab. Disable transmit and receive path validation. Make sure you know whether your teaming method is set to SLB, TLB or NFT. Grabar.