
Virtual Windows NLB Nodes

by admin

I’ve recently been involved in an Exchange 2010 implementation project, specifically because the Client Access Servers were being built as virtual machines on ESXi and required network load balancing.

It’s not the first time that I’ve needed to configure NLB in a vSphere environment, but I always find myself trying to recall the various options and pitfalls, so I thought I’d do a quick write-up here. I’ll mainly be talking about Windows Server 2008 R2, but the same concepts apply to earlier versions of Windows Server.

There are essentially two modes in which you can configure NLB on Windows: Unicast and Multicast.

Multicast Network Load Balancing Mode

This is the preferred method for implementing NLB in a vSphere environment. When using NLB in Multicast mode you do not have to make any special configuration changes on the ESX(i) hosts, but you will have to configure the guest virtual machines and the physical switches. Because the cluster’s unicast IP address resolves to a multicast MAC address, which routers will not learn dynamically, you will need to create a static ARP entry on the switch where your default gateway interface or SVI resides.

Provided you have access to your physical switches, or you can persuade the network team to make the change for you :), this is the way to go. You can find out how to configure the static ARP entry here.
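
If it helps to visualise what that switch entry looks like: in multicast mode Windows builds the cluster MAC as 03-BF followed by the four octets of the cluster IP in hex. Below is a quick Python sketch that derives that MAC and prints a Cisco-style static ARP command. The VIP is made up and your switch’s syntax may differ, so treat it as an illustration rather than a recipe.

    # Sketch: derive the Windows NLB multicast cluster MAC from the cluster VIP
    # and print an example Cisco-IOS-style static ARP entry.
    # The VIP below is hypothetical; adjust for your own cluster and switch.

    def nlb_multicast_mac(cluster_ip):
        """Multicast-mode NLB uses 03-BF followed by the cluster IP in hex."""
        octets = [int(o) for o in cluster_ip.split(".")]
        return "03bf." + "{:02x}{:02x}.{:02x}{:02x}".format(*octets)

    if __name__ == "__main__":
        vip = "192.168.10.50"          # hypothetical cluster VIP
        mac = nlb_multicast_mac(vip)   # -> 03bf.c0a8.0a32
        # Static ARP entry on the L3 switch/router hosting the default gateway SVI:
        print("arp {0} {1} ARPA".format(vip, mac))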

Unicast Network Load Balancing Mode

The advantage of using NLB in Unicast mode is that you do not need to make any changes on the physical switch. However, you will need to change some settings on the ESX host. You tend to see a lot of NLB clusters configured to use unicast, as it is the easiest and most convenient mode to configure.

In order to use NLB in Unicast mode on an ESX host you have to make some changes to the vSwitch where the virtual machine’s portgroup is configured. Specifically, you have to set Forged Transmits to Accept and Notify Switches to No.
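
These settings are normally made in the vSphere Client, but if you would rather script the change, something along the lines of the following pyVmomi (VMware’s Python SDK) sketch should do it. The host name, credentials and vSwitch name are placeholders, and I’d test it in a lab before pointing it at anything important.

    # Rough sketch using pyVmomi: set Forged Transmits to Accept and
    # Notify Switches to No on the vSwitch carrying the NLB portgroup.
    # Host name, credentials and vSwitch name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    si = SmartConnect(host="esx01.example.local", user="root", pwd="password",
                      sslContext=ssl._create_unverified_context())
    try:
        host = si.content.searchIndex.FindByDnsName(None, "esx01.example.local", False)
        net_sys = host.configManager.networkSystem

        for vsw in net_sys.networkConfig.vswitch:
            if vsw.name != "vSwitch1":                     # vSwitch with the NLB portgroup
                continue
            spec = vsw.spec
            spec.policy.security.forgedTransmits = True    # Forged Transmits: Accept
            spec.policy.nicTeaming.notifySwitches = False  # Notify Switches: No
            net_sys.UpdateVirtualSwitch(vswitchName=vsw.name, spec=spec)
    finally:
        Disconnect(si)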

Due to these changes, there are some limitations to running NLB nodes on an ESX(i) host. These include:

  • All VMs belonging to the NLB cluster must be on the same host.
  • Each server participating in the NLB cluster should have an additional virtual network interface for management traffic, since all the servers in the cluster share the same IP and MAC address and therefore cannot communicate with each other over the cluster interface.

Having virtual guests running unicast NLB across different hosts in a vSphere cluster can lead to issues with layer 2 flooding and spanning tree, since each participating server essentially has the same IP and MAC.

 
