In this post I will cover how to deploy the Cisco n1000v switch on vSphere 5.5 – including the initial deployment of the switch appliance, some initial configuration, followed by deployment of a second VSM to make the deployment highly available.
If you’re still reading then you likely already know what the n1000v is and what it does, but if not there is some great info on it here. If you want to try it out, there is the free essentials edition, which I will be using throughout this post, specifically version 5.2(1)SV3(1.5a).
Deploying the n1000v Virtual Appliance
These steps will be familiar if you have deployed a virtual appliance before. Once you have downloaded the required files, start by selecting ‘Deploy OVF Template’ in vCenter, then select the ‘ova’ file:
On the next page, review and accept the EULA, then select a name and folder for the deployed appliance:
On the next page, you need to choose the deployment type. I chose to do a custom install:
Select which datastore to deploy the appliance to on the next page, before configuring the network details:
The n1000v VSM has three network connections, each with a different function. The ‘Control’ connection provides control-plane connectivity between the Nexus 1000v VSMs and the VEMs on the ESXi hosts. The management network provides connectivity to the virtual supervisor module itself – the management IP of the switch lives on this network and is used for SSH, Telnet and similar traffic. Finally, the packet network provides internal packet connectivity between the VSMs and the VEMs. In my lab I don’t have any VLANs set up, so I’ve attached all three connections to the same port group – but in a production deployment you would split these out onto separate VLANs.
Once the port groups have been selected, the last step is to input some configuration details. These include a domain ID (a number between 1 and 1023; if multiple VSMs are deployed they must share the same domain ID), the management IP and gateway, and a password for the ‘admin’ account:
On the last screen, review the selected configuration – then click ‘Finish’ to deploy the appliance.
Nexus 1000v Initial Configuration
Once the appliance has been deployed, power it on, then log in to the console using the ‘admin’ account:
We need to do a bunch of initial configuration tasks to get the switch up and running. Luckily, there is a setup wizard, which you can run from ‘configuration mode’:
switch# configure terminal
switch(config)# setup
First, we need to enter the HA role. I will eventually deploy a second VSM, so will set the role for the first one to primary:
Next, enter the domain ID:
Next, enter ‘yes’ to enter the basic configuration dialog:
The first option is to create another user account, followed by configuring SNMP, then setting a name for the new switch:
Next, we need to set the management IP address for the switch:
Enter ‘no’ when asked to configure advanced IP options. Next, you’ll be given the opportunity to enable SSH, Telnet, the HTTP server and NTP:
Next, select ‘yes’ to configure the SVS parameters and then enter the mode (L2 or L3), and the control and packet VLAN IDs:
Configure svs domain parameters? (yes/no) [y]: yes
Enter SVS Control mode (L2 / L3) : L2
Enter control vlan <1-3967, 4048-4093> : 1
Enter packet vlan <1-3967, 4048-4093> : 1
I have used VLAN 1 for both the packet and control VLANs because, as mentioned earlier, this is a lab environment. Next, you will be given a summary of all the configuration choices made, which will then need to be saved:
The new configuration is saved to non-volatile storage.
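The choices made in the setup dialog end up in the running configuration as an svs-domain block, which is worth reviewing before moving on. This is a rough sketch of what that block looks like – I’m assuming a domain ID of 100 here, along with the L2 mode and VLAN 1 values chosen above; your values will differ:

```
nexus1000v-1# show running-config | section svs-domain
svs-domain
  domain id 100
  control vlan 1
  packet vlan 1
  svs mode L2
```

If you need to change any of these later, they can be edited under ‘svs-domain’ in configuration mode.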
Connecting the Nexus 1000v Switch to vCenter
A plug-in needs to be imported into vCenter before we can use the switch. Start by putting the IP address of the new switch into a browser:
Download the cisco_nexus_1000v_extension.xml file; this needs to be registered as a plug-in in vCenter. Open the ‘Plug-In Manager’, then right-click and select ‘New Plug-in..’:
Select the XML file, then click ‘Register Plug-in’:
Once the plug-in has been registered successfully we need to go back to the console of the n1000v virtual machine, to set it up so that it will communicate with vCenter. To do so, enter the following commands:
svs connection <name of the connection>
  protocol vmware-vim
  remote ip address <vCenter IP> port 80
  vmware dvs datacenter-name <datacenter name>
  max-ports 8192
  connect
Once done you should see a number of tasks start to run in vCenter:
Running the following command on the switch will list the connection status:
nexus1000v-1# show svs connections

connection myvcenter:
    ip address: 172.16.1.3
    remote port: 80
    protocol: vmware-vim https
    certificate: default
    datacenter name: vLab
    admin:
    max-ports: 8192
    DVS uuid: bc 76 1c 50 f9 1d 91 2d-51 68 26 0d 4f 29 95 83
    config status: Enabled
    operational status: Connected
    sync status: Complete
    version: VMware vCenter Server 5.5.0 build-1891313
    vc-uuid: 1101A338-77F6-4D3E-A6BB-6B00974B7EC1
    ssl-cert: self-signed or not authenticated
The new switch will also now show up in vCenter:
Configuring Uplink Group and Port Groups on the n1000v
I’m not going to go into a lot of detail here; instead I will share the configuration I used to create a new uplink group and a new port group on the n1000v. This is just a sample configuration which I found to work in my lab – in a production deployment there would be additional configuration steps, such as configuring EtherChannel. To create a new ‘uplink’ group I ran the following commands:
port-profile type ethernet system-uplink
  switchport mode trunk
  switchport trunk allowed vlan 1,80,90
  no shutdown
  system vlan 1,80,90
  state enabled
  vmware port-group
This is an ethernet port-profile and will be used for the ESXi hosts’ uplinks. The other type of port-profile is vethernet, which is used to create port groups. I used the following sample config to create a new port group:
port-profile type vethernet Prod-VMS
  switchport mode access
  switchport access vlan 1
  no shutdown
  state enabled
  vmware port-group
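With both profiles in place, the port-profile show commands are a quick way to confirm they were created and are in the enabled state; for example:

```
nexus1000v-1# show port-profile brief
nexus1000v-1# show port-profile name Prod-VMS
```

The second command shows the full configuration of a single profile, including which interfaces have been assigned to it.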
Looking at the switch in vCenter we now have an uplink group and a new port group to which we can attach virtual machines:
Connecting ESXi Hosts to the Nexus 1000v Switch
We can now connect an ESXi host to the n1000v switch. This process is much the same as when adding a host to a VMware dvSwitch. To do so, right click on the switch, and select ‘Add Host’:
Select the host to add, then select the host’s VMnics to add to the switch, along with which uplink group they are to be a member of:
I don’t want to migrate any host or virtual machine networking at this time:
At the end of the wizard, click ‘Finish’ to add the host to the switch. A bunch of tasks will start running, including Update Manager being invoked to install the VEM – the Virtual Ethernet Module – onto the ESXi host:
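Once the VEM install completes, it can be verified from both ends. On the VSM, ‘show module’ should list the host as a new module alongside the two supervisor slots; on the ESXi host itself (via SSH or the ESXi Shell), ‘vem status’ reports whether the VEM is loaded and which vmnics it has claimed:

```
nexus1000v-1# show module

~ # vem status -v
```
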
Once done, the host will be attached to the new vSwitch. And that’s it for now. This post ended up a bit longer than I thought, so I’ll cover adding a second VSM in my next post.