The vSphere Auto Deploy Server is used for provisioning ESXi hosts. It combines a PXE boot service with vSphere host profiles to provision and customize hosts, similar in concept to Citrix Provisioning Server. Auto Deploy can perform both stateful and stateless ESXi host installations. A stateful installation is one where the ESXi hypervisor installation persists across reboots, whilst a stateless installation does not persist across reboots and runs only from the RAM of the physical host.
Before we can install the Auto Deploy Server, there are a few prerequisites that must be met; these are detailed in VMware's Auto Deploy guide. First, verify that the host machine has a supported processor and operating system. Auto Deploy supports the same processors and operating systems as vCenter Server.
You should have the following installed:
- Windows Installer 3.0
- Microsoft .NET 2.0
- Microsoft PowerShell 2.0
- vSphere PowerCLI
- TFTP Server
In addition, it’s recommended that you install VMware’s Syslog Server and the ESXi Dump Collector (detailed shortly). Many of these prerequisites will already be met if you are installing on your vCenter server.
Installing the ESXi Dump Collector
The Dump Collector is a recommended install when using Auto Deploy. When an ESXi host core dumps, the dump file is normally stored on the server’s local storage. With stateless hosts provisioned by Auto Deploy this isn’t possible, and the core dump would otherwise be held only in RAM and lost. The Dump Collector ensures that the core dump isn’t lost, so that it can aid in troubleshooting. You can start the install from the vCenter Support Tools section of the vCenter media:
You will get what should be a familiar install wizard, so I’ll not replicate all the screens here. Ones to pay attention to are the install location and the repository size:
Next you can choose what type of installation you want to use:
As this was being installed on my lab vCenter server I chose the integrated installation. You will then be prompted for credentials to connect to your vCenter installation:
The next screen will ask you to specify a port for the dump collector to use. I left this at the default 6500.
The next screen will ask how you would like the dump server to be identified on the network. I used the default, which was my server’s hostname.
Click finish to complete the install process.
Configuring and Testing the ESXi Dump Collector
When used with Auto Deploy you will likely be using host profiles to configure your hosts to use the Dump Collector. In the meantime, we can test that it’s working by configuring another host to use it. When not using host profiles, the configuration is done using ESXCLI. After logging on to the host, either locally or over SSH, run the following command to view the current configuration:
esxcli system coredump network get
This should show that no configuration has been set. To configure the host to use the dump collector:
esxcli system coredump network set -v vmk0 --server-ipv4=192.168.0.239
The commands and their output, as seen on the console:
We can see from the output that the host is now configured to use the dump collector. We can test a connection to it by running the following:
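Putting the pieces together, the configuration sequence looks roughly like this. This is a sketch: the VMkernel interface (vmk0), the collector’s IP address (192.168.0.239), and the port are from my lab and will differ in your environment:

```shell
# Point the host at the Dump Collector via the vmk0 management interface
esxcli system coredump network set -v vmk0 --server-ipv4=192.168.0.239 --server-port=6500

# The configuration is not active until explicitly enabled
esxcli system coredump network set --enable true

# Confirm the settings took effect
esxcli system coredump network get

# Verify connectivity to the collector (this subcommand is available in ESXi 5.1 and later)
esxcli system coredump network check
```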
Finally, to run a true test we can force our host to crash by running:
vsish -e set /reliability/crashMe/Panic 1
We can see from the PSOD output that the coredump is being sent to the collector:
Installing the Auto Deploy Server
Now we can move on to installing the Auto Deploy Server. The install is straightforward and fairly similar to the Dump Collector install, so it’s not necessary to go through all the screens here. There are a couple worth mentioning, however. The first is the install destination, as it gives you the opportunity to choose the size and location of the Auto Deploy repository. This is where your Auto Deploy images will be stored.
In my lab environment I kept the default of 2GB. The next screen will prompt you for credentials to connect to your vCenter server. This is followed by screens where you can configure the ports you want Auto Deploy to use, and how you want it to be identified on the network:
On the next screen, click finish to complete the installation. Once that is done you should see some new options on your vCenter home screen:
Installing and Configuring a TFTP Server For Use with Auto Deploy
The last part of the build is to install and configure a TFTP server for use with Auto Deploy, and to configure your DHCP scope to support booting from the files it holds. I decided to use the Cisco TFTP server as I already had it in place in my lab; however, there are many freely available on the internet. Once it’s installed and running, make a note of your TFTP server’s root directory, shown here in my install:
The next step is to get the Auto Deploy boot files, and extract them to our TFTP server’s root directory. We will start by clicking on the Auto Deploy icon on vCenter’s home screen to access the Auto Deploy status screen:
Click on ‘Download TFTP Boot Zip’ to download the boot files for your Auto Deploy configuration. You can then copy/extract the files to your TFTP server’s root directory.
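If you prefer to script this step, the boot zip can also be fetched directly from the Auto Deploy web server. A sketch, assuming the default Auto Deploy port of 6502; the vCenter hostname and TFTP root path here are placeholders for my lab and should be adjusted for your environment:

```shell
# Download the TFTP boot zip from the Auto Deploy server (self-signed certificate, hence -k)
curl -k -o deploy-tftp.zip https://vcenter.lab.local:6502/vmw/rbd/deploy-tftp.zip

# Extract the boot files into the TFTP server's root directory
unzip deploy-tftp.zip -d /path/to/tftp-root
```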
Finally, in order to have your hosts boot using the TFTP server’s files, you need to configure DHCP appropriately. In my lab I used a Windows DHCP server, and created a new scope for the subnet where I was running Auto Deploy. Configure DHCP options 66 and 67 to point at your TFTP server:
Option 66 should contain the IP address of your TFTP server, whilst Option 67 should contain the filename of the boot file, in this case ‘undionly.kpxe.vmw-hardwired’.
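On a Windows DHCP server, these options can also be set from the command line rather than the GUI. A sketch, assuming a hypothetical scope of 192.168.0.0 and the TFTP server address and boot file name from my lab:

```shell
REM Option 66: IP address of the TFTP server
netsh dhcp server scope 192.168.0.0 set optionvalue 66 STRING "192.168.0.239"

REM Option 67: boot file name served by the TFTP server
netsh dhcp server scope 192.168.0.0 set optionvalue 67 STRING "undionly.kpxe.vmw-hardwired"
```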
At this point you should be able to boot up a host, have it receive a DHCP lease, and see it connect to your TFTP server. Though, as we have not configured an ESXi image for Auto Deploy yet, that is as far as it will go.
References and Useful Links
VMware Auto Deploy Guide
Troubleshooting the Dump Collector Service