I ran into this issue while working on a particular platform; it was a weird one, so I thought I would document it.
The Issue
I had a Windows Server 2019 template that I had created and tested deploying within vCenter, no issues at all. I then proceeded to create a blueprint in vRA (vRealize Automation) to allow Windows Server 2019 VMs to be built. I tested a build and it failed (timed out and was deleted): Windows was just sitting there with the spinning dots, doing nothing.
As with all troubleshooting, I started back at the beginning with the basics. The Windows Server 2019 template was on a vSphere 6.5 host, while the vRA deployments, I noticed, were going to vSphere 6.0 hosts. This shouldn’t be a problem, as Windows Server 2019 is supported on vSphere 6.0, but I did a test deployment within vCenter and got the same result: Windows was just sitting there with the spinning dots, doing nothing. This at least allowed me to discount a vRA issue, so progress was made.
Next I looked a bit closer during the VM deployment, and it appeared the VM passed the initial clone and power-on but failed (spinning dots) post-sysprep. Great (I thought): generally the sysprep logs will tell you what is wrong (although they are sometimes cryptic). In this case, no; it was just a generic MRTGeneralize error.
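(For reference, the sysprep logs I mean are the standard ones Windows writes during generalize, normally setupact.log and setuperr.log under C:\Windows\System32\Sysprep\Panther, with further setup logs under C:\Windows\Panther; those are the usual places to start looking.)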
Rather than burning through time looking at sysprep issues, and because deploying to the vSphere 6.5 hosts worked (so it looked to be specific to the vSphere 6.0 hosts), I decided to move on. At this point I took a look at the other Windows templates (2008 R2, 2012 R2 & 2016), as these had already been working for ages. Same setup: template sitting on a vSphere 6.5 host and being deployed to vSphere 6.0 hosts using vRA.
The only difference was the hardware version. The 2012 R2 & 2016 templates were hardware version 10 (vSphere 5.5), while the Windows 2019 template was hardware version 11 (vSphere 6.0). There was no reason to suggest this would be an issue, since hardware version 11 is still within what the lowest common denominator (the vSphere 6.0 hosts) supports; however, any difference at all usually needs to be investigated.
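As an aside, if you want to confirm a template’s hardware version without clicking through the UI, one quick way (assuming SSH access to a host that can see the datastore, and using the same {datastore} and {vm} placeholders as in the solution below) is to grep the vmx file:
grep -i virtualHW.version /vmfs/volumes/{datastore}/{vm}/{vm}.vmx
# expected output is a single line such as: virtualHW.version = "11"
The version should also be visible in the vCenter UI on the VM/template Summary tab (the Compatibility field).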
After a quick test, it turned out that downgrading the hardware version of the Windows 2019 template from version 11 (vSphere 6.0) to version 10 (vSphere 5.5) fixed the issue! Again, there is no real logic to this and it is most likely a VMware bug; however, a fix is a fix.
Detailed below in ‘The Solution’ is how to downgrade the hardware version of the template (well, the VM) on the fly without needing to unregister/register the VM in vCenter.
The Solution
The troubleshooting and the finding of the fix are detailed above in ‘The Issue’. This section details the steps to apply the fix: downgrading the hardware version on the fly without needing to unregister/register the VM in vCenter.
1) Convert the Windows 2019 template to a VM (as template settings can’t be edited)
2) SSH to the host running the (now) Windows 2019 VM. You may need to enable SSH on the host first depending on your environment
3) Edit the Windows 2019 VM’s vmx file. There are numerous ways to do this, but one of the easiest is using the vi
command: vi /vmfs/volumes/{datastore}/{vm}/{vm}.vmx
Substitute {datastore} & {vm} with the relevant datastore and VM names. Locate the virtualHW.version="11" line, change it to virtualHW.version="10", then save and close the vmx file.
4) The edited vmx file will not take effect until the VM is reloaded. Normally this would mean unregistering the VM from vCenter and registering it again; however, (a) this is a bit overkill and (b) it can bring new issues (a new object ID, for instance). Instead we can reload the VM on the fly (the VM must be powered off) using a couple of commands.
Run vim-cmd vmsvc/getallvms to get a list of the VMs on the host and find the Vmid of the Windows 2019 VM. Then run vim-cmd vmsvc/reload {vmid} (substituting {vmid}) to reload the Windows 2019 VM. A combined example of the commands from steps 3 and 4 is shown after step 5.
5) Convert the Windows 2019 VM back to a template
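For convenience, here is a rough consolidated example of the commands from steps 3 and 4, run over SSH on the host with the VM powered off. The names are the same placeholders used above ({datastore}, {vm}, {vmid}); substitute your own. Note that depending on how the vmx was written, the hardware version line may appear with spaces around the equals sign (virtualHW.version = "11").
# Step 3 - edit the vmx and change virtualHW.version from "11" to "10"
vi /vmfs/volumes/{datastore}/{vm}/{vm}.vmx
# (or, if the busybox sed on your ESXi build supports in-place edits, something like:)
# sed -i 's/virtualHW.version = "11"/virtualHW.version = "10"/' /vmfs/volumes/{datastore}/{vm}/{vm}.vmx
# Step 4 - find the Vmid of the VM, then reload it so the edited vmx takes effect
vim-cmd vmsvc/getallvms | grep -i {vm}
vim-cmd vmsvc/reload {vmid}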