Tag Archives: vsphere

“Cannot execute upgrade script on host” during ESXi 6.5 upgrade

I was onsite at one of my customers to update a small VMware vSphere 6.0 U3 environment to 6.5 U2c. The environment consists of three hosts: two hosts in a cluster, and a third host that is only used to run an HPE StoreVirtual Failover Manager.

The update of the first host, using the Update Manager and an HPE custom ESXi 6.5 image, was pretty flawless. But the update of the second host failed with “Cannot execute upgrade script on host”.

typographyimages/ pixabay.com/ Creative Commons CC0

I checked the host and found ESXi 6.5 installed, but one of the five iSCSI datastores was missing. Then I tried to patch the host with the latest patches and hit “Remediate”. The task failed with “Cannot execute upgrade script on host”. So I rolled back to ESXi 6.0 and tried the update again, this time using iLO and the HPE custom ISO. But the result was the same: the host was running ESXi 6.5 after the update, but the upgrade failed with the “upgrade script” error. After this attempt, the host was unable to mount any of the iSCSI datastores. This was because the datastores were mounted ATS-only on the other host, and the failed host was unable to mount them in this mode. Very strange…

I checked the vua.log and found this error message:

Focus on this part of the error message:

The upgrade script failed due to an illegal character in the output of esxcfg-info. First of all, I had to find out what this 0x80 character is. I checked the UTF-8 and Windows-1252 encodings and found out that 0x80 is the € (Euro) symbol in the Windows-1252 encoding. I searched the output of esxcfg-info for the € symbol – and found it.
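If you want to verify this yourself, a one-liner is enough. This is just a quick sketch in PowerShell; any language with code page support works the same way:

```powershell
# What does byte 0x80 map to in the Windows-1252 code page?
$bytes = [byte[]](0x80)
[System.Text.Encoding]::GetEncoding('windows-1252').GetString($bytes)
# Output: €
```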

But how to get rid of it? Where does it hide in the ESXi config? I scrolled a bit up and down around the € symbol. A bit above, I found a reference to HPE_SATP_LH. This immediately caught my attention, because the customer is using StoreVirtual VSA and StoreVirtual HW appliances.

Now, my second educated guess of the day came into play. I checked the installed VIBs and found the StoreVirtual Multipathing Extension installed on the failed host – but not on the host where the ESXi 6.5 update was successful.

I removed the VIB from the buggy host, did a reboot, and tried to update the host with the latest patches – with success! Cross-checking showed that the € symbol was missing in the esxcfg-info output of the host that was upgraded first. I don’t have a clue why the StoreVirtual Multipathing Extension caused this error. The customer and I decided not to install the StoreVirtual Multipathing Extension again.
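If you prefer to do the check and the cleanup from PowerCLI instead of an SSH session, something like the following sketch works. The host name and the VIB name are placeholders – take the exact VIB name from the list output:

```powershell
# Connect-VIServer has to be run first
$esxcli = Get-EsxCli -VMHost 'esx2.lab.local' -V2

# List the installed VIBs and look for the StoreVirtual Multipathing Extension
$esxcli.software.vib.list.Invoke() | Select-Object Name, Vendor, Version

# Remove the suspect VIB, then reboot the host
$esxcli.software.vib.remove.Invoke(@{vibname = 'name-from-list-output'})
```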

Powering on a VM with shared VMDK fails after extending an EagerZeroedThick VMDK

I hope that you are not reading this blog post while searching for a solution for a failed cluster. If so, feel free to leave a comment if this blog post saved your evening or weekend. :)

Last Friday, a change at one of my customers went horribly wrong. I was not onsite, but they contacted me during the night from Friday to Saturday, because their most important Windows Server Failover Cluster was unable to start after extending a shared VMDK.

cripi/ pixabay.com/ Creative Commons CC0

They tried something pretty simple: extending a virtual disk of a VM. That is something most of us do pretty often, and the customer did it pretty often, too. It was a well-known task… except for the fact that the VM was part of a Windows Server Failover Cluster. With shared VMDKs. And the disks were EagerZeroedThick, because this is a requirement for shared VMDKs.

They extended the disk using the vSphere Web Client. And at this point, the change was doomed to fail. They tried to power on the VMs, but all they got was this error:

VMware ESX cannot open the virtual disk, “/vmfs/volumes/4c549ecd-66066010-e610-002354a2261b/VMNAME/VMDKNAME.vmdk” for clustering. Please verify that the virtual disk was created using the ‘thick’ option.

A shared VMDK is a VMDK in multiwriter mode. This VMDK has to be created as Thick Provision Eager Zeroed. And if you wish to extend this VMDK, you must use vmkfstools with the option -d eagerzeroedthick. If you extend the VMDK using the Web Client, the extended portion of the disk will become LazyZeroed!
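If you are not sure how the disks of your cluster nodes are currently provisioned, PowerCLI can tell you. This is only a sketch – the VM names are assumptions:

```powershell
# Show the provisioning format of all disks of the cluster nodes
Get-VM -Name 'CLUSTERNODE01','CLUSTERNODE02' |
    Get-HardDisk |
    Select-Object Parent, Name, CapacityGB, StorageFormat, Filename
```

The StorageFormat column should report EagerZeroedThick for every shared disk.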

VMware has described this behaviour in KB1033570 (Powering on the virtual machine fails with the error: Thin/TBZ disks cannot be opened in multiwriter mode). There is also a blog post by Cormac Hogan at VMware that describes this behaviour.

That’s a screenshot from the failed cluster. Check out the type of the disk (Thick-Provision Lazy-Zeroed).

Patrick Terlisten/ vcloudnine.de/ Creative Commons CC0

You must use vmkfstools to extend a shared VMDK – but vmkfstools is also the solution if you have fallen into this pitfall. Clone the VMDK with the option -d eagerzeroedthick.

Another solution, which was new to me, is to use Storage vMotion. You can migrate the “broken” VMDK to another datastore and change the disk format during the Storage vMotion. This solution is described in the “Notes” section of KB1033570.
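From PowerCLI, such a Storage vMotion with a format change looks roughly like this. This is only a sketch with assumed VM and datastore names – for shared/multi-writer disks, follow the exact steps from the Notes section of KB1033570:

```powershell
# Storage vMotion with disk format conversion to Eager Zeroed Thick
Get-VM -Name 'CLUSTERNODE01' |
    Move-VM -Datastore (Get-Datastore -Name 'TEMP_DS') -DiskStorageFormat EagerZeroedThick
```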

Both ways will fix the problem. The result will be a Thick Provision Eager Zeroed VMDK, which will allow the VMs to be successfully powered on.

Veeam backups fails because of time differences

Last week I had an interesting incident at a customer. The customer reported that one of multiple Veeam backup jobs constantly failed.

jarmoluk/ pixabay.com/ Creative Commons CC0

The backup job included two VMs, and the backup of one of these VMs failed with this error:

They verified the credentials used for that job, but re-entering the password did not solve the issue. I then checked the Veeam backup logs located under %ProgramData%\Veeam\Backup (look for the Agent.Job_Name.Source.VM_Name.vmdk.log) and found VDDK Error 3014:

The user that was used to connect to the vCenter was an Active Directory account. The account was granted administrator privileges at the root of the vCenter. Switching from the AD account to administrator@vsphere.local solved the issue. Next stop: the vmware-sts-idmd.log on the vCenter Server Appliance. The error found in this log confirmed my theory that there was an issue with the authentication itself, not with the AD account.

To make a long story short: time differences. The vCenter, the ESXi hosts, and some servers had the wrong time. The vCenter and the ESXi hosts were using the domain controllers as their time source.
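A quick way to spot clock drift across the hosts is PowerCLI. The following sketch compares each host's clock (the API reports it in UTC) with the clock of the machine you run it on and shows the configured NTP servers:

```powershell
# Compare each ESXi host's clock with the local UTC time
Get-VMHost | ForEach-Object {
    $dts = Get-View -Id $_.ExtensionData.ConfigManager.DateTimeSystem
    [pscustomobject]@{
        Host        = $_.Name
        HostTimeUtc = $dts.QueryDateTime()
        OffsetSec   = [math]::Round(($dts.QueryDateTime() - [datetime]::UtcNow).TotalSeconds, 1)
        NtpServers  = (Get-VMHostNtpServer -VMHost $_) -join ', '
    }
}
```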

This is the ntpq output of the vCenter. You might notice the offset and jitter values on the right side, both noted in milliseconds.

After some investigation, the root cause seemed to be a bad DCF77 receiver, which was connected to the domain controller that was hosting the PDC Emulator role. The DCF77 receiver was connected using a USB-2-LAN converter. Instead of using a DCF77 receiver, the customer and I implemented an NTP hierarchy using a valid NTP source on the internet (pool.ntp.org).

Unsupported hardware family ‘vmx-06’

A customer of mine got an appliance from a software vendor. The appliance was delivered as a ZIP file with a VMDK, an MF, and an OVF file. Unfortunately, the appliance was created with VMware Workstation 6.0 with virtual machine hardware version 6, which is incompatible with VMware ESXi (Virtual machine hardware versions). During deployment, my customer got this error:

The OVF file includes a line with the VM hardware version.

If you change this line from vmx-06 to vmx-07, the hash of the OVF changes, and you will get an error during the deployment of the appliance because of the wrong file hash.

Solution

You have to change the SHA256 hash of the OVF, which is included in the MF file.

To create the new SHA256 hash, you can use the PowerShell cmdlet Get-FileHash.
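For example – the file name below is an assumption, use the name of your edited OVF:

```powershell
# Recompute the SHA256 hash of the edited OVF file
Get-FileHash -Algorithm SHA256 -Path .\appliance.ovf
```

The resulting hash replaces the old SHA256 value for the OVF file in the MF file.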

Replace the hash and save the MF file. Then re-deploy the appliance.

Andreas Lesslhumer wrote a similar blog post in 2015:
“Unsupported hardware family vmx-10” during OVF import

Hell freezes over – VMware virtualization on Microsoft Azure

Update

On November 22, 2017, Ajay Patel (Senior Vice President, Product Development, Cloud Services, VMware) published a blog post in reaction to Microsoft's announcement (VMware – The Platform of Choice in the Cloud). These statements are especially interesting:

No VMware-certified partner names have been mentioned nor have any partners collaborated with VMware in engineering this offering. This offering has been developed independent of VMware, and is neither certified nor supported by VMware.

and

Microsoft recognizing the leadership position of VMware’s offering and exploring support for VMware on Azure as a superior and necessary solution for customers over Hyper-V or native Azure Stack environments is understandable but, we do not believe this approach will offer customers a good solution to their hybrid or multi-cloud future.

Looks like VMware is not happy about Microsoft's announcement. And this blog post clearly states that VMware will not partner with Microsoft to bring the VMware virtualization stack to Azure.

I don’t know if this is a wise decision by VMware. The hypervisor, their core product, is a commodity nowadays. We are talking about a bare-metal solution, so it’s not different from what VMware built with AWS. It’s more about how it is embedded in the cloud services and the cloud control plane. If you use VMware vSphere, Horizon, and O365, the step to move virtualization workloads to VMware on Azure is smaller than moving them to AWS.

On November 23, 2017, The Register published this interesting analysis: VMware refuses to support its wares running in Azure.

Original post

Yesterday, Microsoft announced new services to ease the migration from VMware to Microsoft Azure. Corey Sanders (Director of Compute, Azure) posted a blog post (Transforming your VMware environment with Microsoft Azure) and introduced three new Azure services.

Microsoft Azure

Microsoft/ microsoft.com

Azure Migrate

The free Azure Migrate service does not focus on single-server workloads. It is focused on multi-server applications and will help customers through three stages:

  • Discovery and assessment
  • Migration, and
  • Resource & Cost Optimization

Azure Migrate can discover your VMware-hosted applications on-premises, it can visualize dependencies between them, and it will help customers to create a suitable sizing for the Azure-hosted VMs. Azure Site Recovery (ASR) is used for the migration of workloads from the on-premises VMware infrastructure to Microsoft Azure. At the end, when your applications are running on Microsoft Azure, the free Azure Cost Management service helps you to forecast, track, and optimize your spending.

Integrate VMware workloads with Azure services

Many of the currently available Azure services can be used with your on-premises VMware infrastructure, without the need to migrate workloads to Microsoft Azure. This includes Azure Backup, Azure Site Recovery, Azure Log Analytics, and managing Microsoft Azure resources with VMware vRealize Automation.

But the real game-changer seems to be this:

Host VMware infrastructure with VMware virtualization on Azure

Bam! Microsoft announces the preview of VMware vSphere on Microsoft Azure. It will run bare-metal on Azure hardware, alongside other Azure services. General availability is expected in 2018.

My two cents

This is the second big announcement about VMware stuff on Azure (don’t forget VMware Horizon Cloud on Microsoft Azure). And although I believe that this is something Microsoft wants to offer to get more customers onto Azure, it can be a great chance for VMware. VMware customers don’t have to go to Amazon when they want to host VMware at a major public cloud provider, especially if they are already Microsoft Azure/O365 customers. This is a pretty bold move from Microsoft and similar to VMware Cloud on AWS. I’m curious to get more details about this.

Creating console screenshots with Get-ScreenshotFromVM.ps1

Today, I had a very interesting discussion. As part of an ongoing troubleshooting process, console screenshots of virtual machines had to be created.

The colleagues who were working on the problem had already found a PowerCLI script that was able to create screenshots using the Managed Object Reference (MoRef). But unfortunately, all they got were black screens and/or login prompts. The latter was the reason why they were unable to run the script unattended. They used the Get-VMScreenshot script, which was written by Martin Pugh.

I had some time to take a look at his script and I created my own script, which is based on his idea and some parts of his code.
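The full script is linked below; the core of it is a single vSphere API call, CreateScreenshot_Task, which writes a PNG of the current console to the VM's folder on the datastore. A minimal PowerCLI sketch of that call (VM name assumed, no error handling) – not the full script:

```powershell
# Trigger a console screenshot of a single VM via the vSphere API
$vm = Get-VM -Name 'DC01'
$taskRef = $vm.ExtensionData.CreateScreenshot_Task()

# Wait for the task and print the datastore path of the resulting PNG
$task = Get-View -Id $taskRef
while ($task.Info.State -eq 'running' -or $task.Info.State -eq 'queued') {
    Start-Sleep -Seconds 1
    $task.UpdateViewData('Info')
}
$task.Info.Result    # e.g. "[datastore1] DC01/DC01-1.png"
```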

The script is also available on GitHub.

One important note: If you want to take console screenshots of VMs, please make sure that display power-saving settings are disabled! Windows VMs show a black screen after some minutes of inactivity. Disable this using the power options, or better, using a GPO. Otherwise you will capture a black screen!

Why I moved from NFS to vSAN… and why it went wrong

I wanted to retire my Synology DS414slim and switch completely to vSAN. Okay, no big deal. Many folks use vSAN in their lab. But I’d like to explain why I moved to vSAN and why this move failed. I think some of my thoughts are also applicable to customer environments.

So far, I had used a Synology DS414slim with three Crucial M550 480 GB SSDs (RAID 5) as my main lab storage. The Synology was connected with two 1 GbE uplinks (LAG) to my network, and each host was connected with 4x 1 GbE uplinks (single distributed vSwitch). The Synology was okay from a capacity perspective, but the performance was horrible. RAID 5, SSDs, and NFS were not the best team – or, to be precise, the CPU of the Synology was the main bottleneck.

nas_ds414slim

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

1.2 GHz is not enough if you want to use NFS or iSCSI. I never got more than 60 MB/s (sequential). The random IO performance was okay, but as soon as the IO increased, the latencies went through the roof. Not because the SSDs were too slow, but because the CPU of the Synology was not powerful enough to handle the NFS requests.

Workaround: Add more flash storage

The workaround for the poor random IO performance was adding more flash storage. This time, the flash storage was added to the hosts. I used PernixData FVP to boost my lab. FVP was a quite cool product (unfortunately, “was” is the operative word). PernixData granted me, as a PernixPro, some licenses for my lab.

End of an era

The acquisition of PernixData by Nutanix, the missing support for vSphere 6.5, and the end of availability of all PernixData products led to the decision to remove PernixData FVP from my lab. Without PernixData FVP, my lab was again a slow train crawling up a hill. Four HPE ProLiant servers, with enough CPU (40 cores) and memory resources (384 GB RAM), were tied down by slow IO.

Redistribution of resources

I had

  • three 480 GB SSDs, and
  • three 40 GB SSDs

in stock. The 40 GB SSDs were too small and too slow, so I replaced them with 120 GB SSDs. I was able to equip three of my four hosts with SSDs. Three hosts with flash storage were enough to try VMware vSAN.

Fortunately, not all hosts have to contribute capacity to a vSAN cluster; hosts can also just consume storage from it. With this in mind, vSAN appeared to be a way out of my IO dilemma. In addition, using the 480 GB SSDs as the capacity tier, an all-flash vSAN configuration was possible.

Migration

It took me a little time to move VMs around to temporary locations while keeping my DC and my VCSA available. I had to remove the datastore on the Synology to free up the 480 GB SSDs. The necessary vSAN licenses were granted by VMware (vExpert licenses).

The creation of the vSAN cluster itself was easy. Fortunately, wiping partitions from the disks is easy, too; you can use the vSphere Web Client to do this.

vsan_wipe_partitions

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The initial performance was quite good, much better than expected and much better than the NFS performance of the old Synology NAS. I enabled deduplication and compression, but as soon as I moved VMs to the vSAN datastore, the throughput dropped and latencies went through the roof. It was totally unusable. Furthermore, I got health alarms:

vsan_congestion_error

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

As the load increased, the errors became more severe.

vsan_performance_error

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0


I was able to solve this with a blog post by Cormac Hogan (VSAN 6.1 New Feature – Handling of Problematic Disks). Even without compression and deduplication, the performance was not as expected and most of the time too low to work with. At this point, I got an idea of what was causing my vSAN problems.

Do not use consumer-grade hardware with vSAN

To be honest: the budget was the problem. I had to use consumer-grade SSDs.

This is a screenshot from the vSAN Observer. esx1 to esx3 are equipped with SSDs, esx4 is only consuming storage from the vSAN cluster.

vsan_observer_perf

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Red is not the color to highlight good things…

An explanation attempt

This blog post by Duncan Epping (Why Queue Depth matters!) is a bit older, but still valid in my case. The controller I use (HPE Smart Array P410i) has a deep queue (1011), and the RAID device has a queue length of 1024, but the SATA SSDs only have a queue depth of 32. Here’s the disk adapter and disk device view in ESXTOP.
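Besides ESXTOP, you can also pull the device queue depths with PowerCLI and esxcli. This is only a sketch – the host name is an assumption, and the exact property names can differ slightly between PowerCLI versions, which is why wildcards are used:

```powershell
# Show the maximum queue depth ESXi reports for each disk device
$esxcli = Get-EsxCli -VMHost 'esx1.lab.local' -V2
$esxcli.storage.core.device.list.Invoke() |
    Format-List -Property *DisplayName*, *Vendor*, *Model*, *QueueDepth*
```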

The consumer-grade SSDs drowned in IOs, unable to handle parallel read and write operations. There’s not much that I can do. Currently, there are two options:

  • Replacing the SSDs with devices that have a deeper queue depth
  • Replacing the Synology NAS with a more powerful NAS and moving back to NFS

I don’t know which way I will go. To get this clear:

  • This is my lab, not a customer environment
  • It is not a vSAN-related problem
  • It is because of consumer-grade hardware

Do not try this in production, kids. Go vSAN, but please use the right hardware.

Replacing an expired lookup service SSL certificate on a vSphere PSC

A few days ago, I ran into a very nasty problem. Fortunately, it was in my lab. Some months ago, I replaced the certificates of my vCenter Server Appliance (VCSA), and I chose to use the VMware Certificate Authority (VMCA) as a subordinate of my AD-based enterprise CA. The VMCA was used as an intermediate CA. The certificates were replaced using the vSphere 6.0 Certificate Manager (/usr/lib/vmware-vmca/bin/certificate-manager), and I followed the instructions in KB2112016 (Configuring VMware vSphere 6.0 VMware Certificate Authority as a subordinate Certificate Authority).

The VCSA was migrated from vSphere 5.5, and with vSphere 5.5 I was also using custom certificates. These certificates were also issued by my AD-based enterprise CA, and they were migrated during the vSphere 5.5 > 6.0 migration. So in the end, I replaced custom certificates with VMCA (as an intermediate CA) certificates.

Everything was fine until a power outage. After powering on my VMs, I noticed several errors. After logging into the vSphere Web Client, I got an error message at the top of the page:

While searching for the cause, I checked the URL of the Platform Services Controller (https://vcsa1.lab.local/psc/login) and got this:

psc_error_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0


This error led me to KB2144086 (Updating certificates using certificate manager on vCenter Server or PSC 6.0 Update 1b fails), but I was able to prove that I had used different subject names for the different solution user certificates.

While digging through the PSC logs, I found this error in /var/log/vmware/psc-client/psc-client.log:

Finally, I found Aaron Smith's blog post “Troubleshooting Expired PSC Certificates with vSphere 6”; he had the same problem. I checked the certificate of the Lookup Service, and there it was:

psc_error_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This was the original custom certificate, issued by my AD-based enterprise CA, and installed on my vSphere 5.5 VCSA.

Aaron also offered the solution by referencing KB2118939 (Replacing the Lookup Service SSL certificate on a Platform Services Controller 6.0). I followed the instructions in KB2118939 and replaced the certificate of the Lookup Service with a certificate of the VMCA.

Take care of your certificates

With vSphere 6.0, the Lookup Service should be accessed through the HTTP reverse proxy. This proxy uses the machine certificate. Therefore, an expired Lookup Service certificate is not obvious. If you connect directly to the Lookup Service using port 7444, you will see the expired certificate. The Lookup Service certificate is not replaced with a custom certificate if you replace the different solution user certificates.
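If you want to check the certificate on port 7444 without a browser, a few lines of PowerShell are enough. This is a sketch – the host name is an assumption, and the validation callback deliberately accepts any certificate because we only want to look at it:

```powershell
# Fetch the certificate presented on port 7444 and show its validity period
$vcsa = 'vcsa1.lab.local'
$tcp  = New-Object -TypeName System.Net.Sockets.TcpClient -ArgumentList $vcsa, 7444
$cb   = [System.Net.Security.RemoteCertificateValidationCallback] { $true }
$ssl  = New-Object -TypeName System.Net.Security.SslStream -ArgumentList $tcp.GetStream(), $false, $cb
$ssl.AuthenticateAsClient($vcsa)
$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2] $ssl.RemoteCertificate
$cert | Select-Object Subject, Issuer, NotBefore, NotAfter
$ssl.Dispose()
$tcp.Close()
```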

If you have a vSphere 6.0 VCSA which was migrated from vSphere 5.5, and you have replaced the certificates on that vSphere 5.5 VCSA with custom certificates, you should check your Lookup Service certificate immediately! Follow KB2118939 for further instructions.

Credit to Aaron Smith for this blog post. Thank you!

Monitoring hardware status with Python and vSphere API calls

Apparently it’s “how to monitor hardware status” week on vcloudnine.de. A few days ago, I wrote an article about using SNMP for hardware monitoring. You can also use the vSphere Web Client to get the status of the host hardware. A third way is through the vSphere API. I just want to share a short example of how to use vSphere API calls with pyVmomi, the Python SDK for the VMware vSphere API.

Get hardware status with vSphere API calls

I just want to share a small example that shows the basic principle. The script gathers the temperature sensor data of a ProLiant DL360 G7 running ESXi 6.0 U2 using vSphere API calls.

The output of the script looks like this:

Nothing fancy. You can easily loop through numericSensorInfo to gather data from other sensors. Use the Managed Object Browser (MOB) to navigate through the API; this is handy if you are searching for specific sensors. If you need accurate data, the vSphere API is the way to go. If you want something lightweight, try SNMP.
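For readers who are more at home in PowerCLI than in Python: the same numericSensorInfo data is reachable through Get-VMHost and the ExtensionData view. A sketch with an assumed host name (note that CurrentReading has to be scaled by 10^UnitModifier):

```powershell
# Read the temperature sensors of a host through the vSphere API via PowerCLI
$esx = Get-VMHost -Name 'esx1.lab.local'
$sensors = $esx.ExtensionData.Runtime.HealthSystemRuntime.SystemHealthInfo.NumericSensorInfo
$sensors |
    Where-Object { $_.SensorType -eq 'temperature' } |
    Select-Object Name, CurrentReading, UnitModifier, BaseUnits
```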

Missing hardware status tab in the vSphere Client

I thought everyone knew it, but I keep being asked “Where’s the hardware status tab?” after an update from vSphere 5.x to 6. Many customers still use the vSphere Client (C# client) and steer clear of the vSphere Web Client. To be honest: me too. I often use the C# client, especially for mass operations or for a quick look at something.

This is really nothing new, and the answer is clear. But I think it’s a good idea to write it down, at least for myself, as a reminder to use the vSphere Web Client.

The hardware status tab

Many customers used the hardware status tab to get a quick overview of the health of the ESXi host hardware.

vsphere_client_hw_status_tab

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

But after an update to vSphere 6, the hardware status tab is missing in the vSphere Client. This is expected behaviour! VMware has published a knowledge base article about this (The Hardware Status Tab is no longer available in the vSphere Client after upgrading to vCenter Server 6.0). The only solution is to use the vSphere Web Client.

Use the vSphere Web Client

Meanwhile, the old vSphere Client has many downsides. All features introduced in vSphere 5.5 and later are only available through the vSphere Web Client. This also applies to the hardware status tab.

vsphere_web_client_hw_status_tab

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You will find the hardware status on the “Monitor” tab of a host. It offers the same information as the legacy hardware status tab in the vSphere Client.

Do yourself a favor and use the vSphere Web Client. Always, any time.