Tag Archives: vsphere

vCenter Migration from 6.0 to 6.7 fails due to missing user role

Yesterday should have been the day on which I migrated one of the last physical Windows vCenter servers left in my customer base. Well… the migration failed twice, and each time I had to roll back, power on the old physical server, reset the computer account, and so on.

The update was from VMware vCenter Server 6.0 Update 3d (7462484) on a Windows 2012 R2 server to vCenter Server 6.7 Update 3 (Appliance). The migration failed at 62% with the following message:

I found the same error in the content-library-firstboot.py_9150_stderr.log file of the downloaded log bundle.

Okay, that's a pretty long error message and I had no idea where to start searching. But it seemed to be related to the Content Library of the vCenter, and it looked like a privileges issue.

A forum post led me to the content library administrator role. The author had to deal with a failed migration (6.5 to 6.7), but in his case the content library administrator role was missing. In my case, the role was present.

Sorry for the German localization. As you can see, the role was present… obviously. I tried to add a new role with the name com.vmware.Content.Admin, as mentioned in the forum post, and… a new role appeared.

You might notice the “Beispiel” or “Example”. That's the difference. Whatever the other role is or wherever it came from, it is definitely not the original content library administrator role.

And to make a long story short: The migration was successful after this small change.
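
For reference: if you prefer PowerCLI over the Web Client, a role with that name can also be created from the command line. This is only a minimal sketch; the server name and the privilege assignment are my assumptions (in my case, simply having a role with this exact name seemed to be enough).

```powershell
# Minimal sketch: recreate a role named like the content library administrator role.
# The server name is an example, and assigning the ContentLibrary.* privileges is an
# assumption on my part - the exact privilege set of the original role may differ.
Connect-VIServer -Server vcenter.lab.local

$privs = Get-VIPrivilege | Where-Object { $_.Id -like "ContentLibrary.*" }
New-VIRole -Name "com.vmware.Content.Admin" -Privilege $privs
```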

User vdcs does not have the expected uid 1006

Sorry for the long delay since my last blog post – busy times, but with lots of vSphere. :) Today, I did an upgrade of a standalone vCenter Server Appliance at one of my healthcare customers. The vCenter was on 6.0 U3 and I had to upgrade it to 6.7 U2. It was only a small deployment with three hosts, so nothing fancy. And as in many other vSphere upgrades, I came across this warning message:

Warning: User vdcs does not have the expected uid 1006
Resolution: Please refer to the corresponding KB article.

I have seen this message multiple times, but in the past there was no KB article about it, only a VMTN thread. And this thread mentioned that you can safely ignore the message if you don't use a Content Library. Confirmation enough to proceed with the upgrade. :)

Meanwhile, there is a KB article:

Uploading content to the library fails with error: Content Library Service does not have write permission on this storage backing (52559)

This is a statement from the KB article:

Note: You can safely ignore this message if you are not using Content Library Service before the upgrade, or using it only for libraries not backed by NFS storage.

Currently, I don't have customers with NFS-backed Content Libraries, but if you do, you might want to take a look at it. Especially if you have done an upgrade from 6.0 to 6.5 or 6.7 and you want to start using Content Libraries now.

Poor performance with Windows 10/ 2019 1809 on VMFS 6

TL;DR: This bug is still up to date and has not been fixed yet! Some users in the VMTN thread mentioned a hotpatch from VMware, which seems to have been pulled. A fix for this issue will be available with ESXi 6.5 U3 and 6.7 U3. The only workaround is to place VMs on VMFS 5 datastores, or to avoid the use of snapshots if you have to use VMFS 6. I can confirm that Windows 10 1903 is also affected.

One of my customers told me that they had massive performance problems with a Horizon View deployment at one of their customers. We talked about this issue and they mentioned that it was related to Windows 10 1809 and VMFS 6. A short investigation showed that this issue is well known and that VMware is working on it. In their case, another IT company had installed the Cisco HyperFlex solution, and the engineer was unaware of this issue.

Image by Manfred Antranias Zimmer from Pixabay

What do we know so far? In October 2018 (!), shortly after the release of Windows 10 1809, a thread came up in the VMTN (windows 10 1809 slow). According to the posted test results, the issue occurs under the following conditions (a small PowerCLI sketch to spot potentially affected VMs follows the list).

  • Windows 10 1809
  • VMware ESXi 6.5 or 6.7 (regardless of build level)
  • VM has at least one snapshot
  • VM is placed on a VMFS 6 datastore
  • Space reclamation is enabled or disabled (it makes no difference either way)
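
If you want to get a quick overview of potentially affected VMs, a PowerCLI query like the following can help. This is just a sketch, not a diagnosis; it only lists Windows 10 guests with at least one snapshot on a VMFS 6 datastore.

```powershell
# Sketch: list VMs that match the conditions above (Windows 10 guest, at least one
# snapshot, placed on a VMFS 6 datastore). The vCenter name is an example, and the
# guest OS check relies on VMware Tools reporting the OS name.
Connect-VIServer -Server vcenter.lab.local

$vmfs6 = Get-Datastore | Where-Object {
    $_.ExtensionData.Info.Vmfs -and $_.ExtensionData.Info.Vmfs.MajorVersion -eq 6
}

Get-VM -Datastore $vmfs6 |
    Where-Object { $_.Guest.OSFullName -match "Windows 10" -and (Get-Snapshot -VM $_) } |
    Select-Object Name, @{ N = "GuestOS"; E = { $_.Guest.OSFullName } }
```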

The “official” statement of the VMware support is:

The issue is identified to be due to some guest OS behavior change in this version of windows 10, 1809 w.r.t thin provisioned disks and snapshots, this has been confirmed as a bug and will be fixed in the following releases – 6.5 U3 and 6.7U3, which will be released within End of this year (2019).

https://communities.vmware.com/message/2848206#2848206

I don't care whether the root cause is VMFS 6 or Windows 10, but VMware and Microsoft need to get this fixed fast! Just to make this clear: you will face the same issues regardless of whether you run Windows 10 in a VM, use Windows 10 with Horizon View, or Windows 10 with Citrix. When VMFS 6 and snapshots come into play, you will run into this performance issue.

I will update this blog post when I get some news.

“Cannot execute upgrade script on host” during ESXi 6.5 upgrade

This posting is ~1 year old. You should keep this in mind. IT is a short-living business. This information might be outdated.

I was onsite at one of my customers to update a small VMware vSphere 6.0 U3 environment to 6.5 U2c. The environment consists of three hosts: two hosts in a cluster, and a third host that is only used to run an HPE StoreVirtual Failover Manager.

The update of the first host, using the Update Manager and an HPE custom ESXi 6.5 image, was pretty flawless. But the update of the second host failed with “Cannot execute upgrade script on host”.

typographyimages/ pixabay.com/ Creative Commons CC0

I checked the host and found it with ESXi 6.5 installed, but one of the five iSCSI datastores was missing. Then I tried to patch the host with the latest patches and hit “Remediate”. The task failed with “Cannot execute upgrade script on host”. So I did a rollback to ESXi 6.0 and tried the update again, this time using iLO and the HPE custom ISO. But the result was the same: the host was running ESXi 6.5 after the update, but the upgrade failed with the “upgrade script” error. After this attempt, the host was unable to mount any of the iSCSI datastores. This was because the datastores were mounted ATS-only on the other host, and the failed host was unable to mount the datastores in this mode. Very strange…

I checked the vua.log and found this error message:

Focus on this part of the error message:

The upgrade script failed due to an illegal character in the output of esxcfg-info. First of all, I had to find out what this 0x80 character is. I checked the UTF-8 and Windows-1252 encodings and found out that 0x80 is the € (euro) symbol in Windows-1252. I searched the output of esxcfg-info for the € symbol – and found it.
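
In case you want to reproduce the search: assuming you have saved the esxcfg-info output to a text file and copied it to a Windows box, something like this sketch will show the offending lines (the file name is an example).

```powershell
# Sketch: search a saved copy of the esxcfg-info output for the euro symbol.
# The file name is an example. In Windows-1252, byte 0x80 decodes to "€" (U+20AC);
# -Encoding Default (Windows PowerShell 5.x) reads the file with the ANSI code page.
Get-Content -Path .\esxcfg-info.txt -Encoding Default |
    Select-String -Pattern ([char]0x20AC) -SimpleMatch
```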

But how to get rid of it? Where does it hide in the ESXi config? I scrolled a bit up and down around the € symbol. A bit above, I found a reference to HPE_SATP_LH. This immediately caught my attention, because the customer is using StoreVirtual VSA and StoreVirtual HW appliances.

Now my second educated guess of the day came into play. I checked the installed VIBs and found the StoreVirtual Multipathing Extension installed on the failed host – but not on the host where the ESXi 6.5 update was successful.

I removed the VIB from the buggy host, did a reboot and tried to update the host with the latest patches – with success! Cross-checking showed that the € symbol was missing from the esxcfg-info output of the host that was upgraded first. I don't have a clue why the StoreVirtual Multipathing Extension caused this error. The customer and I decided not to install the StoreVirtual Multipathing Extension again.
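
For reference, checking and removing such a VIB can also be done remotely with PowerCLI and Get-EsxCli. A minimal sketch; the host name and the VIB name are placeholders, so take the real VIB name from the list output first.

```powershell
# Sketch: list the installed VIBs on a host and remove one of them via the esxcli
# V2 interface. Host name and VIB name are placeholders; take the real VIB name
# from the list output. The host needs a reboot after the removal.
$esxcli = Get-EsxCli -VMHost esx02.lab.local -V2

$esxcli.software.vib.list.Invoke() | Select-Object Name, Vendor, Version

$esxcli.software.vib.remove.Invoke(@{ vibname = "name-of-the-vib" })
```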

Powering on a VM with shared VMDK fails after extending an EagerZeroedThick VMDK

This posting is ~1 year old. You should keep this in mind. IT is a short-living business. This information might be outdated.

I hope that you are not reading this blog post while searching for a solution for a failed cluster. If so, feel free to leave a comment if this blog post saved your evening or weekend. :)

Last Friday, a change at one of my customers went horribly wrong. I was not onsite, but they contacted me during the night from Friday to Saturday, because their most important Windows Server Failover Cluster was unable to start after extending a shared VMDK.

cripi/ pixabay.com/ Creative Commons CC0

They tried something pretty simple: extending a virtual disk of a VM. That is something most of us do pretty often, and this customer was no exception. It was a well-known task… except for the fact that the VM was part of a Windows Server Failover Cluster, with shared VMDKs. And the disks were EagerZeroedThick, because this is a requirement for shared VMDKs.

They extended the disk using the vSphere Web Client, and at this point, the change was doomed to fail. They tried to power on the VMs, but all they got was this error:

VMware ESX cannot open the virtual disk, “/vmfs/volumes/4c549ecd-66066010-e610-002354a2261b/VMNAME/VMDKNAME.vmdk” for clustering. Please verify that the virtual disk was created using the ‘thick’ option.

A shared VMDK is a VMDK in multiwriter mode. This VMDK has to be created as Thick Provision Eager Zeroed. And if you wish to extend this VMDK, you must use vmkfstools with the option -d eagerzeroedthick. If you extend the VMDK using the Web Client, the extended portion of the disk will become LazyZeroed!

VMware has described this behaviour in KB1033570 (Powering on the virtual machine fails with the error: Thin/TBZ disks cannot be opened in multiwriter mode). There is also a blog post by Cormac Hogan at VMware, who has described this behaviour.

That’s a screenshot from the failed cluster. Check out the type of the disk (Thick-Provision Lazy-Zeroed).

Patrick Terlisten/ vcloudnine.de/ Creative Commons CC0

You must use vmkfstools to extend a shared VMDK – but vmkfstools is also the solution if you have fallen into this pitfall: clone the VMDK with the option -d eagerzeroedthick.

Another solution, which was new to me, is to use Storage vMotion. You can migrate the “broken” VMDK to another datastore and change the disk format during the Storage vMotion. This solution is described in the “Notes” section of KB1033570.
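
The Storage vMotion variant can also be scripted with PowerCLI. A minimal sketch with placeholder names; with shared VMDKs in multiwriter mode you would typically do this with the cluster nodes powered off.

```powershell
# Sketch: migrate a VM to another datastore and convert its disks to
# Thick Provision Eager Zeroed during the Storage vMotion. Names are examples.
Move-VM -VM "CLUSTERNODE01" -Datastore (Get-Datastore "DATASTORE02") -DiskStorageFormat EagerZeroedThick
```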

Both ways will fix the problem. The result will be a Thick Provision Eager Zeroed VMDK, which will allow the VMs to be successfully powered on.

Veeam backups fail because of time differences

This posting is ~1 year old. You should keep this in mind. IT is a short-living business. This information might be outdated.

Last week I had an interesting incident at a customer. The customer reported that one of multiple Veeam backup jobs constantly failed.

jarmoluk/ pixabay.com/ Creative Commons CC0

The backup job included two VMs, and the backup of one of these VMs failed with this error:

They verified the credentials used for that job, but re-entering the password did not solve the issue. I then checked the Veeam backup logs located under %ProgramData%\Veeam\Backup (look for the Agent.Job_Name.Source.VM_Name.vmdk.log) and found VDDK Error 3014:

The user that was used to connect to the vCenter was an Active Directory account. The account was granted administrator privileges at the root of the vCenter. Switching from the AD account to Administrator@vsphere.local solved the issue. Next stop: vmware-sts-idmd.log on the vCenter Server Appliance. The error found in this log confirmed my theory that there was an issue with the authentication itself, not with the AD account.

To make a long story short: Time differences. The vCenter, the ESXi hosts, and some servers had the wrong time. vCenter and the ESXi hosts were using the domain controllers as their time source.

This is the ntpq output of the vCenter. You might notice the jitter values on the right side, both noted in milliseconds.

After some investigation, the root cause seemed to be a bad DCF77 receiver, which was connected to the domain controller hosting the PDC emulator role. The DCF77 receiver was connected using a USB-to-LAN converter. Instead of using a DCF77 receiver, the customer and I implemented an NTP hierarchy using a valid NTP source on the internet (pool.ntp.org).
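
If you have to touch the NTP configuration of the ESXi hosts anyway, PowerCLI is a quick way to do it for all hosts at once. A sketch, using the pool.ntp.org servers from this case.

```powershell
# Sketch: point all ESXi hosts to public NTP servers, set ntpd to start with the
# host, and restart the service. The pool.ntp.org servers are from this case.
Get-VMHost | ForEach-Object {
    Add-VMHostNtpServer -VMHost $_ -NtpServer "0.pool.ntp.org", "1.pool.ntp.org"
    Get-VMHostService -VMHost $_ | Where-Object { $_.Key -eq "ntpd" } | ForEach-Object {
        Set-VMHostService -HostService $_ -Policy On
        Restart-VMHostService -HostService $_ -Confirm:$false
    }
}
```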

Unsupported hardware family ‘vmx-06’

This posting is ~2 years old. You should keep this in mind. IT is a short-living business. This information might be outdated.

A customer of mine got an appliance from a software vendor. The appliance was delivered as a ZIP file with a VMDK, an MF, and an OVF file. Unfortunately, the appliance was created with VMware Workstation 6.0 with virtual machine hardware version 6, which is incompatible with VMware ESXi (Virtual machine hardware versions). During deployment, my customer got this error:

The OVF file includes a line with the VM hardware version.

If you change this line from vmx-06 to vmx-07, the hash of the OVF changes, and you will get an error during the deployment of the appliance because of the wrong file hash.

Solution

You have to change the SHA256 hash of the OVF, which is included in the MF file.

To create the new SHA256 hash, you can use the PowerShell cmdlet Get-FileHash.

Replace the hash and save the MF file. Then re-deploy the appliance.
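
This is how the whole thing can look in PowerShell. The file names are examples, and the manifest line format assumes the usual "SHA256(file.ovf)= <hash>" notation.

```powershell
# Sketch: compute the SHA256 hash of the edited OVF and write it into the manifest.
# File names are examples; the manifest line format "SHA256(file.ovf)= <hash>" is
# the usual OVF manifest notation.
$ovf  = ".\appliance.ovf"
$mf   = ".\appliance.mf"
$hash = (Get-FileHash -Path $ovf -Algorithm SHA256).Hash.ToLower()

(Get-Content $mf) -replace "SHA256\(appliance\.ovf\)=\s*\S+", "SHA256(appliance.ovf)= $hash" |
    Set-Content $mf
```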

Andreas Lesslhumer wrote a similar blog post in 2015:
“Unsupported hardware family vmx-10” during OVF import

Hell freezes over – VMware virtualization on Microsoft Azure

This posting is ~2 years old. You should keep this in mind. IT is a short-living business. This information might be outdated.
Update

On November 22, 2017, Ajay Patel (Senior Vice President, Product Development, Cloud Services, VMware) published a blog post in reaction to Microsoft's announcement (VMware – The Platform of Choice in the Cloud). These statements in particular are interesting:

No VMware-certified partner names have been mentioned nor have any partners collaborated with VMware in engineering this offering. This offering has been developed independent of VMware, and is neither certified nor supported by VMware.

and

Microsoft recognizing the leadership position of VMware’s offering and exploring support for VMware on Azure as a superior and necessary solution for customers over Hyper-V or native Azure Stack environments is understandable but, we do not believe this approach will offer customers a good solution to their hybrid or multi-cloud future.

Looks like VMware is not happy about Microsoft's announcement. And this blog post clearly states that VMware will not partner with Microsoft to bring the VMware virtualization stack to Azure.

I don't know if this is a wise decision by VMware. The hypervisor, their core product, is a commodity nowadays. We are talking about a bare-metal solution, so it's not different from what VMware built with AWS. It's more about how it is embedded in the cloud services and the cloud control plane. If you use VMware vSphere, Horizon and O365, the step of moving virtualization workloads to VMware on Azure is smaller than moving them to AWS.

On November 23, 2017, The Register published this interesting analysis: VMware refuses to support its wares running in Azure.

Original post

Yesterday, Microsoft announced new services to ease the migration from VMware to Microsoft Azure. Corey Sanders (Director of Compute, Azure) posted a blog post (Transforming your VMware environment with Microsoft Azure) and introduced three new Azure services.

Microsoft Azure

Microsoft/ microsoft.com

Azure Migrate

The free Azure Migrate service does not focus on single-server workloads. It is focused on multi-server applications and will help customers through three stages:

  • Discovery and assessment
  • Migration, and
  • Resource & Cost Optimization

Azure Migrate can discover your VMware-hosted applications on-premises, it can visualize dependencies between them, and it will help customers to create a suitable sizing for the Azure-hosted VMs. Azure Site Recovery (ASR) is used for the migration of workloads from the on-premises VMware infrastructure to Microsoft Azure. At the end, when your applications are running on Microsoft Azure, the free Azure Cost Management service helps you to forecast, track, and optimize your spending.

Integrate VMware workloads with Azure services

Many of the current available Azure services can be used with your on-premises VMware infrastructure, without the need to migrate workloads to Microsoft Azure. This includes Azure Backup, Azure Site Recovery, Azure Log Analytics or managing Microsoft Azure resources with VMware vRealize Automation.

But the real game-changer seems to be this:

Host VMware infrastructure with VMware virtualization on Azure

Bam! Microsoft announces the preview of VMware vSphere on Microsoft Azure. It will run on bare metal on Azure hardware, alongside other Azure services. General availability is expected in 2018.

My two cents

This is the second big announcement about VMware stuff on Azure (don't forget VMware Horizon Cloud on Microsoft Azure). And although I believe that this is something Microsoft wants to offer to get more customers on Azure, this can be a great chance for VMware. VMware customers don't have to go to Amazon when they want to host VMware at a major public cloud provider, especially if they are already Microsoft Azure/ O365 customers. This is a pretty bold move by Microsoft and similar to VMware Cloud on AWS. I'm curious to get more details about this.

Creating console screenshots with Get-ScreenshotFromVM.ps1

This posting is ~3 years old. You should keep this in mind. IT is a short living business. This information might be outdated.

Today, I had a very interesting discussion. As part of an ongoing troubleshooting process, console screenshots of virtual machines had to be created.

The colleagues who were working on the problem had already found a PowerCLI script that was able to create screenshots using the Managed Object Reference (MoRef). But unfortunately, all they got were black screens and/ or login prompts. The latter were the reason why they were unable to run the script unattended. They used the Get-VMScreenshot script, which was written by Martin Pugh.

I had some time to take a look at his script and I created my own script, which is based on his idea and some parts of his code.

This file is also available on GitHub.
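
For context, the core of such a script boils down to the CreateScreenshot method of the vSphere API. This stripped-down sketch (not my script, just the underlying idea) triggers a screenshot and returns the datastore path of the resulting PNG; the VM name is an example.

```powershell
# Sketch: trigger a console screenshot via the vSphere API. The VM name is an
# example; the screenshot is stored as a PNG in the VM's folder on the datastore,
# and the method returns the datastore path of that file.
$vm = Get-VM -Name "DC01"
$vm.ExtensionData.CreateScreenshot()
```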

One important note: if you want to take console screenshots of VMs, please make sure that display power saving settings are disabled! Windows VMs show a black screen after some minutes. Please disable this using the power options, or better, using a GPO. Otherwise you will capture a black screen!
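
Inside the guest, the relevant timeouts can be set to 0 (never) with powercfg, either manually or as part of a GPO/startup script. A small sketch, to be run in the guest:

```powershell
# Sketch: disable the display timeout inside the Windows guest so the console
# does not turn black after a few minutes (run in an elevated session).
powercfg.exe /change monitor-timeout-ac 0
powercfg.exe /change monitor-timeout-dc 0
```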

Why I moved from NFS to vSAN… and why it went wrong

This posting is ~3 years old. You should keep this in mind. IT is a short living business. This information might be outdated.

I wanted to retire my Synology DS414slim, and switch completely to vSAN. Okay, no big deal. Many folks use vSAN in their lab. But I’d like to explain why I moved to vSAN and why this move failed. I think some of my thoughts are also applicable for customer environments.

So far, I used a Synology DS414slim with three Crucial M550 480 GB SSDs (RAID 5) as my main lab storage. The Synology was connected with two 1 GbE uplinks (LAG) to my network, and each host was connected with 4x 1 GbE uplinks (single distributed vSwitch). The Synology was okay from the capacity perspective, but the performance was horrible. RAID 5, SSDs and NFS were not the best team, or to be precise, the CPU of the Synology was the main bottleneck.

nas_ds414slim

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

1.2 GHz is not enough if you want to use NFS or iSCSI. I never got more than 60 MB/s (sequential). The random IO performance was okay, but as soon as the IO increased, the latencies went through the roof – not because the SSDs were too slow, but because the CPU of the Synology was not powerful enough to handle the NFS requests.

Workaround: Add more flash storage

The workaround for the poor random IO performance was adding more flash storage. This time, the flash storage was added to the hosts. I used PernixData FVP to boost my lab. FVP was a quite cool product (unfortunately, “was” is the operative word). PernixData granted me, as a PernixPro, some licenses for my lab.

End of an era

The acquisition of PernixData by Nutanix, the missing support for vSphere 6.5, and the end of availability of all PernixData products led to the decision to remove PernixData FVP from my lab. Without PernixData FVP, my lab was again a slow train crawling up a hill: four HPE ProLiant servers with enough CPU (40 cores) and memory resources (384 GB RAM) were tied down by slow IO.

Redistribution of resources

I had

  • three 480 GB SSDs, and
  • three 40 GB SSDs

in stock. The 40 GB SSDs were too small and too slow, so I replaced them with 120 GB SSDs. I was able to equip three of my four hosts with SSDs. Three hosts with flash storage were enough to try VMware vSAN.

Fortunately, not every host has to contribute capacity to a vSAN cluster. Hosts can also just consume storage from a vSAN cluster. With this in mind, vSAN appeared to be a way out of my IO dilemma. In addition, using the 480 GB SSDs as the capacity tier, a vSAN all-flash config was possible.

Migration

It took me a little time to move around VMs to temporary locations, while keeping my DC and my VCSA available. I had to remove my datastore on the Synology to free up the 480 GB SSDs. The necessary vSAN licenses were granted by VMware (vExpert licenses).

The creation of the vSAN cluster itself was easy. Wiping the partitions from the disks was easy, too – you can use the vSphere Web Client to do this.

vsan_wipe_partitions

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The initial performance was quite good, much better than expected and much better than the NFS performance of the old Synology NAS. I enabled deduplication and compression, but as soon as I moved VMs to the vSAN datastore, the throughput dropped and latencies went through the roof. It was totally unusable. Furthermore, I got health alarms:

vsan_congestion_error

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

As the load increased, the errors became more severe.

vsan_performance_error

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0


I was able to solve this with the help of a blog post by Cormac Hogan (VSAN 6.1 New Feature – Handling of Problematic Disks). But even without compression and deduplication, the performance was not as expected and most of the time too low to work with. At this point, I got an idea of what was causing my vSAN problems.

Do not use consumer-grade hardware with vSAN

To be honest: the budget was the problem. I had to use consumer-grade SSDs.

This is a screenshot from the vSAN Observer. esx1 to esx3 are equipped with SSDs, esx4 is only consuming storage from the vSAN cluster.

vsan_observer_perf

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Red is not the color to highlight good things…

An explanation attempt

This blog post by Duncan Epping (Why Queue Depth matters!) is a bit older, but still valid in my case. The controller I use (HPE Smart Array P410i) has a deep queue (1011), and the RAID device has a queue length of 1024, but the SATA SSDs have only a queue length of 32. Here's the disk adapter and disk device view of ESXTOP.
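
If you don't want to dig through ESXTOP, the same queue depth information can be pulled with esxcli via PowerCLI. A sketch; the host name is an example, and I simply dump all device properties instead of guessing the exact field names.

```powershell
# Sketch: dump the storage devices of a host, including their maximum queue depth,
# via esxcli instead of ESXTOP. The host name is an example.
$esxcli = Get-EsxCli -VMHost esx1.lab.local -V2
$esxcli.storage.core.device.list.Invoke() | Format-List *
```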

The consumer-grade SSDs drowned in IOs, unable to handle parallel read and write operations. There's not much that I can do about it. Currently, there are two options:

  • Replacing the SSDs with devices that have a deeper queue depth
  • Replacing the Synology NAS with a more powerful NAS and moving back to NFS

I don’t know which way I will go. To get this clear:

  • This is my lab, not a customer environment
  • It is not a vSAN related problem
  • It is because of consumer-grade hardware

Do not try this in production, kids. Go vSAN, but please use the right hardware.