Tag Archives: horizon view

VMware EUC Access Point appliance – Name resolution not working after deployment

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

As part of a project, I had to deploy a VMware EUC Access Point appliance. Nothing fancy, because the awesome VMware Access Point Deployment Utility makes the deployment easy.

Unfortunately, the deployed Access Point appliance was not working as expected. When I tried to access my Horizon View infrastructure behind the Access Point appliance, I got an HTTP 504 error. The REST API interface was working, and I was able to rule out invalid certificates, routing, and firewall policies. I re-deployed the appliance using the IP address of the connection server instead of the FQDN, and this worked… I checked the name resolution with nslookup, and it failed. So that was probably the problem.

One per line

To make a long story short: the DNS servers I entered in the VMware Access Point Deployment Utility were added to /etc/resolv.conf as a single line:

nameserver 192.168.92.11,192.168.92.12

This is wrong, even though the VMware Access Point Deployment Utility suggests otherwise.

[Image: euc_deployment_dns – Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0]

There must be a separate “nameserver” entry for each DNS server:

nameserver 192.168.92.11
nameserver 192.168.92.12

You can easily fix this after the deployment: enter only one DNS server during the deployment, and add the second DNS server to /etc/resolv.conf afterwards.
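If you want to verify the resolver configuration of the appliance quickly, a few lines of Python can do the check. This is only a sketch, and it assumes the standard Linux /etc/resolv.conf path:

# Sanity check for /etc/resolv.conf: flag "nameserver" lines that
# contain a comma, because the resolver expects one address per line.
with open("/etc/resolv.conf") as resolv:
    for line in resolv:
        entry = line.strip()
        if entry.startswith("nameserver"):
            status = "BROKEN (one address per line!)" if "," in entry else "OK"
            print(status, entry)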

I would like to highlight that Chris Halstead mentioned this behaviour a year ago in his blog post “VMware Access Point Deployment Utility“. Chris is the author of the Deployment Utility.

VMware Horizon View space reclamation fails due to activated CBT

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

Nearly two weeks ago, I wrote a blog post (VMware Horizon View space reclamation fails) about failing space reclamation on automated desktop pools with linked clones. Today I am writing about the same error, this time caused by a different problem. In both cases, the error is logged in the View Administrator and the vSphere (Web) Client. In the View Administrator, the following error is shown:

“Failed to perform space reclamation on machine COMPUTER NAME in Pool POOL NAME”

[Image: view_admin_reclaim_error – Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0]

The vSphere Web Client shows a different error message:

“A general system error occurred: Wipe Disk failed: Failed to complete wipe operation.”

[Image: web_client_wipe_disk_error – Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0]

If an issue with the permissions of the initiating user can be ruled out, you should check whether Changed Block Tracking (CBT) is enabled for the parent VM and the linked clones. The easiest way to check this is the vSphere Web Client.

[Image: web_client_adv_settings – Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0]

Highlight the VM > Manage > VM Options > Advanced settings > Configuration parameters.

Check the configuration parameters for:

ctkEnabled = true
scsi0:0.ctkEnabled = true

To solve this issue, power off the parent VM, remove all snapshots and change the following advanced parameters:

ctkEnabled = false
ctkDisallowed = true
scsi0:0.ctkEnabled = false

Then take a new snapshot and recompose the desktop pool.
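If you have to check more than one parent VM, you don’t have to click through the Web Client every time. Here is a minimal pyVmomi sketch that reads the CBT-related parameters from a VM’s extraConfig. The hostname, the credentials, and the VM name are placeholders from my lab, so treat it as a starting point rather than a finished script:

# Minimal pyVmomi sketch: print the CBT-related extraConfig
# parameters of a single VM. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    if vm.name == "win81-master":  # placeholder parent VM name
        for opt in vm.config.extraConfig:
            # matches "ctkEnabled" and "scsi0:0.ctkEnabled"
            if opt.key.endswith("ctkEnabled") or opt.key == "ctkDisallowed":
                print(opt.key, "=", opt.value)
view.Destroy()
Disconnect(si)

The same session could also write the corrected values back with vm.ReconfigVM_Task() and a vim.vm.ConfigSpec that carries the three OptionValue entries listed above. But remember: power off the parent VM and remove all snapshots first.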

VMware has documented this in VMware KB2032214 (Deploying or recomposing View desktops fails when the parent virtual machine has CBT enabled). Although the title of the article doesn’t mention failing space reclamation, it is listed as one possible symptom.

VMware Horizon View space reclamation fails

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

A customer notified me that he had observed an issue with the space reclamation on two automated desktop pools with linked clones. His environment is based on Horizon View 6.2.1 and vSphere 5.5 U3. The error was logged in the View Administrator and the vSphere (Web) Client. In the View Administrator, the following error was visible:

“Failed to perform space reclamation on machine COMPUTER NAME in Pool POOL NAME”

[Image: view_admin_reclaim_error – Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0]

In the vSphere Web Client, the error message was different:

“A general system error occurred: Wipe Disk failed: Failed to complete wipe operation.”

[Image: web_client_wipe_disk_error – Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0]

As you can see, a service account was created for View management tasks. A custom user role was also created, and this role was assigned to the service account. The necessary privileges were taken from the Horizon View documentation (Privileges Required for the vCenter Server User). There is a nice table that lists all the necessary privileges. All the necessary privileges? No, one important privilege was missing.

I tried to reproduce this issue in my lab. Unfortunately, the space reclamation was working in my lab. To be honest: I’m not using a specific service account and a custom user role in my lab. I’m using a domain admin account that has the “Administrator” role assigned in vCenter. I searched a bit in the VMware Knowledge Base, but I was unable to find a suitable KB entry for my problem. I re-created the View manager user role in my lab and assigned it to the “Administrator” account. After that, I was able to reproduce the problem. So it was clear that the role was the problem. I checked the privileges again and found an interesting privilege that was not listed in the VMware documentation.

[Image: web_client_composer_missing_right – Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0]

You can find this privilege under: All Privileges > Virtual Machine > Interaction > Perform wipe or shrink operations.
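If you want to check which of your vCenter roles already contain this privilege, you can query the authorization manager. The following pyVmomi sketch is only meant as an illustration (the connection details are placeholders, and it filters on the substring “Wipe”, so you don’t have to know the exact privilege ID):

# List every vCenter role and its wipe/shrink-related privileges.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
authz = si.RetrieveContent().authorizationManager
for role in authz.roleList:
    wipe_privs = [p for p in role.privilege if "Wipe" in p]
    print(role.name, "->", wipe_privs if wipe_privs else "privilege missing")
Disconnect(si)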

Setting this privilege solved the observed issue. With this knowledge in mind, I found a blog post from Andy Barnes that describes this issue and the solution.

Guest customization fails after upgrade to VMware vSphere 6

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

VMware vSphere 6 is now a year old, and it was time to update my lab to vSphere 6. The update went smoothly, and everything worked as expected. Some days later, I updated the master VM of a small automated desktop pool. I’m using VMware Horizon 6.2.1 in my lab to deploy a small number of Windows 8.1 VMs for tests, administration etc. The recompose of the pool failed during the guest customization.

[Image: view_error_decrypt_password – Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0]

I checked the customization specification immediately and got an error in the vSphere C# client.

[Image: vcsa_error_decrypt_password – Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0]

Interestingly, I got no error in the vSphere Web Client:

[Image: vcsa_error_decrypt_password_web_client – Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0]

After re-entering the Administrator password, the customization specification was usable again. No errors so far.

A quick search in the VMware KB led me to the article “Virtual machines with customizations fail to deploy when using Custom SSL Certificates (1019893)“. But this article doesn’t apply to vCenter 6.0. For the record: I’m using CA-signed certificates in my environment. It seems to be a good idea to re-enter the passwords in customization specifications after a vCenter migration/ upgrade (5.x to 6.x or from VCSA 5.x to 6.x).
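By the way: if you manage more than a handful of customization specifications, a short pyVmomi loop can list them all, so you know which ones to open and re-save. As before, the connection details are placeholders from my lab:

# List all guest customization specifications known to vCenter.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
for spec in si.RetrieveContent().customizationSpecManager.info:
    print(spec.name, spec.type, spec.lastUpdateTime)
Disconnect(si)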

Considerations when using Microsoft NLB with VMware Horizon View

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

A load balancer is an integral component of (nearly) every VMware Horizon View design. Not only to distribute the connections among a number of connection or security servers, but also to provide high availability in case of a connection or security server failure. Without a load balancer, connection attempts will fail if a connection or security server isn’t available. Craig Kilborn wrote an excellent article about the different possible load balancing designs. Craig highlighted Microsoft Network Load Balancing (NLB) as one of the possible ways to implement load balancing. Jason Langer also mentioned Microsoft NLB in his article “The Good, The Bad, and The Ugly of VMware View Load Balancing“, which is worth reading.

Why Microsoft NLB?

Why should I use Microsoft NLB to load balance connections in my VMware Horizon View environment? It’s a question of requirements. If you already have a (hopefully redundant) load balancer, then there is no reason to use Microsoft NLB for load balancing. Really no reason? A single load balancer is a single point of failure, and therefore you should avoid it. Instead of using a single load balancer, you could use Microsoft NLB. Microsoft NLB is free, because it’s part of Windows Server. Two or more servers can form a highly available load balancer, and you can install the NLB feature directly onto your Horizon View connection or security servers.

How does it work?

Microsoft Windows NLB has been part of the operating system since Windows NT Server. Two or more Windows servers can form a cluster with one or more virtual IP addresses. Microsoft NLB knows three different operating modes:

  • Unicast
  • Multicast
  • Multicast (IGMP)

Two years ago, I wrote an article about why unicast mode sucks: Flooded network due HP Networking Switches & Windows NLB. This leads to a simple recommendation: always use multicast (IGMP) mode!

Nearly all switches support IGMP snooping. If yours don’t, spend some money on new switches. Let me be clear: if your switches support IGMP snooping, enable it for the VLAN to which the cluster nodes are connected. There is then no need to configure static multicast MAC addresses or dedicated VLANs to avoid flooding.

If you select the multicast (IGMP) mode, each cluster node will periodically send an IGMP join message to the multicast address of the group. This address is always 239.255.x.y, where x and y correspond to the last two octets of the virtual IP address (a short example of this derivation follows the list below). Upon receiving these multicast group join messages, the switch can send multicasts only to the ports of the group members. This avoids network flooding. Multicast (IGMP) also simplifies the configuration of a Microsoft NLB cluster:

  • Enable IGMP Snooping for the VLAN of the cluster nodes
  • Enable the NLB feature on each server that should participate in the cluster
  • Create a cluster with multicast (IGMP) mode on the first node
  • Join the other nodes
  • That’s it!
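The derivation of the multicast group address mentioned above is simple enough to show in a few lines of Python (the virtual IP is just an example):

# Derive the NLB multicast (IGMP) group address from the cluster VIP:
# 239.255.x.y, where x and y are the last two octets of the virtual IP.
def nlb_igmp_group(vip: str) -> str:
    octets = vip.split(".")
    return "239.255.{}.{}".format(octets[2], octets[3])

print(nlb_igmp_group("192.168.92.40"))  # prints 239.255.92.40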

The installation process of a Microsoft NLB cluster is not particularly complex, and once the NLB cluster is running, there is not much maintenance to do. As already mentioned, you can put the NLB feature on each connection or security server.

What are the caveats?

Sounds pretty good, right? But there are some caveats when using Microsoft NLB. Microsoft NLB does not support sticky connections, and it does not offer service awareness. Why is this a problem? Let’s assume that you have enabled “HTTP(S) Secure Tunnel”, “PCoIP Secure Gateway” and “Blast Secure Gateway”.

[Image: connection_server_settings – Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0]

In this case, all connections are proxied through the connection or security servers.

The initial connection from the Horizon View client to the connection or security server is used for authentication and for the selection of the desired desktop pool or application. This is an HTTPS connection. At this point, the user has no connection to a pool or an application. When the user connects to a desktop pool or application, the client opens a second HTTPS connection, which is used to provide a secure tunnel for RDP. Because it’s the same protocol, this connection will be directed to the same connection or security server as before. The same applies to Blast connections. But if the user connects to a pool via PCoIP, the View client opens a new connection with destination port 4172. If the PCoIP External URL refers to the load-balanced URL, this connection can be directed to another connection or security server. If this is the case, the PCoIP connection will fail: the source IP address might be the same, but a different destination port is used. VMware describes this behaviour in KB1036376 (Unable to connect to the PCoIP Secure Gateway when using Microsoft NLB Clustering).

Another big caveat is the missing service awareness. Microsoft NLB does not check whether the load-balanced service is available. If the service fails on one node, Microsoft NLB will not stop directing requests to that node, and the affected users’ connection requests will fail.

Still the ugly?

So is Microsoft NLB still the ugly option? I don’t think so. Especially for small deployments, where the customer does not have a load balancer, Microsoft NLB can be a good option. If you want to load balance connection servers, Microsoft NLB can do a great job. In the case of load balancing security servers, you should take a look at KB1036376, because you might need at least 3 public IP addresses for NAT. The missing service awareness can be a problem, but you can work around it with responsive monitoring.
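Such a monitoring workaround can be as simple as an external probe that checks whether each cluster node still answers on TCP port 443, so that you can drainstop a node as soon as its View service dies. Here is a minimal sketch with placeholder node addresses; note that a successful TCP connect only proves that there is a listener, so a real-world check should go deeper:

# Probe each NLB node directly (not the VIP) on TCP 443 and report
# nodes that no longer answer. The node IPs are placeholders.
import socket

NODES = ["192.168.92.21", "192.168.92.22"]

for node in NODES:
    try:
        with socket.create_connection((node, 443), timeout=3):
            print(node, "OK")
    except OSError:
        print(node, "FAILED - drainstop this node")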

In the end, it is a question of requirements. If you plan to implement other services that might require a load balancer, like Microsoft Exchange, you should take a look at a redundant, highly available load balancer appliance.

Screen resolution scaling has stopped working after Horizon View agent update

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

Another inconvenience that I noticed during the update process from VMware Horizon View 6.1.1 to 6.2 was that the automatic screen resizing stopped working. When I connected to a desktop pool with the VMware Horizon client, I only got the screen resolution of the VM (the resolution that is used when connecting to the VM with the vSphere console), not 1920×1200 as expected. This issue only occurred with PCoIP, not with RDP. I had this issue with a static desktop and a dynamic desktop pool, and it occurred after updating the Horizon View agent. The resolution scaling worked with a Windows 2012 R2 RDS host when I connected to it with PCoIP.

VMware KB1018158 (Configuring PCoIP for use with View Manager) did not solve the problem. I checked the VMX version, the video RAM configuration, etc. Nothing had changed; everything was configured as expected. At this point, it was clear to me that this must be an issue with the Horizon View agent. I took some snapshots and tried to reinstall the Horizon View agent: I removed the Horizon View agent and the VMware Tools from one of my static desktops, and after a reboot, I installed the VMware Tools and then the Horizon agent. To my surprise, this first attempt solved the problem. I tried the same with my second static desktop pool VM and with the master VM of my dynamic desktop pool (don’t forget to recompose the VMs…). This workaround fixed the problem in each case.

I don’t know if this is a bug. I haven’t found any hints in the VMware Community forum or blogs. Maybe someone knows the answer.

VMware Horizon View agent update on RDS host fails with “Internal Error 25030”

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

I’m running a small VMware Horizon View environment in my lab. Nothing fancy, but all you need to show what Horizon View can do for you. This environment includes a Windows Server 2012 R2 RDS host. During the update process from Horizon View 6.1.1 to 6.2, I had to update the View agent on this RDS host. The update installation failed with an “Internal Error 25030”, followed by a rollback. Fortunately, I had a snapshot, so I went back to the previous state and tried the update again. This attempt also went awry.

To make a long story short: Read the fscking release notes! This quote is taken from the Horizon View 6.2 release notes:

When you upgrade View Agent 6.1.1 to View Agent 6.2 on an RDS host running on Windows Server 2012 or 2012 R2, the upgrade fails with an “Internal Error 25030” message.
Workaround: Uninstall View Agent 6.1.1, restart the RDS host, and install View Agent 6.2.

And this is not the first time that this error has occurred. I found this quote in the Horizon View 6.1.1 release notes:

When you upgrade View Agent 6.1 to View Agent 6.1.1 on an RDS host running on Windows Server 2012 or 2012 R2, the upgrade fails with an “Internal Error 25030” message.
Workaround: Uninstall View Agent 6.1, restart the RDS host, and install View Agent 6.1.1

If you take a closer look at these two statements, you might notice some similarities… But I do not want to be spiteful. The workaround did the trick: simply uninstall the View agent (if it’s still installed after the rollback… that was not the case for me), reboot, and reinstall the View agent.