Tag Archives: horizon view

Workaround for broken Windows 10 Start Menus with floating desktops

Last month, I wrote about a very annoying issue that I discovered during a Windows 10 VDI deployment: Roaming of the AppData\Local folder breaks the Start Menu of Windows 10 Enterprise (Roaming of AppData\Local breaks Windows 10 Start Menu). During my research, I stumbled upon dozens of threads about this issue.

Today, after hours and hours of testing, troubleshooting and reading, I might have found a solution.

The environment

Currently I don’t know if this is a workaround, a weird hack, or no solution at all. Maybe it was luck that none of my 2074203423 logins at different linked clones resulted in a broken Start Menu. The customer is running:

  • Horizon View 7.1
  • Windows 10 Enterprise N LTSB 2016 (1607)
  • View Agent 7.1 with enabled Persona Management

Searching for a solution

During my tests, I tried to discover WHY the TileDataLayer breaks. As I wrote in my earlier blog post, it is sufficient to delete the TileDataLayer folder. The folder will be recreated during the next logon, and the Start Menu is working again. Today, I added path after path to the “Files and folders excluded from roaming” GPO setting, and at some point I had a working Start Menu. With this in mind, I did some research and stumbled upon a VMware Communities thread (Vmware Horizon View 7.0.3 – Linked clone – Persistent mode – Persona management – Windows 10 (1607) – -> Windows 10 Start Menu doesn’t work).

User oliober did the same: he roamed only a couple of folders, one of them being the TileDataLayer folder, but not the whole AppData\Local folder.

The “solution”

To make a long story short: you have to enable the roaming of AppData\Local, then exclude AppData\Local from roaming, and finally add only the necessary folders (such as TileDataLayer) to the exceptions list of that exclusion. Sounds funny, but it seems to work.
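To illustrate the effect of this configuration, here is a minimal Python sketch of the include/exclude/exception logic. The folder lists are just examples, not a complete or authoritative Persona Management configuration.

```python
# Minimal sketch of the "exception to the exclusion" logic described above.
# The lists are examples only, not a complete Persona Management configuration.

EXCLUDED = [r"AppData\Local"]                  # "Files and folders excluded from roaming"
EXCEPTIONS = [r"AppData\Local\TileDataLayer"]  # exceptions to that exclusion

def in_list(path, folders):
    return any(path == f or path.startswith(f + "\\") for f in folders)

def is_roamed(path):
    """Return True if a profile path would be roamed with this configuration."""
    if in_list(path, EXCEPTIONS):
        return True    # the exception wins: this folder is roamed again
    if in_list(path, EXCLUDED):
        return False   # excluded from roaming
    return True        # everything else roams (roaming of AppData\Local is enabled)

for p in (r"AppData\Local\TileDataLayer\Database",
          r"AppData\Local\Temp",
          r"AppData\Roaming\Microsoft\Windows\Start Menu"):
    print(p, "->", "roamed" if is_roamed(p) else "not roamed")
```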

Feedback is welcome!

I am very interested in feedback. It would be great if you have the chance to verify this behaviour. Please leave a comment with your results.

As I already said: I don’t know if this is a workaround, a hack, a solution, or no solution at all. But for now, it seems to work. Microsoft deprecated the TileDataLayer in Windows 10 1703, so for this new Windows 10 build, we have to find another working solution. The “solution” described above only works with 1607. But if you are using the Long Term Servicing Branch, it will work for the next 10 years. ;)

Some thoughts about using Windows Server 2012 R2 instead of Windows 10 for VDI

Disclaimer: The information from this blog post is provided on an “AS IS” basis, without warranties, both express and implied.

Last week, I had an interesting discussion with a customer. Some months back, the customer decided to kick off a PoC for a VMware Horizon View based virtual desktop infrastructure (VDI). He is currently using fat clients with Windows 8.1, and the new environment should run on Windows 10 Enterprise. Last week, we discussed the idea of using Windows Server 2012 R2 as the desktop OS.

Horizon View with Windows Server as desktop OS?

My customer has planned to use VMware Horizon View. The latest release is VMware Horizon View 7.2. VMware KB article 2150295 (Supported Guest Operating Systems for Horizon Agent and Remote Experience) lists all supported (non-Windows 10) Microsoft operating systems for the different Horizon View releases. This article shows that Windows Server 2012 R2 (Standard and Datacenter) is supported by all Horizon View releases, starting with Horizon View 7.0. The installation of a View Agent is supported, and you can create full- and linked-clone desktop pools. But there is also another important KB article: 2150305 (Feature Support Matrix for Horizon Agent). This article lists all available features, and whether they are compatible with a specific OS or not. According to this article, the

  • Windows Media MMR,
  • VMware Client IP Transparency, and the
  • Horizon Virtualization Pack for Skype for Business

are not supported with Windows Server 2012 R2 and 2016.

From the support perspective, it’s safe to use Windows Server 2012 R2 or 2016 as a desktop OS for a VMware Horizon View based virtual desktop infrastructure.

Licensing

Licensing Microsoft Windows for VDI is a PITA. It’s all about the virtual desktop access rights, which can be acquired in two different ways:

  • Software Assurance (SA), or
  • Windows Virtual Desktop Access (VDA)

SA and VDA are available per-user and per-device.


You need SA or VDA for each accessing device or user. There is no need for additional licenses for your virtual desktops! You get the right to install Windows 10 Enterprise on your virtual desktops. This includes the LTSB (Long Term Servicing Branch). LTSB offers updates without delivery of new features for the duration of mainstream support (5 years) and extended support (5 years). Another side effect is that LTSB does not include most of the annoying Windows apps.

Do yourself a favor, and do not try to set up a VDI with Windows 10 Professional…

Service providers that offer Desktop-as-a-Service (DaaS) are explicitly excluded from this licensing! They must license their stuff according to Microsoft’s Services Provider License Agreement (SPLA).

How do I have to license Windows Server 2012 R2 if I want to use Windows Server as the desktop OS? Windows Server Datacenter licensing allows you to run an unlimited number of server VMs on the licensed hardware. To be clear: Windows Server is licensed per physical server, and there is nothing like license mobility! To license the access to the server, you need two different licenses:

  • Windows Server CAL (device or user), and
  • Remote Desktop Services (RDS) CAL (device or user)

The Windows Server CAL is needed for any access to a Microsoft Windows Server from a client, regardless of what service is used (even for DHCP). The RDS CAL must be assigned to any user or device that directly or indirectly interacts with the Windows Server desktop, or that uses a remote desktop access technology (RDS, PCoIP, Blast Extreme etc.) to access the Windows Server desktop.

With this license setup, you have licensed the Windows Server VM itself, and also the access to this VM. There is no need to purchase SA or VDA.

Do the math

With this in mind, you have to do the math. Compare the licensing costs for Windows 10 and Windows Server 2012 R2/2016 in your specific situation. Set up a PoC to verify your requirements, and the support of your software on Windows Server.
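A back-of-the-envelope calculation can be a starting point. All numbers below are hypothetical placeholders; use the prices from your own agreement, and make sure subscription-based and perpetual licenses are compared over the same time frame.

```python
# Back-of-the-envelope comparison of the two licensing approaches described above.
# All prices are hypothetical placeholders - replace them with your own numbers
# and calculate both options over the same time frame.

users = 200
hosts = 4    # physical hosts running the desktop VMs

# Option 1: Windows 10 Enterprise desktops, access rights via per-user VDA
vda_per_user = 100
option_windows10 = users * vda_per_user

# Option 2: Windows Server as desktop OS
datacenter_per_host = 5000   # covers an unlimited number of VMs per host
server_cal_per_user = 30
rds_cal_per_user = 100
option_windows_server = (hosts * datacenter_per_host
                         + users * (server_cal_per_user + rds_cal_per_user))

print("Windows 10 + VDA:                        ", option_windows10)
print("Windows Server DC + Server CAL + RDS CAL:", option_windows_server)
```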

Windows Server can be an interesting alternative to Windows 10. Maybe some of you who already use it with Horizon View have time to add some comments to this blog post. It would be nice to get some feedback on this topic.

Roaming of AppData\Local breaks Windows 10 Start Menu

One of my customers has started a project to create a Windows 10 Enterprise (LTSB 2016) master for their VMware Horizon View environment. Besides the fact (okay, it is more of a personal feeling) that Windows 10 is a real PITA for VDI, I noticed an interesting issue during the tests.

The issue

For convenience, I adopted some settings of the current Persona Management GPO for Windows 7 for the new Windows 10 environment. During the tests, the customer and I noticed a strange behaviour: after login, the Start Menu wouldn’t open. The only solution was to log off and delete the persona folder (most folders are redirected using native folder redirection, not the redirection feature of the View Persona Management). While debugging this issue, I found an error in the event log.

If you google this error, you will find many, many threads about it. Most solutions state that you have to delete the profile due to wrong permissions on profile folders and/or registry hives. I used Microsoft’s Process Monitor to verify this, but I was unable to confirm it. After further investigation, I found hints that the TileDataLayer database could be the problem. The database is located in AppData\Local\TileDataLayer\Database and stores the installed apps, programs, and tiles for the Start Menu. AFAIK it also includes the Start Menu layout.

The database is part of the local portion of the profile. A quick check proved that it’s sufficient to delete the TileDataLayer folder. It will be recreated during the next logon.
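If you have to clean up affected profiles more than once, the deletion is easy to script. A minimal sketch, assuming it runs in the context of the affected user and only while the folder is not in use (e.g. right after logoff or against a copy in the persona repository):

```python
# Remove the TileDataLayer folder so it gets recreated at the next logon.
# Run this only while the folder is not in use by the Start Menu.
import os
import shutil
from pathlib import Path

tile_data = Path(os.environ["LOCALAPPDATA"]) / "TileDataLayer"

if tile_data.exists():
    shutil.rmtree(tile_data)
    print(f"Removed {tile_data} - it will be recreated at the next logon.")
else:
    print(f"{tile_data} does not exist, nothing to do.")
```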

The solution

It’s simple: don’t roam AppData\Local. It should not be necessary to roam the local part of a user’s profile. The View Persona Management offers an option to roam the local part of the profile, and you can configure this behaviour with a GPO setting.

You can find this setting under Computer Configuration > Administrative Templates > VMware View Agent Configuration > Persona Management > Roaming & Synchronization

I was able to reproduce the observed behavior in my lab with Windows 10 Enterprise (LTSB 2016) and Horizon View 7.0.3. Because of this, I tend to recommend not to roam AppData\Local.

Horizon View: Server certificate does not match the external url

Certificates are always fun… or should I say PITA? Whatever… During a small Horizon View PoC, I noticed an error message for the View Connection Server.

That’s right, Mr. Connection Server. The certificate subject name does not match the server’s external URL, as this screenshot clearly shows.

But both settings are unused, because a VMware Access Point appliance is in place. If I remove the certificate that was issued by a public certificate authority, I get an error message because of an invalid, self-signed certificate.

I want to use the certificate on the Horizon View Connection Server, but I also want to get rid of the error message caused by the wrong subject name. The customer uses split DNS, so he is using the same URL internally and externally, and the certificate uses the external URL as subject name.
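If you want to double-check which names the certificate presented by the Connection Server actually contains, a small sketch like the following can help. The hostname and the external URL are placeholders for your environment.

```python
# Fetch the certificate from the Connection Server and compare its subject
# alternative names with the hostname of the external URL.
import socket
import ssl
from urllib.parse import urlparse

CONNECTION_SERVER = "view-cs01.example.com"   # placeholder
EXTERNAL_URL = "https://view.example.com"     # placeholder

expected_host = urlparse(EXTERNAL_URL).hostname

context = ssl.create_default_context()
context.check_hostname = False            # we compare the names ourselves
context.verify_mode = ssl.CERT_REQUIRED   # the cert must still chain to a trusted CA

with socket.create_connection((CONNECTION_SERVER, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=CONNECTION_SERVER) as tls:
        cert = tls.getpeercert()

sans = [value for key, value in cert.get("subjectAltName", ()) if key == "DNS"]
print("Subject Alternative Names:", sans)
print("External URL hostname:    ", expected_host)
print("Match:", expected_host in sans)
```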

Change the URLs

The solution is easy:

  1. Enable the checkboxes for the Secure Tunnel connection and the Blast Secure Gateway
  2. Change the hostname to the name that matches the subject name of the certificate
  3. Uncheck the checkboxes again, and apply the settings

After a couple of seconds, and a refresh of the dashboard, the error for the Connection Server should be gone.

VMware EUC Access Point appliance – Name resolution not working after deployment

As part of a project, I had to deploy a VMware EUC Access Point appliance. Nothing fancy, because the awesome VMware Access Point Deployment Utility makes it easy to deploy.

Unfortunately, the deployed Access Point appliance was not working as expected. When I tried to access my Horizon View infrastructure behind the Access Point appliance, I got an HTTP 504 error. The REST API interface was working, and I was able to rule out invalid certificates, routing issues, and firewall policies. I re-deployed the appliance using the IP address of the connection server instead of the FQDN, and this worked. I then checked the name resolution with nslookup, and it failed. So that was probably the problem.

One per line

To make a long story short: the DNS servers I entered in the VMware Access Point Deployment Utility were added in a single line to /etc/resolv.conf.

This is wrong, even though the VMware Access Point Deployment Utility claims otherwise.

There must be a single “nameserver” entry for each DNS server.

You can easily change this after the deployment. Add only one DNS server during the deployment, and then add the second DNS server after the deployment.
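A quick way to verify the result on the appliance is to parse /etc/resolv.conf and flag every nameserver line that carries more than one address, for example:

```python
#!/usr/bin/env python3
# Sanity check for /etc/resolv.conf: every "nameserver" line must
# contain exactly one address.

with open("/etc/resolv.conf") as resolv:
    for number, line in enumerate(resolv, start=1):
        fields = line.split()
        if fields and fields[0] == "nameserver":
            if len(fields) != 2:
                print(f"line {number}: more than one address -> {line.strip()}")
            else:
                print(f"line {number}: OK -> {line.strip()}")
```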

I would like to highlight that Chris Halstead mentioned this behaviour a year ago in his blog post “VMware Access Point Deployment Utility“. Chris is the author of the Deployment Utility.

VMware Horizon View space reclamation fails due to activated CBT

Nearly two weeks ago, I wrote a blog post (VMware Horizon View space reclamation fails) about failing space reclamation on automated desktop pools with linked clones. Today I am writing about the same error, but one caused by another problem. In both cases, the error is logged in the View Administrator and the vSphere (Web) Client. The View Administrator shows the following error:

“Failed to perform space reclamation on machine COMPUTER NAME in Pool POOL NAME”


The vSphere Web Client shows a different error message:

“A general system error occurred: Wipe Disk failed: Failed to complete wipe operation.”


If an issue with the permissions of the initiating user can be excluded, you should check if Changed Block Tracking (CBT) is enabled for the parent VM and the linked clones. The easiest way to check this is the vSphere Web Client.


Highlight the VM > Manage > VM Options > Advanced settings > Configuration parameters.

Check the configuration parameters for ctkEnabled and scsiX:Y.ctkEnabled (one entry per virtual disk). If these parameters are set to true, CBT is enabled.

To solve this issue, power off the parent VM, remove all snapshots, and set the advanced parameters ctkEnabled and scsiX:Y.ctkEnabled to false.

Then take a new snapshot and recompose the desktop pool.
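If you want to check a whole environment for VMs that still have CBT enabled, a sketch based on pyVmomi could look like the one below. Hostname and credentials are placeholders; the check simply looks at the changeTrackingEnabled flag and at ctkEnabled entries in the advanced configuration.

```python
# List all VMs that still have CBT enabled (pyVmomi sketch, placeholders inside).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab only: skip certificate validation
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:
            continue
        ctk_keys = [opt.key for opt in vm.config.extraConfig
                    if "ctkEnabled" in opt.key and str(opt.value).lower() == "true"]
        if vm.config.changeTrackingEnabled or ctk_keys:
            print(f"CBT enabled: {vm.name} {ctk_keys}")
finally:
    Disconnect(si)
```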

VMware has documented this in KB2032214 (Deploying or recomposing View desktops fails when the parent virtual machine has CBT enabled). Even though the title of the article doesn’t mention failing space reclamation, it is listed as one possible symptom.

VMware Horizon View space reclamation fails

A customer notified me that he had observed an issue with the space reclamation on two automated desktop pools with linked clones. His environment is based on Horizon View 6.2.1 and vSphere 5.5 U3. The error was logged in the View Administrator and the vSphere (Web) Client. In the View Administrator, the following error was visible:

“Failed to perform space reclamation on machine COMPUTER NAME in Pool POOL NAME”


In the vSphere Web Client, the error message was different:

“A general system error occurred: Wipe Disk failed: Failed to complete wipe operation.”


In this environment, a service account was created for View management tasks. A custom user role was also created, and this role was assigned to the service account. The necessary privileges were taken from the Horizon View documentation (Privileges Required for the vCenter Server User). There is a nice table that lists all necessary privileges. All necessary privileges? No, one important privilege was missing.

I tried to reproduce this issue in my lab. Unfortunately, the space reclamation was working there. To be honest: I’m not using a specific service account and a custom user role in my lab. I’m using a domain admin account that has the “Administrator” role assigned in vCenter. I searched a bit in the VMware Knowledge Base, but I was unable to find a suitable KB entry for my problem. So I re-created the View manager user role in my lab and assigned it to my account instead of the “Administrator” role. After that, I was able to reproduce the problem, so it was clear that the role was the problem. I checked the privileges again and found an interesting privilege that was not listed in the VMware documentation.


You can find this privilege under: All Privileges > Virtual Machine > Interaction > Perform wipe or shrink operations.

Setting this privilege solved the observed issue. With this knowledge in mind, I found a blog post by Andy Barnes that describes this issue and the solution.
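To verify a custom vCenter role without clicking through the Web Client, you can dump its privileges with pyVmomi and search for the wipe/shrink entry. The role name, hostname and credentials below are placeholders.

```python
# Print the privileges of a custom role and highlight the wipe/shrink privilege
# (pyVmomi sketch, placeholders inside).
import ssl
from pyVim.connect import SmartConnect, Disconnect

ROLE_NAME = "View-Manager"   # placeholder: name of the custom role

context = ssl._create_unverified_context()   # lab only: skip certificate validation
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=context)
try:
    auth = si.RetrieveContent().authorizationManager
    for role in auth.roleList:
        if role.name == ROLE_NAME:
            wipe = [p for p in role.privilege if "wipe" in p.lower()]
            print(f"Role '{role.name}' has {len(role.privilege)} privileges.")
            print("Wipe/shrink related privileges:", wipe or "NONE - add the privilege!")
finally:
    Disconnect(si)
```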

Guest customizations fails after upgrade to VMware vSphere 6

VMware vSphere 6 is now a year old, and it was time to update my lab to vSphere 6. The update went smoothly, and everything worked as expected. A few days later, I updated the master VM of a small automated desktop pool. I’m using VMware Horizon 6.2.1 in my lab to deploy a small number of Windows 8.1 VMs for tests, administration etc. The recompose of the pool failed during the guest customization.


I checked the customization specification immediately and got an error in the vSphere C# client.


Interestingly, I got no error in the vSphere Web Client.


After re-entering the Administrator password, the customization specification was usable again. No errors so far.

A quick search in the VMware KB led me to the article “Virtual machines with customizations fail to deploy when using Custom SSL Certificates (1019893)“. But this article doesn’t apply to vCenter 6.0. For the record: I’m using CA-signed certificates in my environment. It seems to be a good idea to re-enter the passwords in customization specifications after a vCenter migration/upgrade (5.x to 6.x or from VCSA 5.x to 6.x).

Considerations when using Microsoft NLB with VMware Horizon View

A load balancer is an integral component of (nearly) every VMware Horizon View design. Not only to distribute the connections among a number of connection or security servers, but also to provide high availability in case of a connection or security server failure. Without a load balancer, connection attempts will fail if a connection or security server isn’t available. Craig Kilborn wrote an excellent article about the different possible load balancing designs. Craig highlighted Microsoft Network Load Balancing (NLB) as one of the possible ways to implement load balancing. Jason Langer also mentioned Microsoft NLB in his article “The Good, The Bad, and The Ugly of VMware View Load Balancing“, which is worth reading.

Why Microsoft NLB?

Why should I use Microsoft NLB to load balance connections in my VMware Horizon View environment? It’s a question of requirements. If you already have a (hopefully redundant) load balancer, then there is no reason to use Microsoft NLB for load balancing. Really no reason? A single load balancer is a single point of failure, and therefore you should avoid it. Instead of using a single load balancer, you could use Microsoft NLB. Microsoft NLB is free of charge, because it’s part of Windows Server. Two or more servers can form a highly available load balancer, and you can install the NLB role directly onto your Horizon View connection or security servers.

How does it work?

Microsoft Windows NLB has been part of the operating system since Windows NT Server. Two or more Windows servers can form a cluster with one or more virtual IP addresses. Microsoft NLB knows three different operating modes:

  • Unicast
  • Multicast
  • Multicast (IGMP)

Two years ago, I wrote an article about why unicast mode sucks: Flooded network due HP Networking Switches & Windows NLB. This leads to the recommendation: always use multicast (IGMP) mode!

Nearly all switches support IGMP Snooping. If not, spend some money on new switches. Let me make this clear: if your switches support IGMP Snooping, enable it for the VLAN to which the cluster nodes are connected. There is no need to configure static multicast MAC addresses or dedicated VLANs to avoid flooding.

If you select the multicast (IGMP) mode, each cluster node will periodically send an IGMP join message to the multicast address of the group. This address is always 239.255.x.y, where x and y correspond to the last two octets of the virtual IP address (illustrated in the sketch after the list below). Upon receiving these multicast group join messages, the switch sends multicasts only to the ports of the group members, which avoids network flooding. Multicast (IGMP) also simplifies the configuration of a Microsoft NLB cluster:

  • Enable IGMP Snooping for the VLAN of the cluster nodes
  • Enable the NLB role on each server that should participate in the cluster
  • Create a cluster with multicast (IGMP) mode on the first node
  • Join the other nodes
  • That’s it!
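The derivation of the multicast group address mentioned above is easy to illustrate:

```python
# The multicast (IGMP) group address is 239.255.x.y, where x and y are the
# last two octets of the cluster's virtual IP address.
def nlb_igmp_group(virtual_ip: str) -> str:
    octets = virtual_ip.split(".")
    return f"239.255.{octets[2]}.{octets[3]}"

print(nlb_igmp_group("192.168.20.15"))   # -> 239.255.20.15
```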

The installation process of a Microsoft NLB cluster is not particularly complex, and once the NLB cluster is running, there is not much maintenance to do. As already mentioned, you can put the NLB role on each connection or security server.

What are the caveats?

Sounds pretty good, right? But there are some caveats when using Microsoft NLB. Microsoft NLB does not support sticky connections, and it does not support service awareness. Why is this a problem? Let’s assume that you have enabled “HTTP(S) Secure Tunnel”, “PCoIP Secure Gateway” and “Blast Secure Gateway”.


In this case, all connections are proxied through the connection or security servers.

The initial connection from the Horizon View client to the connection or security server is used for authentication and the selection of the desired desktop pool or application. This is an HTTPS connection. At this point, the user has no connection to a pool or an application. When the user connects to a desktop pool or application, the client opens a second HTTPS connection, which is used to provide a secure tunnel for RDP. Because it’s the same protocol, this connection will be directed to the same connection or security server as before. The same applies to Blast connections. But if the user connects to a pool via PCoIP, the View client will open a new connection using PCoIP with destination port 4172. If the PCoIP External URL refers to the load-balanced URL, the connection can be directed to another connection or security server. If this is the case, the PCoIP connection will fail, because the source IP address might be the same, but another destination port is used. VMware describes this behaviour in KB1036376 (Unable to connect to the PCoIP Secure Gateway when using Microsoft NLB Clustering).

Another big caveat is the missing service awareness. Microsoft NLB does not check if the load-balanced service is available. If the load-balanced service fails, Microsoft NLB will not stop redirecting requests to the cluster node that is running the failed service. In this case, the user’s connection requests will fail.

Still the ugly?

So is Microsoft NLB still the ugly option? I don’t think so. Especially for small deployments, where the customer does not have a load balancer, Microsoft NLB can be a good option. If you want to load balance connection servers, Microsoft NLB can do a great job. In case of load balancing security servers, you should take a look at KB1036376, because you might need at least 3 public IP addresses for NAT. The missing service awareness can be a problem, but you can work around it with responsive monitoring.
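Such a responsive monitoring does not have to be complex. A minimal sketch could probe the View service on each node and alert (or drainstop the node) as soon as it stops answering; the URL and the interval below are assumptions.

```python
# Probe the View service on this node and report when it stops answering.
# In a real setup you would react to a failure, e.g. by drainstopping the node.
import ssl
import time
import urllib.request

URL = "https://localhost/"   # assumption: the portal page of the local node
INTERVAL = 30                # seconds between probes

context = ssl._create_unverified_context()   # the probe targets localhost

while True:
    try:
        with urllib.request.urlopen(URL, context=context, timeout=10) as response:
            print("View service answered with HTTP", response.status)
    except Exception as error:
        print("View service did NOT answer:", error)
        # trigger an action here, e.g. drainstop the node via nlb.exe
    time.sleep(INTERVAL)
```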

In the end, it is a question of requirements. If you plan to implement other services that might require a load balancer, like Microsoft Exchange, you should take a look at a redundant, highly available load balancer appliance.

Screen resolution scaling has stopped working after Horizon View agent update

Another inconvenience that I noticed during the update process from VMware Horizon View 6.1.1 to 6.2 was that the automatic screen resizing stopped working. When I connected to a desktop pool with the VMware Horizon client, I only got the screen resolution of the VM (the resolution that is used when connecting to the VM with the vSphere console), not 1920×1200 as expected. This issue only occurred with PCoIP, not with RDP. I had this issue with a static desktop and with a dynamic desktop pool, and it occurred after updating the Horizon View agent. The resolution scaling still worked with a Windows Server 2012 R2 RDS host when I connected to it with PCoIP.

VMware KB1018158 (Configuring PCoIP for use with View Manager) did not solve the problem. I checked the VMX version, the video RAM configuration etc. Nothing had changed, everything was configured as expected. At this point it was clear to me that this must be an issue with the Horizon View agent. I took some snapshots and tried to reinstall the Horizon View agent: I removed the Horizon View agent and the VMware Tools from one of my static desktops. After a reboot, I installed the VMware Tools and then the Horizon agent. To my surprise, this first attempt solved the problem. I tried the same with my second static desktop pool VM and with the master VM of my dynamic desktop pool (don’t forget to recompose the VMs…). This workaround fixed the problem in each case.

I don’t know if this is a bug. I haven’t found any hints in the VMware Community forum or blogs. Maybe someone knows the answer.