Tag Archives: bug

Microsoft Exchange 2013/ 2016/ 2019 shows blank ECP & OWA after changes to SSL certificates

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
EDIT
This issue is described in KB2971270 and is fixed in Exchange 2013 CU6.

I published this blog post in July 2015 and it is still relevant. The feedback for this blog post was incredible, and I’m not joking when I say: I saved many admins’ weekends. ;) It has shown that this error still occurs with Exchange 2016 and even 2019 – maybe not because of the same bug that was fixed with Exchange 2013 CU6, but for other reasons. The solution below still applies. Because of this, I have decided to re-publish this blog post with a modified title and this little preamble.

Feel free to leave a comment if this blog post worked for you. :)

I have run into this error a couple of times. After applying changes to SSL certificates (adding, replacing or deleting an SSL certificate) and rebooting the server, the event log is flooded with events from source “HttpEvent” and event ID 15021. The message says:

An error occurred while using SSL configuration for endpoint 0.0.0.0:444. The error status code is contained within the returned data.
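
If you want to verify how badly the log is flooded, a quick PowerShell query like the one below should work. This is just an illustrative snippet; it assumes the events are written to the System log by a provider named “HttpEvent”:

# Count the HttpEvent 15021 entries in the System log (log and provider name are assumptions).
(Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'HttpEvent'; Id = 15021 }).Count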

If you try to access the Exchange Control Panel (ECP) or Outlook Web Access (OWA), you will only get a blank page. To solve this issue, open an elevated command prompt on your Exchange server and list the SSL certificate bindings:

C:\windows\system32>netsh http show sslcert

SSL Certificate bindings:
-------------------------

    IP:port                      : 0.0.0.0:443
    Certificate Hash             : 1ec7413b4fb1782b4b40868d967161d29154fd7f
    Application ID               : {4dc3e181-e14b-4a21-b022-59fc669b0914}
    Certificate Store Name       : MY
    Verify Client Certificate Revocation : Enabled
    Verify Revocation Using Cached Client Certificate Only : Disabled
    Usage Check                  : Enabled
    Revocation Freshness Time    : 0
    URL Retrieval Timeout        : 0
    Ctl Identifier               : (null)
    Ctl Store Name               : (null)
    DS Mapper Usage              : Disabled
    Negotiate Client Certificate : Disabled

    IP:port                      : 0.0.0.0:444
    Certificate Hash             : a80c9de605a1525cd252c250495b459f06ed2ec1
    Application ID               : {4dc3e181-e14b-4a21-b022-59fc669b0914}
    Certificate Store Name       : MY
    Verify Client Certificate Revocation : Enabled
    Verify Revocation Using Cached Client Certificate Only : Disabled
    Usage Check                  : Enabled
    Revocation Freshness Time    : 0
    URL Retrieval Timeout        : 0
    Ctl Identifier               : (null)
    Ctl Store Name               : (null)
    DS Mapper Usage              : Disabled
    Negotiate Client Certificate : Disabled

    IP:port                      : 0.0.0.0:8172
    Certificate Hash             : 09093ca95154929df92f1bee395b2670a1036a06
    Application ID               : {00000000-0000-0000-0000-000000000000}
    Certificate Store Name       : MY
    Verify Client Certificate Revocation : Enabled
    Verify Revocation Using Cached Client Certificate Only : Disabled
    Usage Check                  : Enabled
    Revocation Freshness Time    : 0
    URL Retrieval Timeout        : 0
    Ctl Identifier               : (null)
    Ctl Store Name               : (null)
    DS Mapper Usage              : Disabled
    Negotiate Client Certificate : Disabled

    IP:port                      : 127.0.0.1:443
    Certificate Hash             : 1ec7413b4fb1782b4b40868d967161d29154fd7f
    Application ID               : {4dc3e181-e14b-4a21-b022-59fc669b0914}
    Certificate Store Name       : MY
    Verify Client Certificate Revocation : Enabled
    Verify Revocation Using Cached Client Certificate Only : Disabled
    Usage Check                  : Enabled
    Revocation Freshness Time    : 0
    URL Retrieval Timeout        : 0
    Ctl Identifier               : (null)
    Ctl Store Name               : (null)
    DS Mapper Usage              : Disabled
    Negotiate Client Certificate : Disabled

Check the certificate hash and application ID for 0.0.0.0:443, 0.0.0.0:444 and 127.0.0.1:443. You will notice that the application ID is the same for all three entries, but the certificate hash for 0.0.0.0:444 differs from the other two. And that’s the point. Remove the certificate binding for 0.0.0.0:444.

C:\windows\system32>netsh http delete sslcert ipport=0.0.0.0:444

SSL Certificate successfully deleted

Now add it again with the correct certificate hash and application ID.

C:\windows\system32>netsh http add sslcert ipport=0.0.0.0:444 certhash=1ec7413b4fb1782b4b40868d967161d29154fd7f appid="{4dc3e181-e14b-4a21-b022-59fc669b0914}"

SSL Certificate successfully added

That’s it. Reboot the Exchange server and everything should be up and running again.
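
If you run into this more often, the manual steps above can be scripted. The following PowerShell sketch (not part of the original post) copies the certificate hash and application ID from the 0.0.0.0:443 binding to 0.0.0.0:444. It assumes English netsh output, so review it before using it on a production server:

# Read the current 0.0.0.0:443 binding and extract hash and application ID.
$binding = netsh http show sslcert ipport=0.0.0.0:443
$hash    = ($binding | Select-String 'Certificate Hash\s+:\s+(\S+)').Matches[0].Groups[1].Value
$appId   = ($binding | Select-String 'Application ID\s+:\s+(\S+)').Matches[0].Groups[1].Value

# Re-create the 0.0.0.0:444 binding with the values taken from 0.0.0.0:443.
netsh http delete sslcert ipport=0.0.0.0:444
netsh http add sslcert ipport=0.0.0.0:444 certhash=$hash appid="$appId"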

“Cannot execute upgrade script on host” during ESXi 6.5 upgrade

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I was onsite at one of my customers to update a small VMware vSphere 6.0 U3 environment to 6.5 U2c. The environment consists of three hosts: two hosts in a cluster, and a third host that is only used to run a HPE StoreVirtual Failover Manager.

The update of the first host, using the Update Manager and a HPE custom ESXi 6.5 image, was pretty flawless. But the update of the second host failed with “Cannot execute upgrade script on host”.


I checked the host and found it with ESXi 6.5 installed, but one of the five iSCSI datastores was missing. Then I tried to patch the host with the latest patches and hit “Remediate”. The task failed with “Cannot execute upgrade script on host”. So I did a rollback to ESXi 6.0 and tried the update again, but this time using iLO and the HPE custom ISO. The result was the same: the host was running ESXi 6.5 after the update, but the upgrade failed with the “Upgrade Script” error. After this attempt, the host was unable to mount any of the iSCSI datastores. This was because the datastores were mounted ATS-only on the other host, and the failed host was unable to mount the datastores in this mode. Very strange…

I checked the vua.log and found this error message:

2018-11-05T16:35:56.614Z info vua[A3CAB70] [Originator@6876 sub=VUA] Command '/tmp/vuaScript-xMVUfb/precheck.py --ip=172.19.0.14' finished with exit status 1
--> stderr: --------
--> INFO:root:Running esxcfg-info
--> Traceback (most recent call last):
-->   File "/build/mts/release/bora-9298722/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/subprocess.py", line 385, in run
-->   File "/build/mts/release/bora-9298722/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/subprocess.py", line 788, in communicate
-->   File "/build/mts/release/bora-9298722/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/encodings/ascii.py", line 26, in decode
--> UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 1272423: ordinal not in range(128)

Focus on this part of the error message:

--> UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 1272423: ordinal not in range(128)

The upgrade script failed due to an illegal character in the output of esxcfg-info. First of all, I had to find out what this 0x80 character is. I checked the UTF-8 and Windows-1252 encodings, and found out that 0x80 is the € (euro) symbol in Windows-1252. I searched the output of esxcfg-info for the € symbol – and found it:

            \==+Heap : 
               |----Name............................................€A
               |----Growable........................................true
               |----Max Size........................................41848 bytes
               |----Max Available...................................40816 bytes
               |----Current Size....................................29560 bytes
               |----Current Size....................................29560 bytes
               |----Current Allocation..............................1032 bytes
               |----Current Available...............................1032 bytes
               |----Current Releasable..............................20400 bytes
               |----Percent Free of Current.........................96 
               |----Percent Free of Max.............................97 
               |----Percent Releasable..............................69
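
As a side note, you can reproduce the code-page lookup from PowerShell. This is just an illustrative one-liner, not something from the original troubleshooting session:

# Decode byte 0x80 with the Windows-1252 code page - this returns the € symbol.
[System.Text.Encoding]::GetEncoding(1252).GetString([byte[]]@(0x80))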

But how do I get rid of it? Where does it hide in the ESXi config? I scrolled a bit up and down around the € symbol. A bit above, I found a reference to HPE_SATP_LH. This immediately caught my attention, because the customer is using StoreVirtual VSA and StoreVirtual HW appliances.

Now, my second educated guess of the day came into play. I checked the installed VIBs, and found the StoreVirtual Multipathing Extension installed on the failed host – but not on the host, where the ESXi 6.5 update was successful.

I removed the VIB from the buggy host, did a reboot and tried to update the host with the latest patches – with success! Cross-checking showed that the € symbol was missing in the esxcfg-info output of the host that was upgraded first. I don’t have a clue why the StoreVirtual Multipathing Extension caused this error. The customer and I decided not to install the StoreVirtual Multipathing Extension again.
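
If you want to compare the installed VIBs of several hosts without opening an SSH session to each of them, a PowerCLI sketch like the following should do. The VIB name filter is an assumption – check the actual name on your hosts first:

# List VIBs whose name contains "StoreVirtual" on every connected host.
foreach ($vmhost in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $esxcli.software.vib.list.Invoke() |
        Where-Object { $_.Name -match 'StoreVirtual' } |
        Select-Object @{ N = 'Host'; E = { $vmhost.Name } }, Name, Version
}

# Removing a VIB would look like this (maintenance mode and a reboot are recommended):
# $esxcli.software.vib.remove.Invoke(@{ vibname = '<name from the list above>' })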

Wrong iovDisableIR setting on ProLiant Gen8 might cause a PSOD

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

TL;DR: There’s a script at the bottom of the page that fixes the issue.

Some days ago, this HPE customer advisory caught my attention:

Advisory: (Revision) VMware – HPE ProLiant Gen8 Servers running VMware ESXi 5.5 Patch 10, VMware ESXi 6.0 Patch 4, Or VMware ESXi 6.5 May Experience Purple Screen Of Death (PSOD): LINT1 Motherboard Interrupt

And there is also a corresponding VMware KB article:

ESXi host fails with intermittent NMI PSOD on HP ProLiant Gen8 servers

It isn’t clear WHY this setting was changed, but in VMware ESXi 5.5 Patch 10, 6.0 Patch 4, 6.0 U3 and 6.5, the Intel IOMMU’s interrupt remapper functionality was disabled. So if you are running one of these ESXi versions on a HPE ProLiant Gen8, you might want to check if you are affected.

To make it clear again, only HPE ProLiant Gen8 models are affected. No newer (Gen9) or older (G6, G7) models.

Currently there is no resolution, only a workaround: the iovDisableIR setting must be set to FALSE. If it is set to TRUE, the Intel IOMMU’s interrupt remapper functionality is disabled.

To check this setting, you have to SSH to each host and use esxcli to check the current value:

[root@esx1:~] esxcli system settings kernel list -o iovDisableIR

Name          Type  Description                                 Configured  Runtime  Default
------------  ----  ---------------------------------------     ----------  -------  -------
iovDisableIR  Bool  Disable Interrupt Routing in the IOMMU...   FALSE       FALSE    TRUE

I have written a small PowerCLI script that uses the Get-EsxCli cmdlet to check all hosts in a cluster. The script only checks the setting; it doesn’t change the iovDisableIR setting.
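
As a rough idea of what such a check can look like, here is a minimal sketch (not the original script; the cluster name is a placeholder and an existing Connect-VIServer session is assumed):

# Check the configured and runtime value of iovDisableIR on every host in a cluster.
foreach ($vmhost in (Get-Cluster -Name 'MyCluster' | Get-VMHost)) {
    $esxcli  = Get-EsxCli -VMHost $vmhost -V2
    $setting = $esxcli.system.settings.kernel.list.Invoke(@{ option = 'iovDisableIR' })
    [PSCustomObject]@{
        Host       = $vmhost.Name
        Configured = $setting.Configured
        Runtime    = $setting.Runtime
    }
}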

Here’s another script that analyzes and fixes the issue.
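
The fix itself boils down to changing the kernel setting and rebooting the host. Sketched with Get-EsxCli, that step could look like this (again only an illustration with a placeholder host name; the script mentioned above is the tested version):

# Set iovDisableIR to FALSE on a single host; a reboot is required to apply the change.
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'esx1.lab.local') -V2
$esxcli.system.settings.kernel.set.Invoke(@{ setting = 'iovDisableIR'; value = 'FALSE' })
Write-Host 'iovDisableIR set to FALSE - reboot the host to apply the change.'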

HPE 3PAR OS updates that fix VMware VAAI ATS Heartbeat issue

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Customers that use HPE 3PAR StoreServs with 3PAR OS 3.2.1 or 3.2.2 and VMware ESXi 5.5 U2 or later might notice one or more of the following symptoms:

  • hosts lose connectivity to a VMFS5 datastore
  • hosts disconnect from the vCenter
  • VMs hang during I/O operations
  • you see messages like this in the vobd.log or on the vCenter Events tab
Lost access to volume <uuid><volume name> due to connectivity issues. Recovery attempt is in progress and the outcome will be reported shortly
  • you see messages like these in the vmkernel.log
ATS Miscompare detected between test and set HB images at offset XXX on vol YYY

2015-11-20T22:12:47.194Z cpu13:33467)ScsiDeviceIO: 2645: Cmd(0x439dd0d7c400) 0x89, CmdSN 0x2f3dd6 from world 3937473 to dev "naa.50002ac0049412fa" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0.

Interestingly, HPE is not the only vendor affected by this; multiple vendors have the same issue. VMware describes this issue in KB2113956, and HPE has published a customer advisory about it.

Workaround

If you are in trouble and can’t update, you can use this workaround: disable the ATS heartbeat for VMFS5 datastores. VMFS3 datastores are not affected by this issue. To disable the ATS heartbeat, you can use this PowerCLI one-liner:

Get-AdvancedSetting -Entity hostname -Name VMFS3.UseATSForHBOnVMFS5 | Set-AdvancedSetting -Value 0 -Confirm:$false
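
To apply the workaround to every host of a cluster in one go, wrapping the one-liner in a loop works as well (a sketch; the cluster name is a placeholder):

# Disable the ATS heartbeat on every host in the cluster.
Get-Cluster -Name 'MyCluster' | Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name 'VMFS3.UseATSForHBOnVMFS5' |
        Set-AdvancedSetting -Value 0 -Confirm:$false
}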

Solution

But there is also a solution. Most vendors have published firmware updates for their products. HPE has released

  • 3PAR OS 3.2.2 MU3
  • 3PAR OS 3.2.2 EMU2 P33, and
  • 3PAR OS 3.2.1 EMU3 P45

All three 3PAR OS releases include enhancements to improve the ATS heartbeat. Because 3PAR OS 3.2.2 also includes some nice enhancements for Adaptive Optimization, I recommend updating to 3PAR OS 3.2.2.

Data Protector: Copy sessions to encrypted devices fail after update to 9.07

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Recently, a customer informed me that copy sessions to encrypted devices failed after he had updated to Data Protector 9.07. The copy sessions failed with this error:

|Critical| From: BMA@<hostname> "" Time: <Date><Time>
|90:6111| Error retrieving encryption key.

The customer uses tape encryption. The destination for the backups is a HPE StoreOnce, and a post-backup copy creates a copy of the data on tape. Backup to disk was running fine, but the copy to tape failed immediately.

The customer opened a ticket with HPE support and instantly got a hotfix to resolve this issue. HPE has documented this error in QCCR2A69192. If you run into the same issue, please request hotfix QCCR2A69802. This hotfix consolidates QCCR2A69192 and QCCR2A69318 (the BMA ends abnormally during backup/copy to tape).

Thanks to Stefan for the hint!

Receive Connector role not selectable in Exchange 2016 CU2

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Another bug in Exchange 2016 CU2: the role of a new receive connector is greyed out, so you cannot select “Frontend Transport”. This is a screenshot from a German Exchange 2016 CU2.

[Screenshot: the receive connector role selection is greyed out in the ECP]

Solution

Use the Exchange Management Shell to create a new receive connector. Afterwards, you can modify it with the Exchange Control Panel (ECP).

[PS] C:\Windows\system32>New-ReceiveConnector -Name "Client Frontend Dummy" -RemoteIPRange ("192.168.200.99") -TransportRole "FrontendTransport" -Bindings ("0.0.0.0:25") -Usage "Custom" -Server "exchange1"

Identity                                Bindings                                Enabled
--------                                --------                                -------
EXCHANGE1\Client Frontend Dummy         {0.0.0.0:25}                            True
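
If you prefer to stay in the shell, the new connector can of course also be modified there. A small example (the IP range is just a placeholder):

# Adjust the remote IP range of the dummy connector from the Exchange Management Shell.
Set-ReceiveConnector -Identity "EXCHANGE1\Client Frontend Dummy" -RemoteIPRange "192.168.200.0/24"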

Microsoft has confirmed that this is a bug in Exchange 2016 CU2.

WSUS on Windows 2012 (R2) and KB3159706 – WSUS console fails to connect

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Like any other environment, my lab needs some maintenance from time to time. I use a Windows 2012 R2 VM with the Windows Server Update Services (WSUS) role to keep my Windows VMs up to date. Like many others, I was surprised by KB3148812 (Update enables ESD decryption provision in WSUS in Windows Server 2012 and Windows Server 2012 R2), which broke my WSUS. But the fix was easy: uninstall KB3148812 and reboot the server. The WSUS product team published an article about this known issue in their blog: Known Issues with KB3148812. In the meantime, Microsoft has published a new update which supersedes KB3148812: KB3159706.

WSUS dead again?

Today I wanted to check the update status of my VMs. Unfortunately, the WSUS console was unable to connect to the WSUS server.

[Screenshot: the WSUS console fails to connect to the WSUS server]

I checked the status of the service and found the WSUS service stopped. But even after I had started the service, the WSUS console was unable to connect to the server. I found an error in the event logs (ID 507, source Windows Server Update Services), but the message “Update Services failed its initialization and stopped” wasn’t helpful. More helpful was a log entry:

2016-05-22 02:01:03.191 UTC	Warning	w3wp.19	SoapUtilities.CreateException	ThrowException: actor = http://wsus.lab.local:8530/SimpleAuthWebService/SimpleAuth.asmx, ID=79bf356b-4f58-4cac-a4aa-b52ec6e0bf38, ErrorCode=InternalServerError, Message=, Client=?
2016-05-22 02:01:10.066 UTC	Info	w3wp.16	SimpleAuth..ctor	Initializing SimpleAuth WebService ProcessID = 968, Process Start Time = 21.05.2016 21:43:13, Product Version = 6.3.9600.18324
2016-05-22 02:01:10.066 UTC	Error	w3wp.16	SimpleAuthImplementation..ctor	Exception in SimpleAuth constructor: System.Data.SqlClient.SqlException (0x80131904): Cannot open database "SUSDB" requested by the login. The login failed.
Login failed for user 'NT AUTHORITY\NETWORK SERVICE'.

After some searching and examination of the recently installed updates, I came across KB3159706.

Manual steps required to complete the installation of KB3159706

Open an elevated CMD and run this command:

"C:\Program Files\Update Services\Tools\wsusutil.exe" postinstall /servicing

The output should look similar to this:

C:\Users\Administrator.LAB>"C:\Program Files\Update Services\Tools\wsusutil.exe" postinstall /servicing
Log file is located at C:\Users\Administrator.LAB\AppData\Local\Temp\2\tmp63BD.tmp
Post install is starting
Post install has successfully completed

C:\Users\Administrator.LAB>

Then you have to install the “HTTP Activation” feature under “.NET Framework 4.5” features.

[Screenshot: installing the “HTTP Activation” feature under “.NET Framework 4.5 Features” in Server Manager]
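
Instead of clicking through Server Manager, the feature can also be added from an elevated PowerShell. The feature name below should be the right one on Windows Server 2012 R2, but double-check it with Get-WindowsFeature first:

# Verify the exact feature name, then install HTTP Activation for .NET Framework 4.5.
Get-WindowsFeature -Name *HTTP-Activation*
Install-WindowsFeature -Name NET-WCF-HTTP-Activation45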

After a restart of the WSUS service, WSUS should work again.

Summary

The installation of KB3148812 on a WSUS server will break the WSUS installation. Because of this, Microsoft has published KB3159706. If you install this update (in my case it was installed automatically via WSUS…), you have to execute some manual steps to ensure that WSUS works as expected. The WSUS product team is aware of this and has pointed it out in the blog article “The long-term fix for KB3148812 issues” (you will find a hint right at the top of the article).

Guest customization fails after upgrade to VMware vSphere 6

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

VMware vSphere 6 is now a year old, and it was time to update my lab to vSphere 6. The update went smoothly, and everything worked as expected. Some days later, I updated the master VM of a small automated desktop pool. I’m using VMware Horizon 6.2.1 in my lab to deploy a small number of Windows 8.1 VMs for tests, administration etc. The recompose of the pool failed during the guest customization.

[Screenshot: Horizon View reports an error while decrypting the password during guest customization]

I checked the customization specification immediately and got an error in the vSphere C# client.

[Screenshot: the vSphere C# client shows a password decryption error for the customization specification]

Interestingly, I got no error in the vSphere Web Client:

[Screenshot: the vSphere Web Client shows no error for the same customization specification]

After re-entering the Administrator password, the customization specification was usable again. No errors so far.

A quick search in the VMware KB led me to the article “Virtual machines with customizations fail to deploy when using Custom SSL Certificates (1019893)”. But this article doesn’t apply to vCenter 6.0. For the record: I’m using CA-signed certificates in my environment. It seems to be a good idea to re-enter the passwords in customization specifications after a vCenter migration/upgrade (5.x to 6.x, or from VCSA 5.x to 6.x).
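
To quickly see which customization specifications exist (and therefore which passwords you may have to re-enter), a short PowerCLI listing is enough; just a convenience sketch:

# List all customization specifications stored in vCenter.
Get-OSCustomizationSpec | Select-Object Name, OSType, Description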

Screen resolution scaling has stopped working after Horizon View agent update

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Another inconvenience that I noticed during the update from VMware Horizon View 6.1.1 to 6.2 was that automatic screen resizing stopped working. When I connected to a desktop pool with the VMware Horizon client, I only got the screen resolution of the VM (the resolution that is used when connecting to the VM with the vSphere console), not 1920×1200 as expected. This issue only occurred with PCoIP, not with RDP. I had this issue with a static desktop and a dynamic desktop pool, and it occurred after updating the Horizon View agent. The resolution scaling worked with a Windows 2012 R2 RDS host when I connected to it with PCoIP.

VMware KB1018158 (Configuring PCoIP for use with View Manager) did not solve the problem. I checked the VMX version, the video RAM config etc. Nothing had changed, everything was configured as expected. At this point it was clear to me that this had to be an issue with the Horizon View agent. I took some snapshots and tried to reinstall the Horizon View agent. I removed the Horizon View agent and the VMware Tools from one of my static desktops. After a reboot, I installed the VMware Tools and then the Horizon agent. To my surprise, this first attempt solved the problem. I tried the same with my second static desktop pool VM and with the master VM of my dynamic desktop pool (don’t forget to recompose the VMs…). This workaround fixed the problem in each case.

I don’t know if this is a bug. I haven’t found any hints in the VMware Community forum or blogs. Maybe someone knows the answer.

VMware Horizon View agent update on RDS host fails with “Internal Error 25030”

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I’m running a small VMware Horizon View environment in my lab. Nothing fancy, but everything you need to show what Horizon View can do for you. This environment includes a Windows Server 2012 R2 RDS host. During the update from Horizon View 6.1.1 to 6.2, I had to update the View agent on this RDS host. The update installation failed with an “Internal Error 25030”, followed by a rollback. Fortunately, I had a snapshot, so I went back to the previous state and tried the update again. This attempt also went awry.

To make a long story short: Read the fscking release notes! This quote is taken from the Horizon View 6.2 release notes:

When you upgrade View Agent 6.1.1 to View Agent 6.2 on an RDS host running on Windows Server 2012 or 2012 R2, the upgrade fails with an “Internal Error 25030” message.
Workaround: Uninstall View Agent 6.1.1, restart the RDS host, and install View Agent 6.2.

And this is not the first time that this error has occurred. I found this quote in the Horizon View 6.1.1 release notes:

When you upgrade View Agent 6.1 to View Agent 6.1.1 on an RDS host running on Windows Server 2012 or 2012 R2, the upgrade fails with an “Internal Error 25030” message.
Workaround: Uninstall View Agent 6.1, restart the RDS host, and install View Agent 6.1.1

If you take a closer look at these two statements, you might notice some similarities… But I do not want to be spiteful. The workaround did the trick. Simply uninstall the View agent (if it’s still installed after the rollback… that was not the case for me), reboot, and reinstall the View agent.