Tag Archives: cluster

Database Availability Group (DAG) witness is in a failed state

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

As part of a maintenance job I had to update a 2-node Exchange Database Availability Group and a file-share witness server.

After the installation of Windows updates on the witness server and the obligatory reboot, the witness was left in a failed state.

[PS] C:\Windows\system32>Get-DatabaseAvailabilityGroup -Identity DAG1 -Status | fl *wit*
WARNING: Database availability group ‘DAG01’ witness is in a failed state. The database
availability group requires the witness server to maintain quorum. Please use the
Set-DatabaseAvailabilityGroup cmdlet to re-create the witness server and the directory.

WitnessServer : fsw.domain.local
WitnessDirectory : C:\DAGFileShareWitnesses\DAG1.domain.local
AlternateWitnessServer :
AlternateWitnessDirectory :
WitnessShareInUse : InvalidConfiguration
DxStoreWitnessServers :

In my opinion, re-creating the witness server and the witness directory cannot be the correct way to solve this; there must be another way. In addition: the server was not dead, it had only been rebooted.
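For the sake of completeness, this is roughly what the warning asks for: re-pointing the DAG at the witness with the Set-DatabaseAvailabilityGroup cmdlet. The values are taken from the output above; as described below, this was not necessary in my case.

# What the warning suggests: re-create/re-point the witness server and directory
Set-DatabaseAvailabilityGroup -Identity DAG1 -WitnessServer fsw.domain.local -WitnessDirectory C:\DAGFileShareWitnesses\DAG1.domain.local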

Check the basics

Both DAG nodes were online and working. A good starting point is a check of the cluster resources using PowerShell.

In my case, the cluster resource for the File Share Witness was in a failed state. A simple Start-ClusterResource solved my issue immediately.

[PS] C:\Windows\system32>Get-ClusterResource

Name                                              State                                             OwnerGroup                                        ResourceType
----                                              -----                                             ----------                                        ------------
File Share Witness (\\fsw.domain.local            Failed                                            Cluster Group                                     File Share Witness


[PS] C:\Windows\system32>Get-ClusterResource | Start-ClusterResource

Name                                              State                                             OwnerGroup                                        ResourceType
----                                              -----                                             ----------                                        ------------
File Share Witness (\\fsw.domain.local            Online                                            Cluster Group                                     File Share Witness

It seems that the cluster had marked the file share witness as unreliable, so the resource was not started automatically after the file share witness was back online. I managed to bring it back online manually by running Start-ClusterResource on one of the DAG members.
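If you do not want to pipe all cluster resources into Start-ClusterResource, a filtered variant like this sketch, which only touches failed resources, should also do the job:

# Start only the cluster resources that are currently in a failed state
Get-ClusterResource | Where-Object { $_.State -eq "Failed" } | Start-ClusterResource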

Powering on a VM with shared VMDK fails after extending an EagerZeroedThick VMDK

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I hope that you are not reading this blog post while searching for a solution for a failed cluster. If so, feel free to leave a comment if this blog post saved your evening or weekend. :)

Last Friday, a change at one of my customers went horribly wrong. I was not onsite, but they contacted me during the night from Friday to Saturday, because their most important Windows Server Failover Cluster was unable to start after extending a shared VMDK.


They tried something pretty simple: extending a virtual disk of a VM. That is something most of us do pretty often, and the customer did it pretty often, too. It was a well-known task… except for the fact that the VM was part of a Windows Server Failover Cluster. With shared VMDKs. And the disks were EagerZeroedThick, because this is a requirement for shared VMDKs.

They extended the disk using the vSphere Web Client. And at this point, the change was doomed to fail. They tried to power on the VMs, but all they got was this error:

VMware ESX cannot open the virtual disk, “/vmfs/volumes/4c549ecd-66066010-e610-002354a2261b/VMNAME/VMDKNAME.vmdk” for clustering. Please verify that the virtual disk was created using the ‘thick’ option.

A shared VMDK is a VMDK in multiwriter mode. This VMDK has to be created as Thick Provision Eager Zeroed. And if you wish to extend this VMDK, you must use vmkfstools with the option -d eagerzeroedthick. If you extend the VMDK using the Web Client, the extended portion of the disk will become LazyZeroed!
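If you are unsure what format a disk currently has, PowerCLI can show it before you try to power the nodes on. A small sketch, the VM name is a placeholder; a healthy shared disk should report EagerZeroedThick as its storage format:

# PowerCLI: show the provisioning type of all disks of a VM (VM name is a placeholder)
Get-VM -Name VMNAME | Get-HardDisk | Select-Object Name, Filename, StorageFormat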

VMware has described this behaviour in the KB1033570 (Powering on the virtual machine fails with the error: Thin/TBZ disks cannot be opened in multiwriter mode). There is also a blog post by Cormac Hogan at VMware, who has described this behaviour.

That’s a screenshot from the failed cluster. Check out the type of the disk (Thick-Provision Lazy-Zeroed).


You must use vmkfstools to extend a shared VMDK – but vmkfstools is also the solution if you have fallen into this pitfall. Clone the VMDK with the option -d eagerzeroedthick.

vmkfstools -i old.vmdk new.vmdk -d eagerzeroedthick

Another solution, which was new to me, is to use Storage vMotion. You can migrate the “broken” VMDK to another datastore and change the disk format during Storage vMotion. This solution is described in the “Notes” section of KB1033570.
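If you go the Storage vMotion route, this is roughly how it could look in PowerCLI; VM and datastore names are placeholders, and the disk format is converted during the migration:

# PowerCLI: migrate the VM to another datastore and convert the disks to Eager Zeroed Thick
Move-VM -VM VMNAME -Datastore TARGET-DATASTORE -DiskStorageFormat EagerZeroedThick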

Both ways will fix the problem. The result will be a Thick Provision Eager Zeroed VMDK, which will allow the VMs to be successfully powered on.

Data Protector Exchange GRE and IP-less Exchange DAG

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

When dealing with Microsoft Exchange restore requests, you will come across three different restore situations:

  • a database
  • a single mailbox
  • a single mailbox item (mail, calendar entry etc.)

Restoring a complete database is not a complicated task, but restoring a single mailbox, or a single mailbox item, is. First, you need to restore the database that includes the desired mailbox into a recovery database. Then you can restore the mailbox, or the mailbox items, from the recovery database. Some of the tasks can only be done with the Exchange Management Shell.
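Just to illustrate what this means, here is a rough sketch of the manual way in the Exchange Management Shell; server, database, path and mailbox names are placeholders:

# Create a recovery database (server, name and paths are placeholders)
New-MailboxDatabase -Recovery -Name RDB01 -Server EX01 -EdbFilePath "D:\Recovery\RDB01\RDB01.edb" -LogFolderPath "D:\Recovery\RDB01"
# Restore the database backup into RDB01 with your backup tool, then restore a single mailbox from it
New-MailboxRestoreRequest -SourceDatabase RDB01 -SourceStoreMailbox "John Doe" -TargetMailbox john.doe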

The HPE Data Protector Granular Recovery Extension (GRE) for Microsoft Exchange helps you to simplify the necessary steps to recover a single mailbox, or mailbox items. But the GRE can only assist you during the restore: it hides the tasks described above behind a nice GUI. The backup of Microsoft Exchange is still something you have to do with HPE Data Protector.

Database Availability Group without an Administrative Access Point

With Exchange 2013 SP1, Microsoft introduced the IP-less Database Availability Group (DAG). This type of DAG does not need a Cluster Name Object (CNO), and therefore has no IP address. With Exchange 2016, the IP-less DAG is the default DAG configuration.

But how do you back up a DAG that has no IP address and no name? It is easier than you might imagine. You have to create a DNS A-Record that includes all IP addresses of the cluster nodes, resulting in a DNS round-robin A-Record. You also have to install the Data Protector Disk Agent and On-line Extension on all cluster nodes. After that, you simply import the DAG into Data Protector by using the DNS A-Record. Then you can proceed with the creation and configuration of a backup job that uses the newly imported cluster.

Backup runs fine, but the GRE fails

During the test phase of a new Exchange 2016 cluster, a customer of mine discovered a strange error when he tried to restore a mailbox, or a mailbox item, using the Exchange GRE.

Either parsing of command builder output failed or no databases were backed up.
Data Protector Exchange GRE Error


The customer and I double-checked the installation of the GRE on both nodes. Everything was fine. We also found out that Data Protector was able to list the backup objects. This is a shortened output of the command.

Object Name                                                     Object type
===============================================================================
exchange-2010.domain.tld:/25e0d4ac-5a63-4035-b718-09d07bbbba47/DB3 E2010
dag-backup.domain.tld:/c7982a7e-76cd-4759-9d03-019c0c410957/DB03 E2010
dag-backup.domain.tld:/5d3faa67-6fd9-493e-bf56-5044f75620a8/DB01 E2010

As you can see, dag-backup.domain.tld is the DNS A-Record that was created to back up the DAG with Data Protector.

Connection between A-Record and DAG name

It took some time to get this sorted, but in the end, a new A-Record was the key. The DAG has a name, e.g. customer-dag1.domain.tld, but there is no matching A-Record, and the DAG has no IP address.

When the GRE searches for available database backups, it stumbles over the mismatch between the DAG name that is reported by the Exchange organization and the name of the Data Protector client that was used to back up the databases.

The key to success was to change the DNS A-Record from dag-backup.domain.tld to customer-dag1.domain.tld. The latter is the name of the DAG that is given during DAG creation. After removing the Data Protector client, re-importing the DAG with the new A-Record, and running a successful backup, the customer was able to restore mailboxes and mailbox items using the GRE for Microsoft Exchange.

This process is not described in detail in the Data Protector documentation. All you find is this footnote in the Data Protector Platform Integration Matrix (page 12, footnote 19):

Microsoft Exchange Server DAG configured without a Cluster Administrator Access Point is supported with Round Robin DNS mapping of DAG name to all the node IPs.

Make sure that the DNS round-robin A-Record matches your DAG name.
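Here is a short sketch of how such a round-robin A-Record could be created with the DnsServer PowerShell module; the zone name and node IP addresses are placeholders, and the record name equals the DAG name:

# One A-Record per DAG member, all with the DAG name (zone and IPs are placeholders)
Add-DnsServerResourceRecordA -ZoneName domain.tld -Name customer-dag1 -IPv4Address 192.168.1.11
Add-DnsServerResourceRecordA -ZoneName domain.tld -Name customer-dag1 -IPv4Address 192.168.1.12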

Windows Server 2012 Cluster with VMware vSphere 5.1/ 5.5

This posting is ~10 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

While I was poking around in my Twitter timeline, a tweet from Victor van den Berg (VCDX #121) got my attention.

My first thought was: “What a step backwards!” I have installed a bunch of Microsoft clusters in Virtual Infrastructure and vSphere environments, and most of the time it was a PITA. Especially with Raw Device Mappings (RDM) and bus sharing, which prevents vMotioning a VM to another host (regardless of this: it’s not supported!). It’s ironic to invest a significant amount of money into a technology that increases availability and manageability, while another technology lowers availability due to additional maintenance windows for cluster failovers. But that’s exactly what you get when you use MSCS with SCSI bus sharing (RDM or VMFS). A way to address this issue is to use in-guest iSCSI. This means that you access the shared disks directly from the VM through an iSCSI initiator running in the VM. To do so, you have to present the disks for the cluster to the VMs, not to the ESXi hosts. To be honest: in-guest iSCSI increases complexity, especially when the customer doesn’t have an iSCSI infrastructure. A second method is in-guest SMB, which is currently only supported with Windows Server 2012. Just to clear up the matter with in-guest iSCSI and W2K12 (R2):

Mostafa Khalil (VCDX #002) provided the crucial information:

In-guest iSCSI is supported with W2K12 on vSphere 5.5, which also supports W2K12 R2 failover clustering. But there’s another interesting fact that was new to me: Windows Server 2012 failover clustering isn’t supported with ESXi-provided shared disks on vSphere 5.1 and earlier! I found a hint in VMware KB1037959.

Windows Server 2012 failover clustering is not supported with ESXi-provided shared storage (such as RDMs or virtual disks) in vSphere 5.1 and earlier. For more information, see the Miscellaneous Issues section of the vSphere 5.1 Release Notes. VMware vSphere 5.5 provides complete support for 2012 failover clustering.

This means that you can run a Windows Server 2012 failover cluster on vSphere 5.1, but only with in-guest iSCSI or in-guest SMB.
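For completeness, the in-guest iSCSI connection is made from within the Windows VM, not on the ESXi host. A minimal sketch, assuming a hypothetical target portal address:

# Inside the guest: start the iSCSI initiator service and connect to the target portal (address is a placeholder)
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress 192.168.100.10
$target = Get-IscsiTarget | Select-Object -First 1
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true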

What’s new in vSphere 5.5?

Windows Server 2012 R2 failover clustering is now supported. But much more significant are the changes regarding the storage protocols. With vSphere 5.5, RDMs (shared disks for quorum or data) can be on iSCSI and Fibre Channel over Ethernet (FCoE); until vSphere 5.5, only Fibre Channel (FC) was supported. iSCSI and FCoE are supported for cluster-in-a-box (CIB) and cluster-across-boxes (CAB). Just to make it clear: NFS isn’t supported, neither for RDM nor for VMFS! Furthermore, software and hardware initiators, as well as mixed setups, are supported for iSCSI and FCoE. With vSphere 5.5, VMW_PSP_RR can be used for the RDMs; there’s no need to change the PSP for the RDMs. VMware KB2052238 summarizes the changes in vSphere 5.5.
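If you want to explicitly set the path selection policy of such a LUN to Round Robin, a PowerCLI sketch could look like this; the host name and the device name are placeholders:

# PowerCLI: set the PSP of a specific LUN to Round Robin (host and device names are placeholders)
Get-VMHost -Name esx01.domain.local | Get-ScsiLun -LunType disk | Where-Object { $_.CanonicalName -eq "naa.xxxxxxxxxxxxxxxx" } | Set-ScsiLun -MultipathPolicy RoundRobin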

Final words

Clusters are never an easy thing, and the complex support matrix does not make it easier. If you’re using Microsoft clusters, be sure to check the above-mentioned knowledge base articles before you update your vSphere environment or your Microsoft cluster.