Tag Archives: troubleshooting

Using VCSA as remote syslog – Don’t forget the log rotation!

This posting is ~4 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.
Important note: It seems that vCenter Server Appliance updates revert the changes. Please check the settings after each update!

The VMware vCenter Server Appliance (VCSA) can act as a remote syslog destination for ESXi hosts. This is very handy for troubleshooting, and I really recommend using this feature. But VMware ESXi hosts can be really chatty, so it's a good idea to keep an eye on the free disk space of the VCSA.

Yesterday, a colleague had an interesting support case. A customer reported that his Veeam Backup & Replication jobs failed and that he was unable to log in to the vCenter with the vSphere Client and vSphere Web Client. My colleague checked the VCSA VM and noticed that the VPXD failed to start (“Waiting for vpxd to initialize: ….failed”). Together we checked the appliance and the log files. The vpxd.log (/var/log/vmware/vpx) had last been updated weeks ago, but the last entry was interesting: No space left on device. Yet there was free disk space on /storage/log. I immediately checked the inode count with df -i and there it was: no free inodes. Why is this a problem? Each name entry in the file system consumes an inode. If there are no free inodes, no new directories and files can be created, and the error message is the same as for missing disk space.

Something had to have created a lot of files on /storage/log. Because /var/log/vmware is a symbolic link to /storage/log/vmware, it had to be something on the /storage/log partition. We checked the remote syslog location under /storage/log/remote and found gigabytes of logs in an incredible number of files. After removing the logs, the VPXD was able to start and the inode count was back at a normal level.
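A quick way to hunt down the inode consumers is to count the filesystem entries below each directory. This is a hedged sketch, not the exact commands from the case; `/storage/log` is the mount point from this post:

```shell
# Count filesystem entries (each one consumes an inode) below every
# subdirectory of a given path, largest consumers first.
count_entries() {
  for d in "$1"/*/; do
    printf '%s\t%s\n' "$(find "$d" 2>/dev/null | wc -l)" "$d"
  done | sort -rn
}

count_entries /storage/log
```

In the case above, a check like this would have pointed straight at /storage/log/remote.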

But why were there so many logs? We checked the logrotate config and found a faulty config for the remote syslog files. Instead of rotating logs and removing old ones, this config rotated all logs every day and thus multiplied the number of logs. Please note that there is no logrotate config for the remote syslog files by default! This one had been added manually.

This is the default config for the remote syslog-collector of the VCSA:

As you can see, with these settings a folder for each host and each month is created. According to this VMTN posting, we changed the syslog-collector config a bit:

With these settings, only a single file per host is created. We also made a change to /etc/logrotate.d/syslog and added this at the end:

With this configuration, 30 log files will be preserved. The number of log files, and how often the logs should be rotated (weekly or daily), can easily be adjusted. But these settings should be sufficient for small environments.
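The snippet we appended looked roughly like this. This is a hedged sketch, not the verbatim config; the wildcard path assumes one log file per host below /storage/log/remote, as configured above:

```
/storage/log/remote/*/*.log {
    missingok
    notifempty
    compress
    weekly
    rotate 30
}
```

Change `weekly` to `daily`, or the `rotate` count, to match your retention needs.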

It’s important to understand that the VCSA has different disks and that these disks are mounted to different mount points within the root filesystem. This is from a vSphere 5.5 VCSA:

/var/log/vmware and /var/log/remote are links to /storage/log/vmware and /storage/log/remote. Make sure that there is always enough free disk space on ALL disks! I also want to highlight VMware KB2092127 (After upgrading to vCenter Server Appliance 5.5 Update 2, pg_log file reports this error: WARNING: there is already a transaction in progress). This error hit me a couple of times…

Chicken-and-egg problem: 3PAR VSP 4.3 MU1 & 3PAR OS 3.2.1 MU3

This posting is ~4 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

Since Monday I’ve been helping a customer put two HP 3PAR StoreServ 7200c into operation. Both StoreServs came factory-installed with 3PAR OS 3.2.1 MU3, which has been available since July 2015. Usually, the first thing you do is deploy the 3PAR Service Processor (SP). These days this is (in most cases) a Virtual Service Processor (VSP). The SP is used to initialize the storage system. Later, the SP reports to HP and is used for maintenance tasks like shutting down the StoreServ or installing updates and patches. There are only a few cases in which you start the Out-of-the-Box (OOTB) procedure of the StoreServ without having a VSP. I deployed two VSPs (one for each StoreServ), started the Service Processor Setup Wizard, entered the StoreServ serial number and got this message:


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

“No uninitialized storage system with the specified serial number could be found”. I double-checked the network setup, VLANs, switch ports etc. The error occurred with BOTH VSPs and BOTH StoreServs. I started the OOTB procedure on both StoreServs using the serial console. My plan was to import the StoreServs into the VSPs later. To realize this, I tried to set up the VSP using the console interface. I logged in as root (no password) and tried the third option: Setup SP with original SP ID.



Not the worst idea, but unsuccessful. I entered the SP ID, the SP networking details, a lot of other stuff, the serial number of the StoreServ, the IP address and credentials, and finally got this message:

Hmm… I knew that P003 was mandatory for VSP 4.3 MU1 and 3PAR OS 3.2.1 MU3. But could the missing patch cause this behaviour? I called HP and explained my guess. After a short remote session this morning, the support case was escalated to the 2nd level. While waiting for the 2nd level support, I was thinking about a solution. I knew that earlier releases of the VSP don’t check the serial number of the StoreServ or the version of the 3PAR OS. So I grabbed a copy of the VSP 4.1 MU2 with P009 and deployed it. This time, I was able to finish the “Moment of Birth” (MOB). This release also asked for the serial number, the IP address and login credentials, but it didn’t check the version of the 3PAR OS (or didn’t care if it was unknown). At this point I had a functional SP running software release 4.1 MU2. I upgraded the SP to 4.3 MU1 with the physical SP ISO image and installed P003 afterwards. Now I was able to import the StoreServ 7200c with 3PAR OS 3.2.1 MU3.

I don’t know how HP covers this during the installation service. AFAIK there is no VSP 4.3 MU1 image with P003 available, and I guess HP ships all new StoreServs with 3PAR OS 3.2.1 MU3. If you upgrade from an earlier 3PAR OS release, please make sure that you install P003 before you update the 3PAR OS. The StoreServ Refresh matrix clearly says that P003 is mandatory. The release notes for the HP 3PAR Service Processor (SP) Software SP-4.3.0 MU1 P003 also indicate this:

SP-4.3.0.GA-24 P003 is a mandatory patch for SP-4.3.0.GA-24 and 3.2.1.MU3.

I’m excited to hear from the HP 2nd level support. I will update this blog post if I have more information.


Together with the StoreServ 8000 series, HP released a new version of the 3PAR Service Processor. The new version 4.4 is necessary for the new StoreServ models, but it also supports 3PAR OS releases below 3.2.2 (3.2.2 is the GA release for the new StoreServ models). So if you get a new StoreServ 7000 with 3PAR OS 3.2.1 MU3, simply deploy an SP version 4.4.

Microsoft Exchange 2013 shows blank ECP & OWA after changes to SSL certificates

This posting is ~4 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.
This issue is described in KB2971270 and is fixed in CU6.

I have run into this error a couple of times. After applying changes to SSL certificates (adding, replacing or deleting an SSL certificate) and rebooting the server, the event log is flooded with events from source “HttpEvent” and event ID 15021. The message says:

If you try to access the Exchange Control Panel (ECP) or Outlook Web Access (OWA), you will get a blank website. To solve this issue, open up an elevated command prompt on your Exchange 2013 server.

Check the certificate hash and application ID for the listed bindings. You will notice that the application ID for these three entries is the same, but that the certificate hash of one entry differs from the other two. And that’s the point. Remove the certificate binding with the wrong hash.

Now add it again with the correct certificate hash and application ID.
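The commands involved look roughly like this. This is a hedged sketch: the ipport binding, certificate hash and application ID below are placeholders, not values from this case; take the real values from the output of the show command:

```
netsh http show sslcert

rem Remove the binding with the wrong certificate hash (placeholder ipport)
netsh http delete sslcert ipport=0.0.0.0:444

rem Re-add it with the certificate hash and application ID shown by the
rem two correct entries (placeholder values)
netsh http add sslcert ipport=0.0.0.0:444 certhash=<hash> appid="{<application id>}"
```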

That’s it. Reboot the Exchange 2013 server and everything should be up and running again.

DataCore mirrored virtual disks full recovery fails repeatedly

This posting is ~4 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

Last Sunday a customer suffered a power outage for a few hours. Unfortunately, the DataCore Storage Server in the affected datacenter wasn’t shut down and therefore crashed. After the power was back, the Storage Server was started and the recoveries for the mirrored virtual disks began. Hours later, three mirrored virtual disks were still running full recoveries, and the recovery for each of them failed repeatedly.



The recovery ran until a specific point, failed, and started again. When a recovery failed, several events were logged on the Storage Server in the other datacenter (the Storage Server that wasn’t affected by the power outage):

Source: DcsPool, Event ID: 29

Source: disk, Event ID: 7

Source: Cissesrv, Event ID: 24606

The DataCore support quickly confirmed what we already knew: we had trouble with the backend storage of the DataCore Storage Server that was serving the full recoveries for the recovering Storage Server. The full recoveries ran until the point at which a non-readable block was hit. Clearly a problem with the backend storage.


To summarize this very painful situation:

  • VMFS datastore with productive VMs on DataCore mirrored virtual disks with no redundancy
  • Trouble with the backend storage on the DataCore Storage Server that was serving the mirrored virtual disks with no redundancy

Next steps

The customer and I decided to evacuate the VMs from the three affected datastores (each mirrored virtual disk represents a VMFS datastore). To avoid more trouble, we decided to split the unhealthy mirrors, leaving us with three single virtual disks. After shutting down the VMs on the affected datastores, we started a single storage vMotion at a time to move the VMs to other datastores. This worked until the storage vMotion hit the non-readable blocks. The storage vMotion failed, and the single virtual disks also went into the status “Failed”. After that, we mounted the single virtual disks from the other DataCore Storage Server (the one that was affected by the power outage and that was running the full recoveries). We expected that the VMFS on the single virtual disks would be broken, but to our surprise we were able to mount the datastores. We moved the VMs from these datastores to other datastores, and this process was flawless. Just to make this clear: we were able to mount the VMFS on virtual disks that were in the status “Full Recovery pending”. I had been quite sure that there was garbage on the disks, especially if you consider that a full recovery had been running that never finished.

The only way to remove the logical block errors is to rebuild the logical drive on the RAID controller. This means:

  • Pray for good luck
  • Break all mirrored virtual disks
  • Remove the resulting single virtual disks
  • Remove the disks from the DataCore disk pool
  • Remove the DataCore disk pool
  • Remove the logical drives on the RAID controller
  • Remove the arrays on the RAID controller
  • Replace the faulty physical disks
  • Rebuild the arrays
  • Rebuild the logical drives
  • Create a new DataCore disk pool
  • Add disks to the DataCore disk pool
  • Add mirrors to the single virtual disks
  • Wait until the full recoveries have finished
  • Treat yourself to a beer

Final words

This was very, very painful and, unfortunately, not the first time I had to do this for this customer. The customer is in close contact with the vendor of the backend storage to identify the root cause.

Windows guest customization fails after cloning a VM

This posting is ~4 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

Last week I got a call from a customer. The customer had tried to deploy new Citrix XenApp servers, and because the VMware template was a bit outdated, he cloned a provisioned and running Citrix XenApp VM instead. During this, the customer applied a guest customization specification to customize the guest OS (IP address, hostname etc.). Until this point everything was fine. But after the clone process, the guest customization started but never finished.



Using the VMware template, deployment and customization were successful. So the main problem was that the customer was unable to use a provisioned and running Windows guest to deploy new Windows guests. I checked the logs and found this error message in the setupact.log (you can find this log under C:\windows\system32\sysprep\panther):

I checked the rearm count with slmgr.vbs /dlv and saw that the remaining count was 1.

Cloning and customizing a Windows VM with a rearm count of 1 leads to the observed behaviour. After the cloning and the start of the customization, the rearm count is 0. Microsoft describes this behaviour in KB929828.


This error may occur if the Windows Software Licensing Rearm program has run more than three times in a single Windows image.

To resolve this issue, you must rebuild the Windows image.

vExpert Maish Saidel-Keesing wrote about this in his blog in 2011. He explains it very well; make sure that you read his three blog posts!

In my case, rebuilding the template wasn’t an option. Therefore I had to reset the rearm count. I searched a while and found a solution that worked for me. I’m quite sure that Microsoft doesn’t allow this, so I will not describe the procedure in detail. You will find it easily on the web…

The main task is to remove the WPA registry key. This key is protected during normal operation, so you have to do this using WinRE (Windows Recovery Environment) or WinPE (Windows Preinstallation Environment). After the removal of the WPA registry key, reboot the VM, add a new product key using slmgr.vbs /ipk and activate the Windows installation. If you check the rearm counter using slmgr.vbs /dlv, you will notice that it has been reset.

Always keep in mind that you can’t sysprep a Windows installation an infinite number of times.

HP StoreOnce: Avoid special characters in NAS share description

This posting is ~4 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

While I was playing with my shiny, new HP StoreOnce VSA in my lab, I noticed a curious behavior. I created a NAS share for some tests with Veeam Backup & Replication. Creating a new share is nothing fancy. You can create a share in two ways:

  • using the GUI, or
  • using the CLI

So I created a new share:



Nothing special, as you can see. I opened up an Explorer window, typed in the IP address of my StoreOnce VSA and… saw no share.



I repeated this process a couple of times, always with the same result. Then I went to the CLI and checked the newly created share:

So far, so good. I removed the share and tried to create the share using the CLI:

The command failed; no share was created. I verified the syntax of the command, but it was correct. I started to simplify the command and removed the description.

The share was added with the default description. I removed the share and tried it again with my description. The command failed again. After removing the ampersand (&) from the description, the share could be added. I tried the same from the GUI. Using the GUI, a share with an ampersand (&) in the description field could be added, but it wasn’t accessible, even after I removed the ampersand from the share description again. I had to remove and re-create the share with a valid description. Unfortunately, the GUI allows you to create the share even though the CLI command fails with the same settings. The GUI also doesn’t allow you to create a share with an empty description.

At this point, I can’t say if this is a bug or a known behaviour. I’m in contact with HP to clarify this. But you should avoid using special characters in the NAS share description.


Today, I got an e-mail from HP StoreOnce Engineering. They informed me that it’s not only the ampersand (&) you should avoid, but a whole set of special characters:

  • `
  • *
  • &
  • %
  • +
  • multiple spaces in a row

These characters can cause minor issues with Windows tools, like the Explorer. As a result, these special characters were banned in the latest 3.12.x CIFS server code. However, this ban was not communicated in the GUI. As a fix, the ban will be lifted with the 3.12.2 software to allow the use of the above-mentioned special characters.
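If you script share creation, a simple pre-check along these lines can catch such descriptions up front. This is a sketch under the assumption that the list above is complete; check_description is a hypothetical helper, not a StoreOnce command:

```shell
# Reject descriptions containing the characters banned by the 3.12.x
# CIFS server code: backtick, asterisk, ampersand, percent, plus,
# and multiple spaces in a row.
check_description() {
  case "$1" in
    *'`'*|*'*'*|*'&'*|*'%'*|*'  '*|*'+'*) echo "invalid" ;;
    *) echo "ok" ;;
  esac
}

check_description "Veeam backup repository"   # prints "ok"
check_description "Backup & Replication"      # prints "invalid"
```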

vCenter Server Appliance: Troubleshooting full database partition

This posting is ~4 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

A customer of mine twice had a full database partition on a VMware vCenter Server Appliance within 6 months. After the first outage, the customer increased the size of the partition which is mounted to /storage/db. Some months later (a few days ago), the vCSA became unresponsive again, again because of a filled-up database partition. The customer increased the size of the database partition once more (~ 200 GB!!), and today I had time to take a look at this nasty vCSA.

The situation



Within 2 days, the storage usage of the database partition increased from 75% to 77%. First, I checked the size of the database:

As you can see, the database itself was only 2 GB in size. The pg_log directory was more interesting:

The directory was full of log files, and the log files all contained only one message: WARNING: there is already a transaction in progress.
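Comparing the size of the data directory with its pg_log directory makes the imbalance obvious. A hedged sketch; the vPostgres paths are assumptions based on a vSphere 5.5 vCSA and may differ on other releases:

```shell
# Size of the embedded vPostgres database vs. its pg_log directory
du -sh /storage/db/vpostgres
du -sh /storage/db/vpostgres/pg_log
ls /storage/db/vpostgres/pg_log | wc -l   # number of log files
```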

The solution

This led me to VMware KB2092127 (After upgrading to vCenter Server Appliance 5.5 Update 2, pg_log file reports this error: WARNING: there is already a transaction in progress). And yes, this appliance had most likely been upgraded to U2. The solution is described in KB2092127 and is really easy to implement. Please note that this is only a workaround; as mentioned in the article, there is currently no real solution.

Event ID 4625 – Failure Reason: Domain sid inconsistent

This posting is ~5 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

Over the last two days I had a lot of trouble with Microsoft Remote Desktop Services (RDS), or to use the older wording, terminal services. To be honest: terminal servers are not really my specialty, and I was actually at the customer to help with some vSphere-related changes. But because I was there, I was asked to take a closer look at some problems with their Microsoft W2K8R2-based terminal server farm. Some problems with removable media (USB sticks etc.) and audio on IGEL thin clients were hard to troubleshoot, but we were able to fix them. The biggest problem didn’t look like one at first glance. The customer described that remote users couldn’t log in to a terminal server over VPN, but that the login was successful if the local administrator account of the terminal server was used. Some short tests confirmed the described behaviour. I checked the event logs and there it was: event 4625.

I checked the SIDs of the terminal servers with PsGetSid, and they all had the same SID. I asked the customer how he had deployed the terminal servers, and he explained that he had deployed the first server from a VMware template with a customization specification. The following terminal servers were cloned with VMware, but the customization specification wasn’t applied. Furthermore, sysprep wasn’t run. I can’t explain why this error was only logged when a user tried to connect over VPN, but it was reproducible.

The Solution

It was clear that we had to change the SID. Unfortunately, NewSID is not supported on Windows Server 2008 R2, so this wasn’t an option. The best way is to re-run sysprep. The customer and I developed an action plan to resolve the issue.

  1. Remove the server from the domain and add it into a workgroup
  2. Run Sysprep from C:\Windows\System32\Sysprep. Select “System Out-of-Box Experience (OOBE)”, “Generalize” and “Reboot”.
  3. After the reboot, rename the server with the old name. Activate Windows and do some tests.

These steps worked for us and resolved the issue. Please test this in your environment before you apply the changes.

VMware ESXi 5.5 host doesn’t mount VMFS 5 datastore

This posting is ~5 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

Yesterday I stumbled over a thread in a German VMware forum. A user noticed after a vSphere 5.5 update that a newly updated ESXi 5.5 host wasn’t able to mount some datastores. The host was updated with an HP customized ESXi 5.5 image. The other two hosts, running ESXi 5.1 installed from an HP customized image, had no problems. An HP P2000 G3 MSA array with iSCSI was used as shared storage. The datastores with VMFS version 5.54 were mounted; only datastores with VMFS 5.58 were not. The user evacuated the VMs off one of the datastores, and then deleted and recreated the datastore. The recreated datastore appeared for a short moment and then disappeared again.

I knew the problem. It’s a mixture of a problem caused by a change in the HP customized images and a known behaviour of ESXi with VMFS 5 datastores.

Changes in the HP customized ESXi images

HP removed the P2000 G3 VAAI plug-in for ESXi 5.x and 4.1 in September 2013 due to an incompatibility with HP Smart Array P711M, P712M and P721M SAS RAID controllers and HP P2000 G3 arrays running firmware TS230 or TS240. The incompatibility can cause messages like this:

These messages are associated with problems regarding the creation or mounting of datastores. HP published a customer advisory in February 2013 to address this issue.

VMFS Locking Mechanisms

VMFS supports two locking mechanisms:

  • SCSI reservations
  • Atomic Test and Set (ATS)

Locking is necessary in an environment where multiple hosts write to the same filesystem. It prevents a situation where multiple hosts concurrently write to the same blocks. SCSI reservations are the good old way; they are used if a storage system doesn’t support VAAI. A SCSI reservation locks a whole LUN. No other host can write to it until the reservation is removed. This can lead to performance problems. Many of you know the problem with too many SCSI reservations. Atomic Test and Set (ATS) is more intelligent: ATS is capable of locking only a specific sector of a LUN. But the storage system has to support this feature. For more information about VAAI, I recommend a blog article written by Chris Wahl.
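Whether ESXi actually sees a device as ATS-capable can be checked on the host itself. A hedged sketch; the device identifier is a placeholder:

```shell
# Shows the VAAI primitive support (including ATS) for a device
esxcli storage core device vaai status get -d <device id>
```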

You may know that there are different VMFS versions. Because of this, there are several situations where SCSI reservations are used instead of ATS, and others where ATS-only locking is used. I took this table from the vSphere documentation.

Storage Devices     New VMFS5                             Upgraded VMFS5                              VMFS3
Single extent       ATS only                              ATS, but can revert to SCSI reservations    ATS, but can revert to SCSI reservations
Multiple extents    Spans only over ATS-capable devices   ATS except when locks on non-head           ATS except when locks on non-head

An extent is a LUN which is used to expand a VMFS datastore by concatenating multiple LUNs together. Multiple extents can be served from different storage systems (with limitations…). A few words on the VMFS 5 versions:

ESXi Release      ESXi 5.0    ESXi 5.1    ESXi 5.5
VMFS 5 Version    5.54        5.58        5.60

Putting the pieces together

We have to assume that the datastores that couldn’t be mounted were created on an ESXi 5.1 host with the VAAI plug-in for the P2000 G3 installed, so they were single-extent and ATS-only. One host was updated to ESXi 5.5 with an HP customized image. This image doesn’t include the VAAI plug-in, so the new 5.5 host doesn’t support VAAI. No VAAI, no ATS. Because of this, the host was able to mount the older VMFS 5.54 datastores, but not the newer 5.58 ones. I assume that the 5.54 datastores had been upgraded from an older VMFS, so that they were ATS-capable but could also revert to SCSI reservations. When the user deleted the datastore and recreated it on a 5.1 host, the datastore was again version 5.58 and ATS-only.

The workaround and the solution

The workaround is to install the HP P2000 Software Plug-in for VMware VAAI. Simply disabling the Atomic Test and Set (ATS) primitive is not a workaround: VMFS 5 ATS-only datastores still couldn’t be mounted; in that case you would also have to disable the ATS-only mode on the datastores. A real solution is a firmware update on the P2000 G3. A few weeks ago HP released a new firmware for the P2000 G3: with firmware release TS251R004 the P2000 G3 VAAI plug-in is no longer supported, because T10 compliance for VAAI was added.

I’d like to point out that this problem can occur with any other storage system that needs a VAAI plug-in. If you update your hosts and forget to install the plug-in, you will run into the same problems.

Problem analysis with Kepner-Tregoe

This posting is ~5 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

When you deal with problems in IT, you often deal with problems where the root cause is unknown. To solve such problems, you have to use a systematic method; only a systematic method leads to a fast, effective and efficient solution. One of the most commonly observed methods in my career is based on approximation. We all know it as “trial and error”: someone keeps trying until the problem is solved. Often this method makes things worse than they were before, and it often leads to wrong conclusions and, furthermore, wrong results. If someone draws a wrong connection at the beginning of the analysis, it leads down a totally wrong path. I would like to illustrate this with an example:

John Doe tried to monitor VMware ESXi hosts with HP Systems Insight Manager (SIM). The ESXi hosts were running on different HP ProLiant models. John noticed that some of the ESXi hosts showed more information than others. After a very quick Google search he concluded that this was related to iLO 4 Agentless Monitoring, because the hosts that showed all information were ProLiant Gen8 models.

As you can imagine, this conclusion was dozens of miles off. The solution was simple: the Gen8 models had been installed with HP’s custom ESXi images, which include the necessary agents. This example shows another very ugly behavior: googling around in the hope of finding a problem description that sounds similar. This is often done by entering an error message into Google, selecting a search result and trying the proposed solution. Quite often the article is not even read; people simply scroll down to the solution. But the same error message can have different causes, which need different solutions.

What could be a systematic method to solve problems? I’d like to introduce Kepner-Tregoe (KT). KT stands for two things: a consulting company founded by Charles Kepner and Benjamin Tregoe, and a method. KT is mentioned by ITIL as a component of Problem Management in the Service Operation phase. You can use KT for problem solving, decision making or potential problem analysis. I will focus on the situation analysis and the problem analysis. The situation analysis is common to problem solving, decision making and potential problem analysis.

The Kepner-Tregoe method

The KT method is based on a rational process and it’s divided into four different processes:

  • situation analysis
  • problem analysis
  • decision analysis
  • potential problem analysis

Behind each process is a question you should ask.

The situation analysis

During the situation analysis the question is “What’s going on?”. At this point, the problem analysis hasn’t started yet. Before you can analyse the problem, you have to clarify the situation, outline concerns and set priorities. Ask yourself about the current and future impact, how much time you have to find a solution, and at which point a solution could become impossible (limitations because of time, budget etc.).

The problem analysis

The problem analysis consists of five consecutive steps:

  1. Define the problem
  2. Describe the problem
  3. Create hypotheses about the cause
  4. Test the hypotheses
  5. Verify the root cause

Use the 5 Ws to define the problem. Only a problem description that includes the 5 Ws fully describes a problem. Such a description will help you, and your colleagues, to understand the problem.

  • Who is affected by the problem?
  • Why is this important to solve the problem?
  • What are the symptoms?
  • When does the problem occur?
  • Where does the problem occur?

Once you have created your problem description with the 5 Ws, you can make the answers concrete with “IS” and “COULD BE but IS NOT” aspects. Let’s pick up the example from above:

Who is affected by the problem? HP ProLiant G6 and G7 models running VMware ESXi.

An HP ProLiant G7 model with a VMware ESXi image “IS” affected. An HP ProLiant G7 or Gen8 model with an HP custom ESXi image “COULD BE but IS NOT” affected.

As you can see, this dramatically reduces the number of possible causes, especially when you add the problem description and the symptoms. But this also shows another fact: you have to take a detailed look at the affected components/systems, and you have to take care not to miss any deviations between them (in the example, all hosts were running ESXi 5.1, but some of the hosts were running a VMware image and some an HP custom ESXi image). You should also identify what changes were made in the past. This may be answered by the “When?” question (When does the problem occur? After demoting one of the four Active Directory Domain Controllers).

Now it’s time to create hypotheses about the possible cause. Depending on the problem description, the past changes and the “IS” and “COULD BE but IS NOT” aspects of the problem, it should be possible to create one or more hypotheses.

With one or more hypotheses, you have to test each of them against the “IS” and “COULD BE but IS NOT” aspects. The question is: can the hypothesis explain the “IS” and “COULD BE but IS NOT” aspects? The hypothesis that best explains them is the most probable one.

Verifying the root cause is the last and trickiest part. You have to verify your assumptions and reflect on how you came to the decision about the root cause. If you are sure that you have identified the root cause, you can develop and implement a solution. After the implementation, you have to verify the result. Is the problem solved? Yes? Fine! If not, you have to feed this result back into the testing of the other hypotheses.


Kepner-Tregoe is a totally rational method. At the beginning it’s hard not to make quick assumptions, and it takes reflection. It’s something you have to train; I guarantee that you will get better with each problem you solve. KT problem analysis was used during the Apollo 13 mission. And what should I say? It worked! So give it a try.

EDIT: Kepner-Tregoe informed me via Twitter that there are two groups on LinkedIn where you can get more information and talk to other KT practitioners.