Tag Archives: software

DataCore mirrored virtual disks full recovery fails repeatedly

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Last Sunday a customer suffered a power outage for a few hours. Unfortunately, the DataCore Storage Server in the affected datacenter wasn't shut down cleanly and therefore it crashed. After the power was back, the Storage Server was started and the recoveries for the mirrored virtual disks began. Hours later, three mirrored virtual disks were still running full recoveries, and the recovery for each of them failed repeatedly.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The recovery ran until a specific point, failed and started again. When the recovery failed, several events were logged on the Storage Server in the other datacenter (the Storage Server that wasn’t affected from the power outage):

Source: DcsPool, Event ID: 29

Source: disk, Event ID: 7

Source: Cissesrv, Event ID: 24606

The DataCore support quickly confirmed what we already knew: We had trouble with the backend storage on the DataCore Storage Server that was serving the full recoveries for the recovering Storage Server. The full recoveries ran until the point at which a non-readable block was hit. Clearly a problem with the backend storage.


To summarize this very painful situation:

  • VMFS datastores with production VMs on DataCore mirrored virtual disks with no redundancy
  • Trouble with the backend storage on the DataCore Storage Server that was serving the mirrored virtual disks with no redundancy

Next steps

The customer and I decided to evacuate the VMs from the three affected datastores (each mirrored virtual disk represents a VMFS datastore). To avoid more trouble, we decided to split the unhealthy mirrors, which left us with three single virtual disks. After shutting down the VMs on the affected datastores, we started one Storage vMotion at a time to move the VMs to other datastores. This worked until a Storage vMotion hit the non-readable blocks: the Storage vMotion failed and the single virtual disk also went into the status “Failed”. After that, we mounted the single virtual disks from the other DataCore Storage Server (the one that was affected by the power outage and which was running the full recoveries). We expected that the VMFS on the single virtual disks was broken, but to our surprise we were able to mount the datastores. We moved the VMs from these datastores to other datastores. This process was flawless. Just to make this clear: We were able to mount the VMFS on virtual disks that were in the status “Full Recovery pending”. I was quite sure that there was garbage on the disks, especially if you consider that there was a full recovery running that never finished.

The only way to remove the logical block errors is to rebuild the logical drive on the RAID controller. This means:

  • Pray for good luck
  • Break all mirrored virtual disks
  • Remove the resulting single virtual disks
  • Remove the disks from the DataCore disk pool
  • Remove the DataCore disk pool
  • Remove the logical drives on the RAID controller
  • Remove the arrays on the RAID controller
  • Replace the faulty physical disks
  • Rebuild the arrays
  • Rebuild the logical drives
  • Create a new DataCore disk pool
  • Add disks to the DataCore disk pool
  • Add mirrors to the single virtual disks
  • Wait until the full recoveries have finished
  • Treat yourself to a beer
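
If you want to verify the state of the backend before and after such an action: the Cissesrv events suggest an HP Smart Array controller, so a status check could look like this (a sketch; the hpacucli tool and the slot number are assumptions, adjust them to your environment):

  # Show the status of all logical and physical drives on the controller in slot 0
  hpacucli ctrl slot=0 ld all show status
  hpacucli ctrl slot=0 pd all show status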

Final words

This was very, very painful and, unfortunately, not the first time I had to do this for this customer. The customer is in close contact with the vendor of the backend storage to identify the root cause.

Windows guest customization fails after cloning a VM

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Last week I got a call from a customer. The customer had tried to deploy new Citrix XenApp servers, and because the VMware template was a bit outdated, he tried to clone a provisioned and running Citrix XenApp VM. During this, the customer applied a guest customization specification to customize the guest OS (IP address, hostname etc.). Until this point everything was fine. But after the clone process, the guest customization started and never finished.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Using the VMware template, deployment and customization were successful. So the main problem was that the customer was unable to use a provisioned and running Windows guest to deploy new Windows guests. I checked the logs and found this error message in the setupact.log (you can find this log under C:\windows\system32\sysprep\panther):

I checked the rearm count with slmgr.vbs /dlv and saw that the remaining count was 1.
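
A quick way to do this check from an elevated PowerShell (the exact wording of the output line varies between Windows versions):

  # Print the licensing details and filter for the rearm count
  cscript //nologo C:\Windows\System32\slmgr.vbs /dlv | Select-String -Pattern 'rearm'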

Cloning and customizing a Windows VM with a rearm count of 1 leads to the observed behaviour: after the cloning and the start of the customization, the rearm count is 0. Microsoft describes this behaviour in KB929828.


This error may occur if the Windows Software Licensing Rearm program has run more than three times in a single Windows image.

To resolve this issue, you must rebuild the Windows image.

vExpert Maish Saidel-Keesing wrote about this in his blog in 2011. He explained it very well; make sure that you read his three blog posts!

In my case, rebuilding the template wasn't an option. Therefore I had to reset the rearm count. I searched for a while and found a solution that worked for me. I'm quite sure that Microsoft doesn't allow this, so I won't describe the procedure in detail. You will find it easily on the web…

The main task is to remove the WPA registry key. This key is protected during normal operation, so you have to do this using WinRE (Windows Recovery Environment) or WinPE (Windows Preinstallation Environment). After the removal of the WPA registry key, reboot the VM, add a new key using slmgr.vbs /ipk and activate the Windows installation. You can check the rearm counter using slmgr.vbs /dlv and you will notice that it has been reset.
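
The steps after the reboot boil down to three slmgr.vbs calls (the product key below is a placeholder; use your own key):

  # Install the product key, activate Windows, then verify the rearm count
  cscript //nologo C:\Windows\System32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
  cscript //nologo C:\Windows\System32\slmgr.vbs /ato
  cscript //nologo C:\Windows\System32\slmgr.vbs /dlv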

Always keep in mind that you can't run sysprep on a Windows installation an infinite number of times.

HP Service Pack for ProLiant 2015.04

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Some weeks ago, HP published an updated version of their HP Service Pack for ProLiant (SPP). The SPP 2015.04.0 added

  • support for new HP ProLiant servers and options,
  • support for Red Hat Enterprise Linux 6.6, SUSE Linux Enterprise Server 12, VMware vSphere 5.5 U2 and (of course) VMware vSphere 6.0,
  • HP Smart Update Manager v7.2.0,
  • the HP USB Key Utility for Windows v2.0.0.0, which can now handle downloads greater than 4 GB (important, because this release may not fit on standard DVD media…), and
  • select Linux firmware components in RPM format.

In addition, the SPP covers two important customer advisories:

  • ProLiant Gen9 Servers – SYSTEM ROM UPDATE REQUIRED to Prevent Memory Subsystem Anomalies on Servers With DDR4 Memory Installed Due to Intel Processor BIOS Upgrades (c04542689)
  • HP Virtual Connect (VC) – Some VC Flex-10/10D Modules for c-Class BladeSystem May Shut Down When Running VC Firmware Version 4.20 or 4.30 Due to an Erroneous High Temperature Reading (c04459474)

Two CAs fixed, but another CA arose (and it's an ugly one…):

  • HP OneView 1.20 – Upgrading Virtual Connect Version 4.40 with Service Pack for ProLiant (SPP) 2015.04.0 Will Result in a Configuration Error and an Inability to Manage the HP Virtual Connect 8Gb 24-Port Fibre Channel Module (c04638459)

If you are using HP OneView (versions above 1.10, up to and including 1.20.04), you will be unable to manage HP Virtual Connect 8Gb 24-port Fibre Channel Modules after updating the module to firmware version 3.00 or later. This is also the case if you use the smart components from the Virtual Connect firmware version 4.40 release! After the update, the VC module will enter a “Configuration Error” state. Currently there is no fix. The only workaround is not to update to HP Virtual Connect 8Gb 24-port Fibre Channel Module firmware version 3.00. This will be fixed in a future HP OneView 1.20 patch.

Important to know: With this release, the SPP may not fit on standard DVD media! But to be honest: I've never burned the SPP to DVD; I've always used USB media.

Check the release notes for more information about this SPP release. You can download the latest SPP version from the HP website. You need an active warranty or HP support agreement to download the SPP.

Safe (or safer) than backup to tape: HP StoreOnce

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

When talking to SMB customers, most of them don't want to talk about their backup strategy. It's a paradox: They know that data loss can ruin their business, but they don't want to invest money into a fully tested recovery concept (I try to avoid the term “backup concept” – recovery is the key). Because of tight budgets and lacking knowledge, many customers use traditional concepts in a virtualized world. This often ends in traditional backup applications with agents deployed into the guest OS, and backups that are written to tape (or worse: to USB disks). If you ask a customer “Why do you store your data on tape?”, only a few argue with costs per GB or performance. Most customers argue with something like

  • “We’ve been doing this for years, so why should we change it?”
  • “We have to store our tapes offsite”
  • “There is a corporate policy that forces us to store our backups on tape”

In most cases, the attempt to sell a backup-to-disk appliance (like an HP StoreOnce backup system) dies with the last arguments. Customers tend not to trust designs in which they don't have a backup on tape. Some customers have a strong desire to have a tape which is labeled “MONDAY” or “FRIDAY FULL”. To be honest: Usually I see this behaviour only at SMB customers. Backup-to-disk appliances are often described as

  • expensive,
  • complex, and
  • vulnerable

None of them applies to an HP StoreOnce backup system. Not even “expensive”, if you don't focus solely on CAPEX.

HP StoreOnce

Please allow me to write some sentences about HP StoreOnce.

An HP StoreOnce backup system is available as a physical or virtual appliance. HP offers a broad range of physical appliances that can store between 5.5 TB and 1,728 TB BEFORE deduplication. The virtual StoreOnce VSA is available with a capacity of 4 TB, 10 TB and 50 TB before deduplication. And don't forget the free 1 TB StoreOnce VSA! All HP StoreOnce backup systems, regardless of whether physical appliance or VSA, share the same StoreOnce deduplication technology, as well as the same replication and security features. In fact, the StoreOnce VSA runs the same (Linux-based) software as the physical appliances and vice versa. You can add features by adding software options:

  • HP StoreOnce Catalyst
  • HP StoreOnce Replication
  • HP StoreOnce Security Pack
  • HP StoreOnce Enterprise Manager

HP StoreOnce Catalyst allows the seamless movement of deduplicated data across StoreOnce-capable devices. This means that an HP Data Protector media agent can deduplicate data during a backup, write the data to an HP StoreOnce backup system, and then the data can be replicated to another HP StoreOnce backup system. All without the need to rehydrate the data on the source and deduplicate it on the destination again. The StoreOnce VSA includes an HP StoreOnce Catalyst license!

HP StoreOnce Replication enables an appliance or a VSA to act as the target in a replication relationship. Only the target needs to be licensed. Fan-in describes the number of possible source appliances:

Model           Maximum fan-in (source appliances)
StoreOnce VSA   8
StoreOnce 2700  8
StoreOnce 2900  24
StoreOnce 4500  24
StoreOnce 4700  50
StoreOnce 4900  50
StoreOnce 6200  384

As you can see, even the StoreOnce VSA can be used as a target for up to 8 source appliances. Replication is a licensable feature, except for the StoreOnce VSA: the StoreOnce VSA already includes the replication license!

HP StoreOnce Enterprise Manager can be obtained for free and allows you to monitor up to 400 physical appliances or StoreOnce VSAs. It provides monitoring, reporting, trend analysis and forecasting. It integrates with the StoreOnce GUI for single-pane-of-glass management of physical appliances and VSAs.

HP StoreOnce Security Pack enables data-at-rest and data-in-flight encryption (the latter using IPsec and only for StoreOnce Catalyst), as well as secure data deletion. The same applies as for the HP StoreOnce Catalyst and Replication licenses: The StoreOnce VSA already includes this license.

HP StoreOnce Deduplication

Deduplication is nothing really new. In simple terms, it's a technique to reduce the amount of stored data by removing redundancies. Data that is detected as redundant isn't stored again on the disks; only a pointer to the stored data is set. This runs the risk of potential data loss: What if the original block gets corrupted? Grist to the mill of the tape lovers (tapes never fail… for sure…).

Integrity Plus

Don't worry, I won't bore you with stuff about a dead (or nearly dead) CPU architecture. Integrity Plus is HP's approach to an end-to-end verification process. Let's take a look at how data comes into a StoreOnce backup system. From a client perspective, you can choose between Virtual Tape Library (VTL), NAS emulation (CIFS or NFS) and StoreOnce Catalyst.

When data is written to a VTL, a CRC is computed for each block and stored together with the data block on disk. During a restore, a CRC is computed for every block that is read from disk and compared to the initially stored CRC. If it differs, a SCSI check condition is reported. Because the NAS emulation and StoreOnce Catalyst don't use the SCSI protocol, no CRC is computed and stored to disk there; the integrity of the written data is guaranteed in other ways.

At the beginning of the deduplication process, the incoming data is divided into chunks. HP uses a variable length for each data chunk, but on average a data chunk is 4 KB. A smaller chunk size leads to better deduplication results. A SHA-1 (AFAIK 160 bit) hash is computed for each data chunk. This chunk hash is used to identify duplicate data by comparing it to other chunk hashes. At this point, a sparse index is used to find possible candidates for redundant data chunks. Instead of holding all chunk hashes in memory, only a few hashes are stored in RAM. The remaining chunk hashes are stored as metadata on disk. The container index contains a list of chunk hashes and a pointer to the data container where the data chunk is stored. Before data chunks are stored on disk, multiple chunks are compressed (using LZO) and a SHA-1 checksum is computed for the compressed chunks. This checksum is stored on disk. When the compressed data is decompressed, a new checksum is computed and compared to the stored SHA-1 checksum. Metadata and container index files are protected with MD5 checksums. In addition, a transaction log file is maintained for the whole process and the sparse index is frequently flushed to disk.
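
To make the mechanics more tangible, here is a toy PowerShell sketch of hash-based deduplication. It is not HP's implementation: it uses fixed 4 KB chunks instead of variable-length chunking, keeps the complete index in memory instead of a sparse index, and skips compression; the file path is a placeholder.

  # Toy deduplication: split a file into 4 KB chunks, hash each chunk with SHA-1,
  # and store a chunk only if its hash hasn't been seen before.
  $sha1      = [System.Security.Cryptography.SHA1]::Create()
  $chunkSize = 4KB
  $store     = @{}    # chunk hash -> chunk data (the "data containers")
  $recipe    = @()    # ordered chunk hashes needed to rebuild the file

  $bytes = [System.IO.File]::ReadAllBytes('C:\temp\backup.bin')
  for ($offset = 0; $offset -lt $bytes.Length; $offset += $chunkSize) {
      $end   = [Math]::Min($offset + $chunkSize, $bytes.Length) - 1
      $chunk = $bytes[$offset..$end]
      $hash  = [BitConverter]::ToString($sha1.ComputeHash($chunk))
      if (-not $store.ContainsKey($hash)) { $store[$hash] = $chunk }  # new, unique chunk
      $recipe += $hash    # duplicates only add a pointer, not data
  }
  "{0} chunks in total, {1} unique chunks stored" -f $recipe.Count, $store.Count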

When data is coming into the StoreOnce backup system, a match with a chunk hash in memory can lead the system (using the sparse index, metadata and container index files) to containers with associated data chunks (e.g. data chunks that represent a backed-up VM). And if a data chunk of the incoming data is a duplicate, it is very likely that many of the following data chunks are also duplicates.

All physical appliances use RAID 6 to protect data in case of disk failures. Only the HP StoreOnce 2700 uses RAID 5, because the appliance can only hold 4 SAS-NL disks. When using the StoreOnce VSA, you can use any RAID level for the underlying storage. But you should use something above RAID 0…


Let’s summarize:

  • RAID
  • Supercapacitors on RAID controllers to protect write cache in case of power loss
  • ECC memory
  • Integrity Plus to protect the data within the StoreOnce backup system
  • StoreOnce Replication to replicate data to other HP StoreOnce backup systems
  • data-at-rest and data-in-flight encryption as well as secure deletion with the StoreOnce Security Pack

Sounds very safe to me. Tape isn't dead; tape has its right to exist. But backup to tape isn't safer than backup to a StoreOnce backup system. The latter can offer you faster backups AND restores, and new backup and recovery options (e.g. backups in ROBO offices that are replicated to the central datacenter). Think about the requirements for storing tapes (temperature, humidity, physical access), regular recovery tests, copying tapes to newer tapes etc. Consider not only CAPEX, also remember OPEX.

An HP StoreOnce backup system is perfect for SMBs. It simplifies backup and recovery and it can offer new opportunities. Test-drive it using the free 1 TB StoreOnce VSA! Remember: The StoreOnce VSA includes StoreOnce Replication, Catalyst and the Security Pack. Even the free 1 TB StoreOnce VSA.

Publishing Outlook Web Access with Microsoft Web Application Proxy (WAP)

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Microsoft introduced the Web Application Proxy (WAP) with Windows Server 2012 R2 and has positioned it as a replacement for Microsoft Unified Access Gateway (UAG), Threat Management Gateway (TMG) and IIS Application Request Routing (ARR). WAP is tightly bound to the Active Directory Federation Services (AD FS) role. WAP can be used

  • to pre-authenticate access to published web applications, and
  • to function as an AD FS proxy

The dedicated AD FS proxy role was removed in Windows Server 2012 R2 and replaced by the WAP role. Because WAP stores its configuration in AD FS, you must deploy AD FS in your organization. The server that hosts the WAP role has no local configuration. This allows you to deploy additional WAP servers to create a cluster deployment; the additional servers get their configuration from AD FS.

The deployment of WAP can be split into two parts:

  • deployment of the AD FS role
  • deployment of the WAP role

The AD FS deployment

You can deploy the AD FS role on a domain controller or on a separate AD member server. AD FS acts as an identity provider. This means that it authenticates users and provides security tokens to applications that trust the AD FS instance. On the other hand, it can act as a federation provider. This means that it can consume tokens from other identity providers and can provide security tokens to applications that trust AD FS.

The first step is to install the AD FS role on an AD member server or domain controller. I used the DC in my lab; depending on your needs, this can be different. I used PowerShell to install the AD FS role.
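
Installing the role is a one-liner (ADFS-Federation is the feature name on Windows Server 2012 R2):

  # Install the AD FS role including the management tools
  Install-WindowsFeature ADFS-Federation -IncludeManagementTools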

A reboot is not necessary. The next step is to configure the AD FS role. This process is supported by a configuration wizard. Before you can start, it's necessary to deploy the Group Managed Service Account (gMSA), which requires a short trip to the PowerShell console.
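
The essential prerequisite for a gMSA is a KDS root key in the domain. In a lab you can backdate its effective time so the key is usable immediately (never do this in production, where you should wait the default 10 hours for AD replication):

  # Create the KDS root key and make it effective immediately (lab only!)
  Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))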

Then you can start the configuration wizard. If this is the first AD FS server, select the first option “Create the first federation server in a federation server farm”.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To perform the configuration, you need an account with domain administrator permissions. In my case, I simply used the Administrator account.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You need to enroll an SSL certificate that is used for AD FS. This SSL certificate must include the DNS name of the AD FS server and also the Subject Alternative Names enterpriseregistration and enterpriseregistration.yourdomainname.tld. The screenshot includes the values that I used in my lab deployment. I entered these values into the “Attributes” box:
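
For reference, if you request the certificate through the classic CA web enrollment pages (an assumption; any enrollment method that lets you specify SANs works), the entries in the “Attributes” box follow this pattern, shown here with my lab names as placeholders:

  san:dns=dc.vcloudlab.local&dns=enterpriseregistration&dns=enterpriseregistration.vcloudlab.local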


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Create the certificate and export it with the private key as a PFX file. You must import the certificate into the “Personal” store of the local computer that acts as the AD FS server. You also need two DNS entries for the names that are included in the certificate.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If the certificate import was successful, you can select the certificate in the wizard. Add the Federation Service Name and the Display Name.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The Service Account can be an existing domain user account or a Managed Service Account. I used my Administrator account for simplicity.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If you deploy a single server, you can use the Windows Internal Database. If you plan to deploy multiple AD FS servers, you have to use a SQL Server database.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Review the options and continue with the pre-requisite checks. If everything went well, you can proceed with the installation. Finish the setup and close the wizard.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Open a browser and enter the AD FS URL into the address bar. In my case this URL looks like this:
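
For AD FS on Windows Server 2012 R2, the idp-initiated sign-on test page typically follows this pattern (the host name below is just an example; substitute your own federation service name):

  https://adfs.vcloudlab.local/adfs/ls/idpinitiatedsignon.htm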



Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If you get a screen like this, everything’s fine and AD FS works as expected. Check the Windows Server 2012 R2 AD FS Deployment Guide for more information. Now it’s time to deploy the Web Application Proxy.

The WAP deployment

To install the WAP role, simply open a PowerShell console and run the Install-WindowsFeature cmdlet.
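
The feature name for the WAP role is Web-Application-Proxy:

  # Install the Web Application Proxy role including the management tools
  Install-WindowsFeature Web-Application-Proxy -IncludeManagementTools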

Then you can run the WAP configuration wizard. This wizard guides you through the configuration of the WAP role.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

First you have to connect to the AD FS server. Enter the Federation service name you used to deploy the AD FS instance, and provide the necessary user credentials.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

At this point you have to select the certificate that is used by the AD FS proxy. You can use the same certificate you used for the AD FS server, but you can also create a new certificate. The certificate must be imported into the “Personal” store of the WAP server.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Confirm the settings and click “Configure”. At this point, the wizard executes the shown PowerShell command.
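
Under the hood this is the Install-WebApplicationProxy cmdlet. A sketch of what the wizard runs, with the federation service name from my lab and the thumbprint as a placeholder:

  # Configure WAP against the existing AD FS instance
  $cred = Get-Credential    # an account with AD FS administration rights
  Install-WebApplicationProxy -FederationServiceName 'adfs.vcloudlab.local' `
      -FederationServiceTrustCredential $cred `
      -CertificateThumbprint '<thumbprint of the AD FS certificate>'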


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Close the wizard and open the management console of the Web Application Proxy to check the operational status. At this point, the WAP only acts as an AD FS proxy.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To test the functionality, I decided to publish Outlook Web Access (OWA). Use the “Publish New Application Wizard” to publish a new application.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To publish OWA, select “Pass-through” as the pre-authentication method.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now it's getting interesting: When you enter the external URL, the backend server URL is filled in automatically. External and backend URL have to be the same. Because of this, you need split DNS (see “Configure the Web Application Proxy Infrastructure” and “AD FS Requirements” in the Microsoft TechNet Library). You also need a valid external certificate that matches the FQDN used in the external URL.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Check the settings and click “Publish”. The wizard executes the shown PowerShell command.
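
Again, this boils down to a single cmdlet. A sketch with my lab URL and a placeholder thumbprint:

  # Publish OWA through WAP with pass-through pre-authentication
  Add-WebApplicationProxyApplication -Name 'Outlook Web Access' `
      -ExternalPreauthentication PassThrough `
      -ExternalUrl 'https://cas.terlisten-consulting.de/owa/' `
      -BackendServerUrl 'https://cas.terlisten-consulting.de/owa/' `
      -ExternalCertificateThumbprint '<thumbprint of the external certificate>'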


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Close the wizard and check the functionality of the published application. This screenshot shows the access to OWA from one of my management VMs (MGMTWKS1):


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This drawing shows my lab setup. I used two subnets to simulate internal and external access, as well as split DNS.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The host dc.vcloudlab.local has the AD FS role installed and resolves cas.terlisten-consulting.de to the IP address of the HAProxy instance. MGMTWKS1 resolves the same FQDN to the IP address of WAP1, my WAP server.

Final words

This is only a very, very basic setup that I deployed in my lab. The installation was not very difficult and I was quickly able to set up a working environment. Before you start to deploy AD FS/WAP, I recommend taking a look at the TechNet Library.

Shady upgrade path for NetApp ONTAP 7-Mode to cDOT

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

NetApp has offered Data ONTAP for some time in two flavours:

  • 7-Mode
  • Clustered Data ONTAP (cDOT)

With cDOT, NetApp has rewritten ONTAP nearly from scratch. The aim was to create a storage OS that leverages a scale-out architecture and storage virtualization techniques, as well as providing non-disruptive operations. NetApp needed some release cycles to get cDOT to the point where it provides all the features that customers know from 7-Mode. With Data ONTAP 8.3, NetApp has reached this point. Even MetroCluster is now supported. That's a huge improvement and I'm glad that NetApp has made it. But NetApp wasted no time in cutting off old habits: With ONTAP 8.3, 7-Mode is no longer offered. Okay, no big deal. Customers can migrate from 7-Mode to cDOT. Yes, indeed. But it's not as easy as you might think.

First of all: You can't update to cDOT in-place. You have to wipe the nodes and re-install Data ONTAP. That makes it nearly impossible to migrate a running Filer without downtime and/or buying or loaning additional hardware. Most customers migrate to cDOT at the same time as they refresh the hardware. The data can be migrated in different ways. NetApp offers the 7-Mode Transition Tool (7MTT). 7MTT leverages SnapMirror to get the data from the 7-Mode to the cDOT Filer. But you can also use plain SnapMirror without 7MTT to migrate the data. The switchover from the old to the new volume is an offline process: The accessing servers have to be disconnected, and they must be connected to the new cDOT Filer and volume. 7MTT can only migrate NAS data! If you wish to migrate SAN data (LUNs), you have to use NetApp's DTA2800 appliance or something like VMware Storage vMotion. Other migration techniques, like Storage vMotion, robocopy etc., can also be used.

I know that cDOT is nearly completely rewritten, but such migration paths are a PITA. Especially if customers have just bought new equipment with ONTAP 8.1 or 8.2 and now wish to migrate to 8.3.

Another pain point is NetApp's MetroCluster. With NetApp MetroCluster, customers can deploy active/active clusters between two sites up to 200 km apart. NetApp MetroCluster leverages SyncMirror to duplicate RAID groups to different disks. NetApp MetroCluster is certified for vSphere Metro Storage Cluster (vMSC). One can say that MetroCluster is a bestseller. I know many customers that use MetroCluster with only two nodes. That's where a 2-node HA pair is cut in the middle and spread over two locations. Let's assume that a customer is running a stretched MetroCluster with two nodes and Data ONTAP 8.2. The customer wants to migrate to ONTAP 8.3. This means that he has to migrate to cDOT. No problem, because with ONTAP 8.3, cDOT offers support for NetApp MetroCluster. But:

  1. You can’t update to cDOT in-place. So either wipe the nodes or get (temporary) additional hardware.
  2. NetApp MetroCluster with cDOT requires a 2-node cluster at each of the two sites (four nodes in total)

Especially when you look at the second point, you will quickly realize that all customers that are running a 2-node MetroCluster have to purchase additional nodes and disks. Otherwise they can't use MetroCluster with cDOT. This leaves only one migration path: Use ONTAP 8.2 with 7-Mode and wait until the hardware needs to be refreshed.

This is really bad… This is a shady upgrade path.


NetApp is working hard to make the migration path better.

  • The newest version of 7MTT is capable of migrating LUNs from 7-Mode (7DOT) to cDOT
  • At NetApp Insight 2014, a 2-node cDOT MetroCluster was announced, which will be released soon

Thank you, Sascha, for this update.

Importance of client-side proxy settings in Exchange 2013 environments

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

There is an advantage in solving problems: You can learn something. I'm currently migrating a small Exchange 2007 environment to Exchange 2013. The first thing I learnt was that IT staff still use their own accounts for administration, and sometimes they assign administrator rights to users for testing and troubleshooting purposes. This can be a problem, as I described in my last posting. Today I learnt something different: Sometimes it's the little things that bring you to despair.

After moving a mailbox from Exchange 2007 to 2013, Outlook must change the server for client access. Nothing fancy, and the user normally doesn't notice it. If Outlook is online during the mailbox migration, the user gets a message that he has to restart Outlook. Please note that you need at least Outlook 2007 SP3 when you wish to migrate to Exchange 2013. This is because of an important change with Exchange 2013: the abolition of direct MAPI connections. Exchange 2013 only supports RPC-over-HTTP (aka Outlook Anywhere), even for LAN connections. RPC-over-HTTP has several advantages, e.g. the CAS role has to deal with only one protocol, and load balancing and high availability get easier.

The problem

After moving a mailbox from Exchange 2007 to 2013, the Outlook 2010 client wasn't able to connect to the Exchange server. The server was correctly changed, as I was able to see in the Outlook profile, but every time I tried to start Outlook 2010, I got this error:


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Moving the mailbox back to Exchange 2007 solved the problem. Moving the mailbox again to Exchange 2013 resulted in the same error. Outlook Web Access was working fine (internally and externally) on Exchange 2007 and 2013. The Microsoft Office Outlook Connectivity Test completed successfully after moving the mailbox to Exchange 2013. Even Exchange ActiveSync (EAS) worked on the Exchange 2013 server. Only Outlook 2010 wasn't able to connect to the mailbox when it was on the Exchange 2013. The customer and I were puzzled… While testing this and that, we got an error while accessing Outlook Web Access. On another client, with another user, OWA worked fine. BÄÄÄM! A possible cause popped into my mind.

The solution

I immediately checked the proxy settings and there it was: A big proxy bypass list in the Internet Explorer settings with several entries, but the new Exchange server was missing. I added the server to the proxy bypass list and Outlook started without any problems. To be honest: It was a bit more complex, because the proxy in use wasn't an internal system. A solution provider operates it, and the proxy settings were managed by a GPO that wasn't working correctly. In addition to that, an AD group membership was used to allow users to pass a web filter. But at the core it was the missing entry for the new Exchange server that caused the problem.

The explanation

Exchange 2013 only supports RPC-over-HTTP, and Outlook uses the system-wide proxy settings for it. Therefore, HTTP(S) traffic is sent to the proxy server (regardless of whether the destination is internal or external), unless there is an entry in the proxy bypass list for the destination (in this case the Exchange server). If the proxy can't handle the traffic, Outlook will not be able to connect to the Exchange server. With MAPI, the proxy isn't a problem, because MAPI traffic isn't sent to the proxy. This explains why Outlook was able to connect to the Exchange server when the mailbox was moved back to Exchange 2007: With Exchange 2007, Outlook uses MAPI for the connection; with Exchange 2013, RPC-over-HTTP is used.
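
A quick way to verify what Outlook will actually use is to read the user's WinINET proxy configuration from the registry; ProxyOverride holds the bypass list:

  # Show proxy server and bypass list of the current user (WinINET, used by IE and Outlook)
  Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings' |
      Select-Object ProxyEnable, ProxyServer, ProxyOverride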

So if you experience connection problems after moving a mailbox to Exchange 2013, check your proxy settings. This also applies to older Exchange versions when using Outlook Anywhere, because Outlook Anywhere also uses RPC-over-HTTP.

TeamViewer Connection with Royal TS

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Some of my customers use TeamViewer to provide quick access to their systems, without the need to configure VPN connections, install software on hosts etc. TeamViewer provides fast and secure access without the need to install software: Simply start teamviewer.exe and choose whether you want to connect to a host, or use the session ID and password to allow someone else to access your computer. TeamViewer is free for all non-commercial users! So it's a great choice for providing remote support to all your family members.

I use Royal TS as my primary tool for remote connection management. Most of my connections are Microsoft Remote Desktop, VNC or SSH. But I also need TeamViewer. In contrast to Remote Desktop Manager, Royal TS doesn't have a plug-in for TeamViewer. But I found a hint in the Royal TS knowledge base on how to solve this: a command task.

Create a command task for TeamViewer

Open Royal TS and right-click “Tasks” in your connection document. Select “Add” > “Command Task”.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Enter a descriptive name. You have to create a Command Task for each TeamViewer connection you want to save.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Enter the path to your TeamViewer executable and the working directory. The arguments are the key: Session ID and password are not entered directly, but provided through custom fields.
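
The arguments line might look something like this, assuming TeamViewer accepts the session ID and password via the -i and -P switches (verify this against the command-line reference of your TeamViewer version) and that Royal TS replaces the $CustomField1$/$CustomField2$ tokens at runtime:

  -i $CustomField1$ -P $CustomField2$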


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Select “Custom Fields” and enter the session ID in the first, and the password in the second field. If you want to protect both fields, you can use the protected custom fields; then you have to change the custom field references in the arguments accordingly.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Save the connection and start the task. After a few seconds you should see the desktop of the remote host.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Because session ID and password may change over time, you have to change them in the command task. This is a bit inconvenient, but if you don't frequently connect to random hosts, this should work fine.

Royal TS – Remote connection management for Windows

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
Disclaimer: I use an NFR license that was provided to me as a vExpert by code4ward free of charge.

I have searched for a relatively long time for a solution to manage multiple remote connections, like RDP, VNC or SSH. I tried different free tools, but none of them fulfilled my requirements, which are quite simple: Manage different connections & credentials. First I tried Devolutions Remote Desktop Manager, which was quite good. But to be honest: It was a bit too much for my needs. Justin Paul wrote a nice review of Remote Desktop Manager. The second product I tested was more suitable: Royal TS for Windows.


What should I say? The installation is really easy: Start the setup, click “Next”, “Next”, “Next” and “Finish”. ;) I took some screenshots to show you the setup. Simply click “Next”.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Accept the license agreement.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “Next”.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Choose the desired installation path or accept the default installation path.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You can now select the file types which should be opened automatically with Royal TS. The selection shown in the screenshot is the default selection.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Check the box if you want to create a desktop shortcut.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “Install” to start the installation.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Wait until the installation finishes.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “Finish” and enjoy Royal TS. :)


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Without entering a license, you can use Royal TS with shareware limitations. That means (source):

  • You cannot have more than 10 connections per Royal TS application instance
  • You cannot have more than 10 credentials per Royal TS application instance
  • You can open only one Royal TS document per application instance

A single user license with one year of software maintenance costs 25 €, which is really cheap considering the features. To be honest: If I didn't have an NFR license, I would pay for it! If you use Mac OS X, simply buy Royal TSX, which also costs 25 €, or the bundle (Windows + Mac OS X) for 37.50 €. I recommend checking the other Royal TS offerings!


When you start Royal TS, you get a neat and well-known GUI in ribbon-style.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Documents are used to store connections, credentials and tasks. I recommend creating a document in which you can store your connections and credentials.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I also recommend protecting the document with a password, especially if you store credentials in it.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I'd like to show you how you can create an RDP connection. I will focus on the main functions. Create a new connection and choose “Remote Desktop”.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Enter a name for the connection. This will automatically be entered into the “Computer Name” field, so you should use an FQDN or an IP address.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You can specify credentials for this specific connection, enter no credentials or choose another option. I tend to create credentials and reuse them across different connections.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Depending on your display configuration, you can open the connection in a tab, in a window or on an external display.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I like to redirect some of my local drives into the RDP session, so that I can share files more easily.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Tasks can be used to start actions on connect or disconnect, e.g. create a VPN tunnel. The imagination knows no limits. :)


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

In contrast to Remote Desktop Manager, Royal TS can manage serial connections, so you can also use Royal TS for serial console access.


All in all, Royal TS is a great tool that I do not want to miss. Download it, try it, buy it. :) The 25 € are a good investment in time savings and productivity.

Support of HP OEM VMware bundles on non HP Hardware

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Hewlett-Packard (HP) offers a broad range of OEM VMware software for their HP ProLiant server family (VMware Virtualization Software and Client Virtualization with VMware View and VMware ThinApp). A customer can buy HP ProLiant servers and VMware software from HP. This also includes support for hardware and software, which makes things easy in a support case: You only have to call HP and they will do the rest. As you may know, I work for an IT solution provider and HP partner. It's quite common that a solution offered by us consists of a large set of HP hardware and software. This has benefits for both sides, for us and for the customer, especially from the support perspective. The customer has a multi-vendor solution (HP and VMware), but support is done by HP. The other side of the coin is the financial perspective: The higher the project value, the better the discounts from HP. So it's quite common that we sell HP OEM VMware licenses and support.

Today a colleague came up with an interesting question: What if the customer decides to use non-HP equipment? Either because he wants to switch the vendor, or because the price of the HP OEM VMware software is better than that of original VMware software. If you buy an HP Reseller Option Kit (ROK), you are not entitled to use this software with non-HP servers, unless Microsoft Software Assurance is added within 90 days (HP FAQ for Microsoft OEM licensing — Windows Server and SQL Server). We asked HP and we got an answer:

Customers who have purchased non HP platforms and VMware bundles or those planning to move HP’s OEM VMware software from an HP platform to a non HP platform can continue to receive the same level of support for their VMware products through HP services.

This means that you can use HP OEM VMware software on non-HP servers. The support is delivered by HP Services. This gives customers investment protection if they move on to another vendor. If the support from HP ends after 3, 4 or 5 years, the customer can extend the support or purchase support directly from VMware.

I’d like to see the face of the support engineer if you say “Hey HP, I have a problem with vSphere <bla bla> on my brand new Cisco UCS blade. Please help me!” ;)

Please note that this information might change over time. This information is offered “AS IS” with no warranties, and no rights are granted.