Tag Archives: software

PernixData Architect Software

This posting is ~4 years old. You should keep this in mind: IT is a fast-moving business, and this information might be outdated.

With the general availability of PernixData FVP 3.1, PernixData released the first version of PernixData Architect.

One of the biggest problems today is that management tools are often focused on the deployment and monitoring of applications or infrastructure. This doesn’t lead to a holistic view of applications and the related data center infrastructure. You have to monitor at several points within the application stack, and even then you won’t get a holistic view. Without proper information, you can’t make proper decisions. This is where PernixData Architect comes into play.

PernixData Architect is a software platform that supports the complete IT life cycle, from design and deployment through operation and optimization. It supports the decision-making process with data gathering and big data analytics. PernixData Architect continuously generates information and recommendations based on data gathered from VMs, storage devices, vCenter, the network etc. This information pool can be analysed with big data techniques: data is gathered, data is set into context (this is what information is), and information is linked and combined with recommendations. Here are some examples of what PernixData Architect can do for you (source):

  • Descriptive Analytics – Identify and profile the top 10 VMs on latency, throughput and IOPS.
  • Predictive Analytics – Calculate server-side resources needed to run a VM in Write Through versus Write Back mode, ensuring optimal hardware is allocated before a problem arises.
  • Prescriptive Analytics – Recommend ideal server-side resources based on application patterns.

PernixData Architect is a software-only solution and can be deployed with or without PernixData FVP. Without FVP, Architect can be used as a monitoring tool and gives you visibility, management and recommendations. Architect works with any server and storage platform that is compatible with VMware vSphere!

I’ve installed the latest PernixData FVP 3.1 release in my lab and enabled the 30-day trial period for PernixData Architect. You can access Architect through the web UI.

prnx_architect_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

As you can see, I have two clusters in my lab and both are accelerated using PernixData FVP. One cluster uses Distributed Fault Tolerant Memory (DFTM), the other cluster uses SSDs as acceleration resources. If Architect is enabled, FVP doesn’t display any stats and refers to the Architect UI. Below is a screenshot of the summary screen, which gives you a good overview at first glance.

prnx_architect_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Architect includes many more stats than FVP.

prnx_architect_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

On the “Intelligence” page, you get values for the working set of each ESXi host in the cluster. This is an important value for the right sizing of your acceleration resources.

prnx_architect_4

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

As mentioned, PernixData Architect uses the gathered data to give you recommendations in real time. Even in my lab cluster, there are things to improve. ;)

prnx_architect_5

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This is only a short overview of PernixData Architect, but you might see now what insight Architect can give you. If you are curious to see what PernixData FVP and Architect can do for you, you can simply install both products as part of a proof of concept and test them for 30 days. Even if you don’t want to install FVP, Architect can be used without FVP. And FVP can even be used without acceleration resources, in a monitoring mode.

Outlook license requirements for Exchange features

This posting is ~4 years old. You should keep this in mind: IT is a fast-moving business, and this information might be outdated.

Microsoft Exchange Server licensing is rather simple. You can choose between two Exchange licenses:

  • Standard (up to 5 mailbox databases)
  • Enterprise (up to 100 mailbox databases)

Standard and Enterprise only differ in the number of supported databases! Feel free to use an Exchange DAG with Exchange Standard and Windows Server Standard! To license your clients, you have to purchase a Client Access License (CAL) for each user or device that accesses your Exchange server environment. There are two types of CALs:

  • Standard
  • Enterprise (add-on for Standard CAL)

The Standard CAL is always necessary and enables most features of Exchange. The Enterprise CAL is an add-on license. If a user needs one of the Enterprise CAL features, you have to purchase a Standard AND an Enterprise CAL. The Enterprise CAL enables the following features:

  • In-Place Archive
  • Retention policies
  • Apply Information Rights Management (IRM)
  • Site mailboxes
  • DLP Policy Tips

Pretty simple, isn’t it? But have you thought about your Microsoft Outlook license? To use the Exchange Enterprise CAL features, you have to consider your Microsoft Outlook licensing! You have to use an Outlook version that is supported with your specific Exchange Server version, and you also have to consider whether you have retail or volume licenses. Microsoft Exchange Enterprise CAL features can be used with the following Microsoft Outlook licenses:

Outlook 2016

  • Outlook 2016 stand-alone (Retail or Volume License)
  • Outlook 2016 included with Microsoft Office Professional Plus 2016 (Volume License)

Outlook 2013

  • Outlook 2013 stand-alone (Retail or Volume License)
  • Outlook 2013 included with Microsoft Office Professional Plus 2013 (Volume License)

Outlook 2010

  • Outlook 2010 stand-alone (Retail or Volume License)
  • Outlook 2010 included with Microsoft Office Professional Plus Subscription (Retail)
  • Outlook 2010 included with Microsoft Office Professional Plus (Volume License)

Outlook 2007

  • Outlook 2007 stand-alone (Retail or Volume License)
  • Outlook 2007 included with Microsoft Office Ultimate 2007 (Retail)
  • Outlook 2007 included with Microsoft Office Professional Plus 2007 (Volume License)
  • Outlook 2007 included with Microsoft Office Enterprise 2007 (Volume License)

The correct Outlook client license is important! If you try to use Outlook 2013 included with Microsoft Office Professional (Retail) with In-Place Archive, for example, the archive will not show up in Outlook. If everything is licensed correctly, your Outlook with archiving enabled should look like this:

in-place_archive_outlook2016

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Please note that Outlook 2007 is not supported with Exchange 2016. Please also note that the Enterprise CAL features “Site mailboxes” and “DLP Policy Tips” can only be used with Outlook 2013 and later.

DataCore mirrored virtual disks full recovery fails repeatedly

This posting is ~4 years old. You should keep this in mind: IT is a fast-moving business, and this information might be outdated.

Last Sunday a customer suffered a power outage for a few hours. Unfortunately, the DataCore Storage Server in the affected datacenter wasn’t shut down and therefore crashed. After the power was back, the Storage Server was started and the recoveries for the mirrored virtual disks began. Hours later, three mirrored virtual disks were still running full recoveries, and the recovery for each of them failed repeatedly.

virtual_disk_error_ds10_mirror

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The recovery ran until a specific point, failed and started again. When the recovery failed, several events were logged on the Storage Server in the other datacenter (the Storage Server that wasn’t affected by the power outage):

Source: DcsPool, Event ID: 29

Source: disk, Event ID: 7

Source: Cissesrv, Event ID: 24606

DataCore support quickly confirmed what we already knew: we had trouble with the backend storage on the DataCore Storage Server that was serving the full recoveries for the recovering Storage Server. The full recoveries ran until the point at which a non-readable block was hit. Clearly a problem with the backend storage.

Summary

To summarize this very painful situation:

  • VMFS datastore with productive VMs on DataCore mirrored virtual disks with no redundancy
  • Trouble with the backend storage on the DataCore Storage Server that was serving the mirrored virtual disks with no redundancy

Next steps

The customer and I decided to evacuate the VMs from the three affected datastores (each mirrored virtual disk represents a VMFS datastore). To avoid more trouble, we decided to split the unhealthy mirrors, which left us with three single virtual disks. After shutting down the VMs on the affected datastores, we started one storage vMotion at a time to move the VMs to other datastores. This worked until the storage vMotion hit the non-readable blocks. The storage vMotions failed and the single virtual disks also went into the “Failed” status. After that, we mounted the single virtual disks from the other DataCore Storage Server (the one that was affected by the power outage and which was running the full recoveries). We expected that the VMFS on the single virtual disks would be broken, but to our surprise we were able to mount the datastores. We moved the VMs from these datastores to other datastores. This process was flawless. Just to make this clear: we were able to mount the VMFS on virtual disks that were in the status “Full Recovery pending”. I was quite sure that there was garbage on the disks, especially if you consider that there was a full recovery running that never finished.

The only way to remove the logical block errors is to rebuild the logical drive on the RAID controller. This means:

  • Pray for good luck
  • Break all mirrored virtual disks
  • Remove the resulting single virtual disks
  • Remove the disks from the DataCore disk pool
  • Remove the DataCore disk pool
  • Remove the logical drives on the RAID controller
  • Remove the arrays on the RAID controller
  • Replace the faulty physical disks
  • Rebuild the arrays
  • Rebuild the logical drives
  • Create a new DataCore disk pool
  • Add disks to the DataCore disk pool
  • Add mirrors to the single virtual disks
  • Wait until the full recoveries have finished
  • Treat yourself to a beer

Final words

This was very, very painful and, unfortunately, not the first time I had to do this for this customer. The customer is in close contact with the vendor of the backend storage to identify the root cause.

Windows guest customization fails after cloning a VM

This posting is ~4 years old. You should keep this in mind: IT is a fast-moving business, and this information might be outdated.

Last week I got a call from a customer. The customer had tried to deploy new Citrix XenApp servers, and because the VMware template was a bit outdated, he tried to clone a provisioned and running Citrix XenApp VM. During this, the customer applied a guest customization specification to customize the guest OS (IP address, hostname etc.). Until this point everything was fine. But after the clone process, the guest customization started but never finished.

vm_deployment_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Using the VMware template, deployment and customization were successful. So the main problem was that the customer was unable to use a provisioned and running Windows guest to deploy new Windows guests. I checked the logs and found these error messages in setupact.log (you can find this log under C:\windows\system32\sysprep\panther):

I checked the rearm count with slmgr.vbs /dlv and saw that the remaining count was 1.
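
For reference, the check looks like this; the relevant line in the output is "Remaining Windows rearm count":

    cscript //nologo C:\Windows\System32\slmgr.vbs /dlv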

Cloning and customizing a Windows VM with a rearm count of 1 leads to the observed behaviour. After the cloning and the start of the customization, the rearm count is 0. Microsoft describes this behaviour in KB929828.

CAUSE

This error may occur if the Windows Software Licensing Rearm program has run more than three times in a single Windows image.

RESOLUTION
To resolve this issue, you must rebuild the Windows image.

vExpert Maish Saidel-Keesing wrote about this in his blog in 2011. He explained it very well; make sure that you read his three blog posts!

In my case, rebuilding the template wasn’t an option. Therefore I had to reset the rearm count. I searched a while and found a solution that worked for me. I’m quite sure that Microsoft doesn’t allow this, therefore I will not describe the procedure in detail. You will easily find it on the web…

The main task is to remove the WPA registry key. This key is protected during normal operation, so you have to do this using WinRE (Windows Recovery Environment) or WinPE (Windows Preinstallation Environment). After the removal of the WPA registry key, reboot the VM, add a new key using slmgr.vbs /ipk and activate the Windows installation. You can check the rearm counter using slmgr.vbs /dlv and you will notice that the rearm counter has been reset.
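
The post-reboot steps, sketched as commands (the product key shown is just a placeholder, use your own):

    # install a new product key (placeholder), then activate and verify the rearm counter
    cscript //nologo C:\Windows\System32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
    cscript //nologo C:\Windows\System32\slmgr.vbs /ato
    cscript //nologo C:\Windows\System32\slmgr.vbs /dlv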

Always keep in mind that you can’t use sysprep with a Windows installation an infinite number of times.

HP Service Pack for ProLiant 2015.04

This posting is ~4 years old. You should keep this in mind: IT is a fast-moving business, and this information might be outdated.

Some weeks ago, HP published an updated version of their HP Service Pack for ProLiant (SPP). SPP 2015.04.0 adds

  • support for new HP ProLiant servers and options,
  • support for Red Hat Enterprise Linux 6.6, SUSE Linux Enterprise Server 12, VMware vSphere 5.5 U2 and (of course) VMware vSphere 6.0,
  • HP Smart Update Manager v7.2.0,
  • the HP USB Key Utility for Windows v2.0.0.0, which can now handle downloads greater than 4 GB (important, because this release may not fit on standard DVD media…), and
  • select Linux firmware components in rpm format.

In addition, the SPP covers two important customer advisories:

  • ProLiant Gen9 Servers – SYSTEM ROM UPDATE REQUIRED to Prevent Memory Subsystem Anomalies on Servers With DDR4 Memory Installed Due to Intel Processor BIOS Upgrades (c04542689)
  • HP Virtual Connect (VC) – Some VC Flex-10/10D Modules for c-Class BladeSystem May Shut Down When Running VC Firmware Version 4.20 or 4.30 Due to an Erroneous High Temperature Reading (c04459474)

Two CAs fixed, but another CA has arisen (and it’s an ugly one…):

  • HP OneView 1.20 – Upgrading Virtual Connect Version 4.40 with Service Pack for ProLiant (SPP) 2015.04.0 Will Result in a Configuration Error and an Inability to Manage the HP Virtual Connect 8Gb 24-Port Fibre Channel Module (c04638459)

If you are using HP OneView >1.10 and 1.20.04, you will be unable to manage HP Virtual Connect 8Gb 24-port Fibre Channel Modules after updating the module to firmware version 3.00 or later. This is also the case if you use the smart components from Virtual Connect firmware version 4.40! After the update, the VC module will enter a “Configuration Error” state. Currently there is no fix. The only workaround is not to update the HP Virtual Connect 8Gb 24-port Fibre Channel Module to firmware version 3.00. This will be fixed in a future HP OneView 1.20 patch.

Important to know: with this release, the SPP may not fit on standard DVD media! But to be honest: I’ve never burned the SPP to DVD; I’ve always used USB media.

Check the release notes for more information about this SPP release. You can download the latest SPP version from the HP website. You need an active warranty or HP support agreement to download the SPP.

Safe (or safer) than backup to tape: HP StoreOnce

This posting is ~4 years old. You should keep this in mind: IT is a fast-moving business, and this information might be outdated.

When talking to SMB customers, most of them don’t want to talk about their backup strategy. It’s a paradox: they know that data loss can ruin their business, but they don’t want to invest money into a fully tested recovery concept (I try to avoid the term “backup concept” – recovery is the key). Because of tight budgets and lacking knowledge, many customers use traditional concepts in a virtualized world. This often ends in traditional backup applications with agents deployed into the guest OS, and backups that are written to tape (or worse: to USB disks). If you ask a customer “Why do you store your data on tape?”, only a few argue with cost per GB or performance. Most customers argue with something like:

  • “We’ve been doing this for years, so why should we change it?”
  • “We have to store our tapes offsite”
  • “There is a corporate policy that forces us to store our backups on tape”

In most cases, the attempt to sell a backup-to-disk appliance (like an HP StoreOnce backup system) dies with the latter arguments. Customers tend not to trust designs in which they don’t have a backup on tape. Some customers have a strong desire to have a tape which is labelled “MONDAY” or “FRIDAY FULL”. To be honest: usually I see this behaviour only at SMB customers. Backup-to-disk appliances are often described as

  • expensive,
  • complex, and
  • vulnerable

None of them applies to an HP StoreOnce backup system. Not even expensive, if you don’t focus solely on CAPEX.

HP StoreOnce

Please allow me to write some sentences about HP StoreOnce.

An HP StoreOnce backup system is available as a physical or virtual appliance. HP offers a broad range of physical appliances that can store between 5.5 TB and 1,728 TB BEFORE deduplication. The virtual StoreOnce VSA is available with a capacity of 4 TB, 10 TB and 50 TB before deduplication. And don’t forget the free 1 TB StoreOnce VSA! All HP StoreOnce backup systems, regardless of whether physical appliance or VSA, share the same StoreOnce deduplication technology, as well as the same replication and security features. In fact, the StoreOnce VSA runs the same (Linux-based) software as the physical appliances and vice versa. You can add features by adding software options:

  • HP StoreOnce Catalyst
  • HP StoreOnce Replication
  • HP StoreOnce Security Pack
  • HP StoreOnce Enterprise Manager

HP StoreOnce Catalyst allows the seamless movement of deduplicated data across StoreOnce-capable devices. This means that an HP Data Protector media agent can deduplicate data during a backup, write the data to an HP StoreOnce backup system, and the data can then be replicated to another HP StoreOnce backup system, all without the need to rehydrate the data on the source and deduplicate it again on the destination. The StoreOnce VSA includes an HP StoreOnce Catalyst license!

HP StoreOnce Replication enables an appliance or a VSA to act as a target in a replication relationship. Only the target needs to be licensed. Fan-in describes the number of possible source appliances.

Model            Fan-in
StoreOnce VSA         8
StoreOnce 2700        8
StoreOnce 2900       24
StoreOnce 4500       24
StoreOnce 4700       50
StoreOnce 4900       50
StoreOnce 6200      384

As you can see, even the StoreOnce VSA can be used as a target for up to 8 source appliances. Replication is a licensable feature, except for the StoreOnce VSA: the StoreOnce VSA already includes the replication license!

HP StoreOnce Enterprise Manager can be obtained for free and allows you to monitor up to 400 physical appliances or StoreOnce VSAs. It provides monitoring, reporting, trend analysis and forecasting. It integrates with the StoreOnce GUI for single-pane-of-glass management of physical appliances and VSAs.

HP StoreOnce Security Pack enables data-at-rest and data-in-flight encryption (using IPsec and only for StoreOnce Catalyst), as well as secure data deletion. The same applies here as for the HP StoreOnce Catalyst and Replication licenses: the StoreOnce VSA already includes this license.

HP StoreOnce Deduplication

Deduplication is nothing really new. In simple terms, it’s a technique to reduce the amount of stored data by removing redundancies. Data that is detected as redundant isn’t stored again on the disks; only a pointer to the already stored data is set. This runs the risk of potential data loss. What if the original block gets corrupted? Grist to the mill of the tape lovers (tapes never fail… for sure…).

Integrity Plus

Don’t worry, I won’t bore you with stuff about a dead (or nearly dead) CPU architecture. Integrity Plus is HP’s approach to an end-to-end verification process. Let’s take a look at how data comes into a StoreOnce backup system. From a client perspective, you can choose between a Virtual Tape Library (VTL), NAS emulation (CIFS or NFS) and StoreOnce Catalyst.

When data is written to a VTL, a CRC is computed for each block and stored together with the data block on disk. During a restore, a CRC is computed for every block that is read from disk and compared to the initially stored CRC. If it differs, a SCSI check condition is reported. Because NAS emulation and StoreOnce Catalyst don’t use the SCSI protocol, no CRC is computed and stored to disk; the integrity of the written data is guaranteed in other ways.

At the beginning of the deduplication process, the incoming data is divided into chunks. HP uses a variable length for each data chunk, but on average a data chunk is 4 KB. A smaller chunk size leads to better deduplication results. A SHA-1 (AFAIK 160 bit) hash is computed for each data chunk. This chunk hash is used to identify duplicate data by comparing it to other chunk hashes. At this point, a sparse index is used to find possible candidates for redundant data chunks: instead of holding all chunk hashes in memory, only a few hashes are stored in RAM. The remaining chunk hashes are stored as metadata on disk. The container index contains a list of chunk hashes and a pointer to the data container where the data chunk is stored. Before data chunks are stored on disk, multiple chunks are compressed (using LZO) and a SHA-1 checksum is computed for the compressed chunks. This checksum is stored on disk. When the compressed data is decompressed, a new checksum is computed and compared to the stored SHA-1 checksum. Metadata and container index files are protected with MD5 checksums. In addition, a transaction log file is maintained for the whole process and the sparse index is frequently flushed to disk.

When data comes into the StoreOnce backup system, a match with a chunk hash in memory can lead the system (using the sparse index, metadata and container index files) to containers with associated data chunks (e.g. data chunks that represent a backed-up VM). And if a data chunk of the incoming data is a duplicate, it is very likely that many of the following data chunks are also duplicates.
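
To make the principle more tangible, here is a very simplified PowerShell sketch of hash-based deduplication. It uses fixed 4 KB chunks and keeps all hashes in a plain hashtable, whereas StoreOnce uses variable-length chunking, a sparse index and on-disk container indexes; the file path is just an example.

    # simplified illustration of hash-based deduplication (not how StoreOnce is implemented)
    $sha1      = [System.Security.Cryptography.SHA1]::Create()
    $chunkSize = 4KB          # StoreOnce uses variable-length chunks, ~4 KB on average
    $index     = @{}          # chunk hash -> offset of the first occurrence
    $unique    = 0
    $duplicate = 0

    $bytes = [System.IO.File]::ReadAllBytes('C:\Temp\backupstream.bin')
    for ($offset = 0; $offset -lt $bytes.Length; $offset += $chunkSize) {
        $length = [Math]::Min($chunkSize, $bytes.Length - $offset)
        $hash   = [System.BitConverter]::ToString($sha1.ComputeHash($bytes, $offset, $length))
        if ($index.ContainsKey($hash)) {
            $duplicate++                  # duplicate chunk: only a pointer would be stored
        } else {
            $index[$hash] = $offset       # new chunk: the (compressed) data would be written to a container
            $unique++
        }
    }
    "{0} unique chunks, {1} duplicate chunks" -f $unique, $duplicate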

All physical appliances use RAID 6 to protect data in case of disk failures. Only the HP StoreOnce 2700 uses RAID 5, because the appliance can only hold 4 SAS-NL disks. When using the StoreOnce VSA, you can use any RAID level for the underlying storage. But you should use something above RAID 0…

Conclusion

Let’s summarize:

  • RAID
  • Supercapacitors on RAID controllers to protect write cache in case of power loss
  • ECC memory
  • Integrity Plus to protect the data within the StoreOnce backup system
  • StoreOnce Replication to replicate data to other HP StoreOnce backup systems
  • data-at-rest and data-in-flight encryption, and secure deletion, with the StoreOnce Security Pack

Sounds very safe to me. Tape isn’t dead. Tape has its right to exist. But backup to tape isn’t safer than backup to a StoreOnce backup system. The latter can offer you faster backups AND restores, and new backup and recovery options (e.g. backups in RoBo offices that are replicated to the central datacenter). Think about the requirements for storing tapes (temperature, humidity, physical access), regular recovery tests, copying tapes to newer tapes etc. Consider not only CAPEX. Also remember OPEX.

An HP StoreOnce backup system is perfect for SMBs. It simplifies backup and recovery and it can offer new opportunities. Test-drive it using the free 1 TB StoreOnce VSA! Remember: the StoreOnce VSA includes StoreOnce Replication, Catalyst and the Security Pack. Even the free 1 TB StoreOnce VSA.

Publishing Outlook Web Access with Microsoft Web Application Proxy (WAP)

This posting is ~5 years old. You should keep this in mind: IT is a fast-moving business, and this information might be outdated.

Microsoft introduced the Web Application Proxy (WAP) with Windows Server 2012 R2 and positioned it as a replacement for Microsoft Unified Access Gateway (UAG), Threat Management Gateway (TMG) and IIS Application Request Routing (ARR). WAP is tightly bound to the Active Directory Federation Services (AD FS) role. WAP can be used to

  • pre-authenticate access to published web applications, and
  • function as an AD FS proxy

The AD FS proxy role was removed in Windows Server 2012 R2 and is replaced by the WAP role. Because WAP stores its configuration in AD FS, you must deploy AD FS in your organization. The server that hosts the WAP role has no local configuration. This allows you to deploy additional WAP servers to create a cluster deployment; the additional servers get their configuration from AD FS.

The deployment of WAP can be split into two parts:

  • deployment of the AD FS role
  • deployment of the WAP role

The AD FS deployment

You can deploy the AD FS role on a domain controller or on a separate AD member server. AD FS acts as an identity provider. This means that it authenticates users and provides security tokens to applications that trust the AD FS instance. On the other hand, it can act as a federation provider. This means that it can consume tokens from other identity providers and provide security tokens to applications that trust AD FS.

The first step is to install the AD FS role onto an AD member server or domain controller. I used the DC in my lab; depending on your needs, this can be different. I used PowerShell to install the AD FS role.
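
The cmdlet for this step is most likely the following (a minimal sketch, run in an elevated PowerShell on the designated AD FS server):

    # install the AD FS role including the management tools
    Install-WindowsFeature ADFS-Federation -IncludeManagementTools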

A reboot is not necessary. The next step is to configure the AD FS role. This process is supported by a configuration wizard. Before you can start, it’s necessary to deploy the group Managed Service Account (gMSA). Open a PowerShell console and execute the following:
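
At minimum, the gMSA requires a KDS root key in the domain. A sketch that works for a lab (in production, omit the backdated EffectiveTime and wait for AD replication):

    # create the KDS root key that group Managed Service Accounts depend on
    # (backdating by 10 hours makes it usable immediately - lab use only)
    Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))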

Then you can start the configuration wizard. If this is the first AD FS server, select the first option “Create the first federation server in a federation server farm”.

adfs_setup_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To perform the configuration, you need an account with domain administrator permissions. In my case, I simply used the Administrator account.

adfs_setup_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You need to enroll an SSL certificate that is used for AD FS. This SSL certificate must include the DNS name of the AD FS server and also the Subject Alternative Names enterpriseregistration and enterpriseregistration.yourdomainname.tld. This screenshot includes the values that I used in my lab deployment; I entered these values into the “Attributes” box:

adfs_setup_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0
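
In plain text, such an attribute string looks roughly like this (the SAN attribute syntax of the AD CS web enrollment pages; the hostnames are taken from my lab setup):

    san:dns=adfs.terlisten-consulting.de&dns=enterpriseregistration&dns=enterpriseregistration.terlisten-consulting.de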

Create the certificate and export it with the private key as a PFX file. You must import the certificate into the “Personal” store of the local computer that acts as the AD FS server. You also need two DNS entries for the names that are included in the certificate.
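
A quick way to import the exported PFX file on the AD FS server is PowerShell (the file path and the password prompt are just examples):

    # import the certificate with its private key into the local computer's Personal store
    Import-PfxCertificate -FilePath 'C:\Temp\adfs.pfx' -CertStoreLocation Cert:\LocalMachine\My `
        -Password (Read-Host -Prompt 'PFX password' -AsSecureString)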

adfs_setup_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If the certificate import was successful, you can select the certificate in the wizard. Add the Federation Service Name and the Display Name.

adfs_setup_05

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The Service Account can be an existing domain user account or a Managed Service Account. I used my Administrator account for simplicity.

adfs_setup_06

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If you deploy a single server, you can use the Windows Internal Database. If you plan to deploy multiple AD FS servers, you have to use a SQL Server database.

adfs_setup_07

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Review the options and continue with the pre-requisite checks. If everything went well, you can proceed with the installation. Finish the setup and close the wizard.

adfs_setup_08

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Open a browser and enter the AD FS URL into the address bar. In my case this URL looks like this:

https://adfs.terlisten-consulting.de/adfs/ls/idpinitiatedsignon.htm

adfs_setup_09

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If you get a screen like this, everything’s fine and AD FS works as expected. Check the Windows Server 2012 R2 AD FS Deployment Guide for more information. Now it’s time to deploy the Web Application Proxy.

The WAP deployment

To install the WAP role, simply open a PowerShell console and run the Install-WindowsFeature cmdlet.
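
For the WAP role, the call looks like this (a minimal sketch, assuming you run it on the designated proxy server):

    # install the Web Application Proxy role including the management tools
    Install-WindowsFeature Web-Application-Proxy -IncludeManagementTools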

Then you can run the WAP configuration wizard. This wizard guides you through the configuration of the WAP role.

wap_setup_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

First you have to connect to the AD FS server. Enter the Federation service name you used to deploy the AD FS instance, and provide the necessary user credentials.

wap_setup_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

At this point you have to select the certificate that is used by the AD FS proxy. You can use the same certificate you used for the AD FS server, but you can also create a new certificate. The certificate must be imported into the “Personal” store of the WAP server.

wap_setup_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Confirm the settings and click “Configure”. At this point, the wizard executes the shown PowerShell command.
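
The wizard-generated command typically corresponds to the following sketch (the thumbprint is a placeholder for the certificate you selected, and the credential must have admin permissions on the AD FS server):

    Install-WebApplicationProxy -FederationServiceName 'adfs.terlisten-consulting.de' `
        -CertificateThumbprint '<thumbprint-of-the-selected-certificate>' `
        -FederationServiceTrustCredential (Get-Credential)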

wap_setup_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Close the wizard and open the management console of the Web Application Proxy to check the operational status. At this point, the WAP only acts as an AD FS proxy.

wap_setup_06

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To test the functionality, I decided to publish Outlook Web Access (OWA). Use the “Publish New Application Wizard” to publish a new application.

wap_setup_07

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To publish OWA, select “Pass-through” as pre-authentication method.

wap_setup_08

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now it’s getting interesting. When you enter the external URL, the backend server URL is filled in automatically. External and backend URL have to be the same. Because of this, you need split DNS (see “Configure the Web Application Proxy Infrastructure” and “AD FS Requirements” in the Microsoft TechNet Library). You also need a valid external certificate that matches the FQDN used in the external URL.

wap_setup_09

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Check the settings and click “Publish”. The wizard executes the shown PowerShell command.
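
For pass-through publishing, the command the wizard runs corresponds roughly to this sketch (the hostname is from my lab; the /owa/ path and the thumbprint are placeholders):

    Add-WebApplicationProxyApplication -Name 'Outlook Web Access' `
        -ExternalPreauthentication PassThrough `
        -ExternalUrl 'https://cas.terlisten-consulting.de/owa/' `
        -BackendServerUrl 'https://cas.terlisten-consulting.de/owa/' `
        -ExternalCertificateThumbprint '<thumbprint-of-the-external-certificate>'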

wap_setup_10

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Close the wizard and check the functionality of the published application. This screenshot shows the access to OWA from one of my management VMs (MGMTWKS1):

wap_test_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This drawing shows my lab setup. I’ve used two subnets (192.168.200.64/27 and 192.168.200.96/27) to simulate internal and external access, as well as split DNS.

wap-adfs

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The host dc.vcloudlab.local (192.168.200.97) has the AD FS role installed and resolves cas.terlisten-consulting.de to 192.168.200.103 (HAProxy). MGMTWKS1 resolves the same FQDN to 192.168.200.109 (WAP1 – my WAP server).

Final words

This is only a very, very basic setup and I deployed it in my lab. The installation was not very difficult and I was quickly able to set up a working environment. Before you start to deploy AD FS/WAP, I recommend taking a look at the TechNet Library:

Shady upgrade path for NetApp ONTAP 7-Mode to cDOT

This posting is ~5 years old. You should keep this in mind: IT is a fast-moving business, and this information might be outdated.

NetApp has offered Data ONTAP for some time in two flavours:

  • 7-Mode
  • Clustered Data ONTAP (cDOT)

With cDOT, NetApp has rewritten ONTAP nearly from scratch. The aim was to create a storage OS that leverages a scale-out architecture and storage virtualization techniques, as well as providing non-disruptive operations. NetApp needed some release cycles to get cDOT to the point where it provides all the features that customers know from 7-Mode. With Data ONTAP 8.3, NetApp has reached this point. Even MetroCluster is now supported. That’s a huge improvement and I’m glad that NetApp has made it. But NetApp wasted no time in cutting off old habits: with ONTAP 8.3, 7-Mode is no longer offered. Okay, no big deal. Customers can migrate from 7-Mode to cDOT. Yes, indeed. But it’s not as easy as you might think.

First of all: you can’t update to cDOT in-place. You have to wipe the nodes and re-install Data ONTAP. That makes it nearly impossible to migrate a running Filer without downtime and/or buying or loaning additional hardware. Most customers migrate to cDOT at the same time as they refresh the hardware. The data can be migrated in different ways. NetApp offers the 7-Mode Transition Tool (7MTT). 7MTT leverages SnapMirror to get the data from the 7-Mode to the cDOT Filer. But you can also use plain SnapMirror without 7MTT to migrate the data. The switchover from the old to the new volume is an offline process: the accessing servers have to be disconnected, and they must be connected to the new cDOT Filer and volume. 7MTT can only migrate NAS data! If you wish to migrate SAN data (LUNs), you have to use NetApp’s DTA2800 appliance or something like VMware Storage vMotion. Other migration techniques, like Storage vMotion, robocopy etc., can also be used.

I know that cDOT is nearly completely rewritten, but such migration paths are a PITA, especially if customers have just bought new equipment with ONTAP 8.1 or 8.2 and now wish to migrate to 8.3.

Another pain point is NetApp’s MetroCluster. With NetApp MetroCluster, customers can deploy active/active clusters between two sites up to 200 km apart. NetApp MetroCluster leverages SyncMirror to duplicate RAID groups to different disks. NetApp MetroCluster is certified for vSphere Metro Storage Cluster (vMSC). One can say that MetroCluster is a bestseller. I know many customers that use MetroCluster with only two nodes. That’s where a 2-node HA pair is cut in the middle and spread across two locations. Let’s assume that a customer is running a stretched MetroCluster with two nodes and Data ONTAP 8.2. The customer wants to migrate to ONTAP 8.3. This means that he has to migrate to cDOT. No problem, because with ONTAP 8.3, cDOT offers support for NetApp MetroCluster. But consider these two points:

  1. You can’t update to cDOT in-place. So either wipe the nodes or get (temporary) additional hardware.
  2. NetApp MetroCluster with cDOT requires a 2-node cluster at each of the two sites (four nodes in sum)

Especially when you look at the second point, you will quickly realize that all customers that are running a 2-node MetroCluster have to purchase additional nodes and disks. Otherwise they can’t use MetroCluster with cDOT. This allows only one migration path: use ONTAP 8.2 with 7-Mode and wait until the hardware needs to be refreshed.

This is really bad… This is a shady upgrade path.

EDIT

NetApp is working hard to make the migration path better.

  • The newest version of 7MTT is capable of migrating LUNs from 7DOT to cDOT
  • At NetApp Insight 2014, a 2-node cDOT MetroCluster was announced, which will be released soon

Thank you Sascha for this update.

Importance of client-side proxy settings in Exchange 2013 environments

This posting is ~5 years old. You should keep this in mind: IT is a fast-moving business, and this information might be outdated.

There is an advantage to solving problems: you can learn something. I’m currently migrating a small Exchange 2007 environment to Exchange 2013. The first thing I learnt was that IT staff still use their own accounts for administration, and sometimes they assign administrator rights to users for testing and troubleshooting purposes. This can be a problem, as I described in my last posting. Today I learnt something different: sometimes it’s the little things that bring you to despair.

After moving a mailbox from Exchange 2007 to 2013, Outlook must change the server used for client access. Nothing fancy, and the user normally doesn’t notice it. If Outlook is online during the mailbox migration, the user gets a message that he has to restart Outlook. Please note that you need at least Outlook 2007 SP3 when you wish to migrate to Exchange 2013. This is because of an important change with Exchange 2013: the abolition of direct MAPI connections. Exchange 2013 only supports RPC-over-HTTP (aka Outlook Anywhere), even for LAN connections. RPC-over-HTTP has several advantages, e.g. the CAS role has to deal with only one protocol, and load balancing and high availability become easier.

The problem

After moving a mailbox from Exchange 2007 to 2013, the Outlook 2010 client wasn’t able to connect to the Exchange server. The server was correctly changed, as I was able to see in the Outlook profile, but every time I tried to start Outlook 2010, I got this error:

outlook_error

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0


Moving the mailbox back to Exchange 2007 solved the problem. Moving the mailbox again to Exchange 2013 resulted in the same error. Outlook Web Access was working fine (internally and externally) on Exchange 2007 and 2013. The Microsoft Office Outlook Connectivity Test completed successfully after moving the mailbox to Exchange 2013. Even Exchange ActiveSync (EAS) worked on the Exchange 2013 server. Only Outlook 2010 wasn’t able to connect to the mailbox when it was on Exchange 2013. The customer and I were puzzled… While testing this and that, we got an error while accessing Outlook Web Access. On another client, with another user, OWA worked fine. BÄÄÄM! A possible cause popped into my mind.

The solution

I immediately checked the proxy settings and there it was: a big proxy bypass list in Internet Explorer with several entries, but the new Exchange server was missing. I added the server to the proxy bypass list and Outlook started without any problems. To be honest, it was a bit more complex, because the proxy in use wasn’t an internal system: a solution provider operates it, and the proxy settings were managed by a GPO that wasn’t working correctly. In addition, an AD group membership was used to allow users to pass a web filter. But at the core it was the missing entry for the new Exchange server that caused the problem.
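
If you want to check the bypass list quickly on an affected client, the per-user Internet Explorer proxy settings live in the registry; the Exchange hostname below is just an example:

    # read the current proxy bypass list (semicolon-separated)
    $key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings'
    (Get-ItemProperty -Path $key).ProxyOverride

    # append the new Exchange 2013 server (example hostname) to the bypass list
    $override = (Get-ItemProperty -Path $key).ProxyOverride
    Set-ItemProperty -Path $key -Name ProxyOverride -Value ($override + ';exchange2013.yourdomain.tld')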

The explanation

Exchange 2013 only supports RPC-over-HTTP and it uses the system-wide proxy settings. Therefore, HTTP(S) traffic is sent to the proxy server (regardless of whether the destination is internal or external), unless there is an entry in the proxy bypass list for the destination (in this case the Exchange server). If the proxy can’t handle the traffic, Outlook will not be able to connect to the Exchange server. With MAPI, the proxy isn’t a problem, because MAPI traffic isn’t sent to the proxy. This explains why Outlook was able to connect to the Exchange server when the mailbox was moved back to Exchange 2007: with Exchange 2007, Outlook uses MAPI for the connection; with Exchange 2013, RPC-over-HTTP is used.

So if you experience connection problems after moving a mailbox to Exchange 2013, check your proxy settings. This also applies when using Outlook Anywhere, because Outlook Anywhere also uses RPC-over-HTTP.

TeamViewer Connection with Royal TS

This posting is ~5 years old. You should keep this in mind: IT is a fast-moving business, and this information might be outdated.

Some of my customers use TeamViewer to provide quick access to their systems, without the need to configure VPN connections, install software on hosts etc. TeamViewer provides fast and secure access without the need to install software: simply start teamviewer.exe and choose whether you want to connect to a host, or use the session ID and password to allow someone else to access your computer. TeamViewer is free for all non-commercial users, so it’s a great choice for remote support of all your family members.

I use Royal TS as my primary tool for remote connection management. Most of my connections are Microsoft Remote Desktop, VNC or SSH, but I also need TeamViewer. In contrast to Remote Desktop Manager, Royal TS doesn’t have a plug-in for TeamViewer. But I found a hint in the Royal TS knowledge base on how to solve this: a command task.

Create a command task for TeamViewer

Open Royal TS and right-click “Tasks” in your connection document. Select “Add” > “Command Task”.

command_task_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Enter a descriptive name. You have to create a Command Task for each TeamViewer connection you want to save.

command_task_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Enter the path and the working directory of your TeamViewer executable. The arguments are the key: session ID and password are not entered directly, but provided through custom fields.
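
The argument string might look like the following sketch; the Royal TS replacement tokens $CustomField1$ and $CustomField2$ are filled from the connection’s custom fields, and the exact TeamViewer command-line switches should be double-checked against your TeamViewer version:

    -i $CustomField1$ --Password $CustomField2$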

command_task_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Select “Custom Fields” and enter the session ID in the first field and the password in the second field. If you want to protect both fields, you can use the protected fields. Then you have to change the custom fields in the arguments.

command_task_4

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Save the connection and start the task. After a few seconds you should see the desktop of the remote host.

command_task_5

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Because the session ID and password may change over time, you have to update them in the command task. This is a bit unhandy, but if you don’t frequently connect to random hosts, this should work fine.