
Once in a year: How to update TLS certificates on ADFS server and proxies

You might have heard the news a few days ago: Starting September 1, 2020, browsers and devices from Apple, Google, and Mozilla will show errors for new TLS certificates that have a lifespan greater than 398 days. Due to this move by Apple, Google, and Mozilla, you will have to deal with the replacement of certificates much more often. And we all know: Replacing certificates can be a real PITA!

Image by skylarvision on Pixabay

Replacing TLS certificates used for ADFS and Office 365 can be a challenging task, and this blog post will cover the necessary steps.

ADFS Server

The first service for which we will replace the certificate is the ADFS server, or rather the ADFS server farm. At this point it is important to understand that we are dealing with two different places to which the certificate is bound:

  • the ADFS service communications certificate, and
  • the ADFS SSL certificate

The first step is to replace the service communications certificate. After importing the certificate with its private key, you need to assign “read” permission to the ADFS service account: right-click the certificate, then “All Tasks” > “Manage Private Keys”.
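If you prefer PowerShell over the certificates MMC for the import, a minimal sketch might look like this (file path, password handling, and store location are assumptions, not taken from the original setup):

    # Import the PFX with its private key into the local machine store (run on every farm server)
    $pfxPassword = Read-Host -Prompt 'PFX password' -AsSecureString
    Import-PfxCertificate -FilePath C:\Temp\adfs-customer-tld.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword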

Make sure to import the certificate on all farm servers! Next step: Start the ADFS management console on the primary node, select “Certificates”, and then “Set Service Communications Certificate” in the right window pane.
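The same can be done with PowerShell on the primary node; a sketch, where the thumbprint value is only a placeholder:

    # Bind the new service communications certificate to ADFS
    Set-AdfsCertificate -CertificateType Service-Communications -Thumbprint '0123456789ABCDEF0123456789ABCDEF01234567'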

Now we have successfully replaced the service communications certificate. But we are not finished yet! Now we have to set the ADFS SSL certificate. Depending on your OS, it is sufficient to run the PowerShell command on the primary node. If you are running Windows Server 2012 R2 or older, you have to run the PowerShell command on EVERY ADFS farm server!

You can get the certificate thumbprint using the Get-AdfsSslCertificate command. Set the ADFS SSL certificate with
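A minimal sketch (the thumbprint value is a placeholder for the thumbprint of your new certificate):

    # Show the current SSL bindings, then bind the new certificate
    Get-AdfsSslCertificate
    Set-AdfsSslCertificate -Thumbprint '0123456789ABCDEF0123456789ABCDEF01234567'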

Then restart the ADFS service.
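On current Windows Server releases the service is named adfssrv, so the restart might look like this:

    Restart-Service -Name adfssrv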

ADFS Proxies

In most cases you will have one or more ADFS proxies in your DMZ. The ADFS proxy is nothing more than a Web Application Proxy (WAP) and therefore the PowerShell commands for WAP will be used.

First of all: Import the new certificate with the private key on all ADFS proxies, and then get the certificate hash of the new certificate. Then open an elevated PowerShell on each proxy.
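A sketch of the proxy part, again with a placeholder thumbprint (the service name appproxysvc is an assumption based on a default WAP installation):

    # Bind the new certificate to the Web Application Proxy (run on every proxy)
    Set-WebApplicationProxySslCertificate -Thumbprint '0123456789ABCDEF0123456789ABCDEF01234567'
    # Restart the Web Application Proxy service
    Restart-Service -Name appproxysvc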

Then we have to re-establish the trust between the proxies and the primary ADFS farm server. You will need the local (!) administrator account of the primary farm server.
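Re-running the WAP configuration renews the trust; a sketch with placeholder values (adfs.customer.tld and the thumbprint are examples, not taken from the original setup):

    # Re-establish the trust between this proxy and the ADFS farm
    $cred = Get-Credential    # local administrator of the primary ADFS farm server
    Install-WebApplicationProxy -FederationServiceTrustCredential $cred -CertificateThumbprint '0123456789ABCDEF0123456789ABCDEF01234567' -FederationServiceName adfs.customer.tld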

The last step is to update the federated trust with Office 365.

Update the federated trust with Office 365

To update the federated trust with Office 365, you will need the Windows Azure Active Directory Module for Windows PowerShell and an elevated PowerShell. Connect to Office 365 and update the federated trust:
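A sketch using the MSOnline cmdlets (the domain name is a placeholder):

    # Connect to Office 365 and update the federation settings of the federated domain
    Connect-MsolService
    Update-MsolFederatedDomain -DomainName customer.tld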

That’s it! Bookmark this page and set a calendar entry on today +12 months. :)

Missing Microsoft Teams calendar tab with on-premise Exchange

Microsoft Teams got a big push due to the current COVID-19 crisis and many of my customers have deployed it in the past weeks. At ML Network, we have been using Microsoft Teams for more than a year, and we don’t want to miss it anymore.

Source: Microsoft

We are running Exchange 2016 on-premises, currently CU16. We had been missing the calendar tab in Teams since we started with Microsoft Teams. When you do some research about this issue, you will find many threads and blog posts, but these are the two key facts:

  • it is supported with on-premises hybrid Exchange deployments
  • it works flawlessly with Exchange Online

Our Exchange is configured as a full hybrid deployment. I set this up when we deployed Office 365 in our organization.

Let’s summarize:

  • Exchange 2016 CU16
  • Hybrid Deployment
  • Office 365 with Teams enabled
  • no calendar tab when the Exchange mailbox is hosted on-premises

OAuth FTW!

While doing an Exchange hybrid deployment for one of my customers some weeks ago, I stumbled over an OAuth error message at the end of the Hybrid Configuration Wizard. The message was HCW8064:

“HCW has completed, but was not able to perform the OAuth portion of your Hybrid configuration”

We were not able to fix this. Microsoft offers two solutions for this issue.

Yesterday I did the upgrade from CU15 to CU16 on our Exchange server, and while watching the progress bar I did some research on this issue again. I found strong evidence that Microsoft Teams needs working OAuth to display the calendar tab and to access the on-premises hosted mailbox. So I gave it a try and used the latest version of the HCW.

What should I say? No OAuth configuration error and after a restart of Microsoft Teams, the calendar tab appeared.

Lessons Learned:

  • always use the latest CU for Exchange
  • always use the latest version of the HCW

Load balancing ADFS and ADFS Proxy using Citrix ADC

Last week I had to setup a small Active Directory Federation Services (ADFS) farm that will be used to allow Single Sign-On (SSO) with Office 365.

Active Directory Federation Services (ADFS) is a solution developed by Microsoft to provide users with authenticated access to applications that are not capable of using Integrated Windows Authentication (IWA).

The customer required a two-node ADFS farm located on the internal network, and a two-node ADFS proxy farm located in the DMZ.

An ADFS proxy server acts as a reverse proxy and is typically located in your organization’s perimeter network (DMZ).

This picture shows a typical ADFS/ ADFS Proxy setup:

ADFS/ WAP Design/ Citrix/ citrix.com

My customer decided to use Citrix ADC (formerly NetScaler) to load balance the requests for the ADFS farm and the ADFS proxy farm. In addition to load balancing, this offers high availability in case of a failed ADFS server or ADFS proxy server. Please note that Citrix ADC can act as an ADFS proxy, but this requires the Advanced Edition license. My customer “only” had a Standard license, so we had to set up dedicated ADFS proxy servers on the DMZ network.

Citrix ADC setup

The ADFS service name is typically something like adfs.customer.tld. This farm name has to be the same for internal and external access. For internal access, the ADFS service name must resolve to the VIP of the Citrix ADC. The same applies to external access. So you have to set up split DNS.

ADFS uses HTTP and HTTPS, so my first attempt was to use this Citrix ADC content switch based setup:

This is a pretty common setup for HTTP/HTTPS based services. But it doesn’t work… mainly because the monitor was not getting the required response. So the monitored service was down from the ADC’s point of view, and therefore the service group, the load balancing virtual server, and the content switch wouldn’t come up.

The reason for this is Server Name Indication (SNI), an extension to Transport Layer Security (TLS). SNI is enabled and required since ADFS 3.0. The monitor tries to access the URL https://x.x.x.x/federationmetadata/2007-06/federationmetadata.xml, but the ADFS service won’t answer those requests, because they use the IP address and not the ADFS service name.

But there is a workaround for everything on the Internet! You can change the binding on the ADFS server nodes using netsh.

I will not add the necessary options to this command, because: DON’T DO THIS!

Yes, the service group, the load balancing virtual server, and the content switch will come up after this change. But you will not be able to establish a trust between your ADFS proxy servers and the ADFS farm.

Microsoft’s requirements for load balancing ADFS

Microsoft offers a nice overview of the requirements when deploying ADFS. There is a section about the network requirements. Below this, Microsoft clearly documents the requirements for load balancing ADFS servers and ADFS proxy servers.

The load balancer MUST NOT terminate SSL. AD FS supports multiple use cases with certificate authentication which will break when terminating SSL. Terminating SSL at the load balancer is not supported for any use case.

Requirements for deploying AD FS/ microsoft.com

Okay, with this in mind, you can’t use an ADC content switch as described above, because it will terminate SSL. You have to switch to a load balancing virtual server and a service group with SSL bridge. Citrix describes SSL bridge as follows:

A SSL bridge configured on the NetScaler appliance enables the appliance to bridge all secure traffic between the SSL client and the SSL server. The appliance does not offload or accelerate the bridged traffic, nor does it perform encryption or decryption. Only load balancing is done by the appliance. The SSL server must handle all SSL-related processing. Features such as content switching, SureConnect, and cache redirection do not work, because the traffic passing through the appliance is encrypted.

But there is a second, very interesting statement:

It is recommended to use the HTTP (not HTTPS) health probe endpoints to perform load balancer health checks for routing traffic. This avoids any issues relating to SNI. The response to these probe endpoints is an HTTP 200 OK and is served locally with no dependence on back-end services. The HTTP probe can be accessed over HTTP using the path ‘/adfs/probe’:

http://<Web Application Proxy name>/adfs/probe
http://<ADFS server name>/adfs/probe
http://<Web Application Proxy IP address>/adfs/probe
http://<ADFS IP address>/adfs/probe

Requirements for deploying AD FS/ microsoft.com

This is pretty interesting, because it addresses the monitor issue described above. The solution is an HTTP-ECV monitor on port 80 that sends a GET to “/adfs/probe” and checks for an HTTP 200 response.
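On the Citrix ADC, such a monitor might look like this (a sketch; the monitor name is my own choice):

    add lb monitor mon_adfs_probe HTTP-ECV -send "GET /adfs/probe" -recv "200" -destPort 80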

A working Citrix ADC setup

This setup is divided into two parts: One for the ADFS farm, and a second one for the ADFS Proxy farm. It uses SSL bridge and HTTP for the service monitor.

Load balancing the ADFS farm
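A sketch of the internal part (object names, server IPs, and the VIP are placeholders):

    add servicegroup sg_adfs_farm SSL_BRIDGE
    bind servicegroup sg_adfs_farm 10.0.0.11 443
    bind servicegroup sg_adfs_farm 10.0.0.12 443
    bind servicegroup sg_adfs_farm -monitorName mon_adfs_probe
    add lb vserver vs_adfs_farm SSL_BRIDGE 10.0.0.10 443
    bind lb vserver vs_adfs_farm sg_adfs_farm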

Load balancing the ADFS Proxy farm
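The proxy part mirrors the internal setup, only with the DMZ nodes and the externally resolvable VIP (again, all values are placeholders):

    add servicegroup sg_adfs_proxy SSL_BRIDGE
    bind servicegroup sg_adfs_proxy 192.168.10.11 443
    bind servicegroup sg_adfs_proxy 192.168.10.12 443
    bind servicegroup sg_adfs_proxy -monitorName mon_adfs_probe
    add lb vserver vs_adfs_proxy SSL_BRIDGE 192.168.10.10 443
    bind lb vserver vs_adfs_proxy sg_adfs_proxy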

I have implemented it on a NetScaler 12.1 with a Standard license. If you have feedback or questions, please leave a comment. :)

Supported Active Directory environments for Microsoft Exchange

It is time for some words of wisdom in regard to Exchange and the supported Active Directory environments. It is the same as with the supported .NET Framework releases: the latest release does not automatically mean “supported”.

To be honest: I nearly nuked a customer environment with ~300 users yesterday by preparing the domain for the first Windows Server 2019 Domain Controller.

First things first: Everything is fine! I did not prepare the forest schema for Windows Server 2019.

Support for Windows Server 2008 R2 is coming to an end and some customers are still running it, like my customer yesterday. Some application servers are still on 2008 R2… and the Domain Controllers. The customer is also running Exchange 2013 on Windows Server 2012 R2.

The customer has decided to go to Windows Server 2019 wherever possible. This includes file servers, application servers, and the Domain Controllers. One of the first steps was the deployment of Active Directory-Based Activation. The AD schema needs to be prepared for this, and I decided to prepare the schema for Windows Server 2019. I had already copied the adprep folder from the Server 2019 ISO and opened a PowerShell. And then I paused. Something felt odd. I wanted to take a look at the Exchange Server supportability matrix.

Exchange 2013 does NOT support Windows Server 2019 Domain Controllers! Uhh… that was unexpected.

Lessons learned

Always check the Exchange Server supportability matrix. Always! Regardless if it’s because of .NET Framework, Active Directory, Outlook Clients etc. Just check it every time you plan to change something in your environment.

Especially in regard to Microsoft Exchange “newer” does not automatically mean “supported”. Most times the opposite is true.

Microsoft Exchange 2013/ 2016/ 2019 shows blank ECP & OWA after changes to SSL certificates

EDIT
This issue is described in KB2971270 and is fixed in Exchange 2013 CU6.

I published this blog post in July 2015 and it is still relevant. The feedback for this blog post was incredible, and I’m not joking when I say: I saved many admins’ weekends. ;) It has shown that this error still occurs with Exchange 2016 and even 2019. Maybe not because of the same bug that was fixed with Exchange 2013 CU6, but for other reasons. And the solution below still applies. Because of this, I have decided to re-publish this blog post with a modified title and this little preamble.

Feel free to leave a comment if this blog post worked for you. :)

I ran into this error a couple of times. After applying changes to SSL certificates (adding, replacing, or deleting an SSL certificate) and rebooting the server, the event log is flooded with events from source “HttpEvent” and event ID 15021. The message says:

If you try to access the Exchange Control Panel (ECP) or Outlook Web Access (OWA), you will get a blank website. To solve this issue, open up an elevated command prompt on your Exchange 2013 server.

Check the certificate hash and application ID for 0.0.0.0:443, 0.0.0.0:444, and 127.0.0.1:443. You will notice that the application ID for these three entries is the same, but the certificate hash for 0.0.0.0:444 differs from the other two entries. And that’s the point. Remove the certificate binding for 0.0.0.0:444.
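A sketch of the corresponding commands:

    rem Show the current SSL bindings and note hash and application ID of 0.0.0.0:443
    netsh http show sslcert
    rem Remove the binding with the wrong certificate hash
    netsh http delete sslcert ipport=0.0.0.0:444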

Now add it again with the correct certificate hash and application ID.
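Again a sketch; certhash and appid are placeholders and must match the values shown for 0.0.0.0:443:

    netsh http add sslcert ipport=0.0.0.0:444 certhash=<certificate hash of 0.0.0.0:443> appid="{application ID of 0.0.0.0:443}"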

That’s it. Reboot the Exchange server and everything should be up and running again.

What’s new in Vembu BDR Suite v4.0.1

Vembu Technologies was founded in 2002, and with 60,000 customers and more than 4,000 partners, Vembu is a leading provider of a comprehensive portfolio of software products and cloud services for small and medium businesses.

In December 2018, Vembu announced the fourth major release of their BDR Suite. Vembu BDR Suite 4.0.1 is now out for production setups with enhanced performance and bug fixes. Vembu BDR Suite v4.0.1 is an intermediate patch update that addresses customer-reported issues and other support issues from the previous v4.0 build. Vembu BDR Suite v4.0.1 also features a large number of enhancements, the most significant of which are listed below.

Vembu Technologies/ Vembu BDR Essentials/ Copyright by Vembu Technologies

What’s new?

Besides bug fixes, BDR Suite v4.0.1 also includes some new enhancements. In my opinion, the most significant ones are:

  • Significant performance improvement in Quick VM Recovery on VMware environments
  • Rescan option is introduced in Hyper-V Manager Servers page, which allows you to install Vembu Integration Service on the newly added node of the Hyper-V cluster (or if it’s not available on the existing node)
  • Backups configured through BDR Server console will run in parallel (Default parallel backup count is set to 5 and it is configurable)
  • Ability to add new Hyper-V hosts or choose existing hosts while performing Live Recovery to Hyper-V host

Interested in trying Vembu BDR suite? Try the 30-day free trial now! For any questions, simply send an e-mail to vembu-support@vembu.com or follow them on Twitter.

If you are a small or mid-sized business, check out the Vembu BDR Essentials package!

User vdcs does not have the expected uid 1006

This posting is ~1 year old. You should keep this in mind. IT is a short-living business. This information might be outdated.

Sorry for the long delay since my last blog post – busy times, but with lots of vSphere. :) Today, I did an upgrade of a standalone vCenter Server Appliance at one of my healthcare customers. The vCenter was on 6.0 U3 and I had to upgrade it to 6.7 U2. It was only a small deployment with three hosts, so nothing fancy. And as in many other vSphere upgrades, I came across this warning message:

Warning User vdcs does not have the expected uid 1006
Resolution Please refer to the corresponding KB article.

I saw this message multiple times, but in the past there was no KB article about this, only a VMTN thread. And this thread mentioned that you can safely ignore this message if you don’t use a Content Library. Confirmation enough to proceed with the upgrade. :)

Meanwhile, there is a KB article:

Uploading content to the library fails with error: Content Library Service does not have write permission on this storage backing (52559)

This is a statement from the KB article:

Note: You can safely ignore this message if you are not using Content Library Service before the upgrade, or using it only for libraries not backed by NFS storage.

Currently, I don’t have customers with NFS-backed Content Libraries, but if you do, you might want to take a look at it. Especially if you have done an upgrade from 6.0 to 6.5 or 6.7 and you want to start using Content Libraries now.

Make your life easier – KeeAgent for KeePass

This posting is ~1 year old. You should keep this in mind. IT is a short-living business. This information might be outdated.

Using a password safe, or password management system, is not a best practice – it’s common practice. I have been using KeePass for years, because it’s available for different platforms, it can be used offline, it is open source, and it is not bound to any cloud service. KeePass allows me to securely store usernames, passwords, recovery codes, etc. for different services and websites, and together with features like auto-type, KeePass offers a plus in security and convenience.

I use 2FA or MFA wherever I can. That’s the reason why I’m a big fan of SSH public key authentication. But SSH key handling is sometimes inconvenient. You simply don’t want to store your SSH private keys on a cloud drive, and you don’t want to store them on a USB stick, or distribute them over different devices. In the past, I stored my SSH private keys in an encrypted container on a cloud drive. When I needed a key, I decrypted the container and was able to use the keys. But this solution was inconvenient.

So what to do?

AbsolutVision/ pixabay.com/ Pixabay License

While searching for a solution, I stumbled over KeeAgent, a plugin for KeePass. KeeAgent allows you to store SSH keys in a KeePass database. KeeAgent then acts as an SSH agent. I’m using this with PuTTY and MobaXterm and it works like a charm.

Setup KeeAgent

All you need is KeePass 2.x and the KeeAgent plugin. After installing the plugin (simply put the plgx file into C:\Program Files (x86)\KeePass Password Safe 2\Plugins), you can create a new entry in your KeePass database.

The password is the SSH private key passphrase. Then add the public and private key file to the newly created KeePass database entry.

The KeeAgent.settings entry will be added automatically. Jump to the “KeeAgent” tab.

If required, keys can be loaded automatically when the database is unlocked, or you can add them later using the menu “Extras > KeeAgent”. Not every database entry can be used with KeeAgent; you have to enable the first checkbox to allow KeeAgent to use a specific database entry.

I create a database entry for each key pair I want to use with KeeAgent. And I only add frequently used keys automatically to KeeAgent. I have tons of keys and 99% of them are only added if I need them.

With KeeAgent in place, I can start new SSH sessions and KeeAgent delivers the matching key. You can see this in this screenshot “…from agent”.

I really don’t want to miss KeePass and KeeAgent. It makes my life easier and more secure.

Vembu CloudDR – Disaster Recovery as a Cloud Service

This posting is ~1 year old. You should keep this in mind. IT is a short-living business. This information might be outdated.

When it comes to disaster recovery (DR), dedicated offsite infrastructure is a must. If you follow the 3-2-1 backup rule, then you should have at least three copies of your data, on two different media, and one copy should be offsite.

But an offsite copy of your data can be expensive… You have to set up storage and networking in a suitable colocation. And even if you have an offsite copy of your data, you must be able to recover the data. This could be fun in case of terabytes of data and an offsite copy on tape.

An offsite copy in the cloud is much more interesting. No need to provide hardware, software, or licenses. Just provide internet connectivity, book a suitable plan, and you are ready to go.

Replication to Cloud using Vembu CloudDR

Vembu offers a cloud-based disaster recovery plan through its own cloud service, which is hosted on Amazon Web Services (AWS). This product is designed for businesses that can’t afford, or are not willing, to set up a dedicated offsite infrastructure for disaster recovery.

The data backed up by the Vembu BDR server is replicated to the Vembu Cloud. In case of a disaster, the backup data can be restored directly from the cloud at any time and from anywhere. The replication is managed and monitored using the CloudDR portal.

Before you can enable the offsite replication, you have to register your Vembu BDR server with your Vembu Portal account. You can either go to onlinebackup.vembu.com, or you can go to portal.vembu.com and sign up.

Vembu Technologies/ Vembu CloudDR/ Copyright by Vembu Technologies

After configuring schedule, retention and bandwidth usage, Vembu CloudDR is ready to go.

The end is near – time for recovery

CloudDR offers two types of recovery:

  • Image Based Recovery
  • Application Based Recovery

In case of an image based recovery, you can either download a VMDK or VHD(X) image, or you can do a file level recovery. In this case you can restore single files from inside of a chosen image.

You can even download a VHD(X) image of a VMware backup, which allows you some kind of V2V or P2V restores.

In case of an application-based recovery, you can recover single application items from

  • Microsoft Exchange
  • Microsoft SharePoint
  • Microsoft SQL Server, or
  • MySQL

Depending on the type of restore, you will get an encrypted and password-protected ZIP file with documents, or even MDF/LDF files. These files can then be used to restore the lost data.

Summary

Vembu CloudDR is a pretty interesting add-on for Vembu customers. It’s easy to set up, has an attractive price tag, and consequently addresses SMB customers.

Feel free to request a demo or try Vembu CloudDR.

Vembu BDR Essentials – Now up to 10 CPU Sockets

This posting is ~1 year old. You should keep this in mind. IT is a short-living business. This information might be outdated.

It is pretty common that vendors offer their products in special editions for SMB customers. VMware offers VMware vSphere Essentials and Essentials Plus, Veeam offers Veeam Backup Essentials, and Vembu has Vembu BDR Essentials.

Now Vembu has extended their Vembu BDR Essentials package significantly to address the needs of mid-sized businesses.

Vembu Technologies/ Vembu BDR Essentials/ Copyright by Vembu Technologies

Affordable backup for SMB customers

Most SMB virtualization deployments consist of two or three hosts, which makes 4 or 6 used CPU sockets. Because of this, Vembu BDR Essentials supported up to 6 sockets or 50 VMs. Yes, 6 sockets OR 50 VMs. Vembu has now raised this limit to 10 sockets OR 100 VMs! This allows customers to use up to five 2-socket hosts or 100 VMs with less than 10 sockets.

Feature Highlights

Vembu BDR Essentials supports all important features:

  • Agentless VMBackup to backup VMs
  • Continuous Data Protection with support for RPOs of less than 15 minutes
  • Quick VM Recovery to get failed VMs up and running in minutes
  • Vembu Universal Explorer to restore individual items from Microsoft applications like Exchange, SharePoint, SQL and Active Directory
  • Replication of VMs with Vembu OffsiteDR and Vembu CloudDR

Needless to say that Vembu BDR Essentials supports VMware vSphere and Microsoft Hyper-V. If necessary, customers can upgrade to the Standard or Enterprise edition.