It is time for some words of wisdom regarding Exchange and the supported Active Directory environments. It is the same as with the supported .NET Framework releases: the latest release does not automatically mean "supported".
To be honest: I nearly nuked a customer environment with ~300 users yesterday while preparing the domain for the first Windows Server 2019 Domain Controller.
First things first: Everything is fine! I did not prepare the forest schema for Windows Server 2019.
Support for Windows Server 2008 R2 is coming to an end, and some customers are still running it. Like my customer yesterday. Some application servers are still on 2008 R2… and so are the Domain Controllers. The customer is also running Exchange 2013 on Windows Server 2012 R2.
The customer has decided to move to Windows Server 2019 wherever possible. This includes file servers, application servers, and the Domain Controllers. One of the first steps was the deployment of Active Directory-Based Activation. The AD schema needs to be prepared for this, and I decided to prepare the schema for Windows Server 2019. I had already copied the adprep folder from the Server 2019 ISO and opened a PowerShell. And then I paused. Something felt odd. I wanted to take a look at the Exchange Server supportability matrix.
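By the way: if you want to know which schema version is currently in place before touching adprep, a single PowerShell command is enough. This is just a quick sketch, assuming the ActiveDirectory module (RSAT) is installed; the version-to-OS mapping reflects the commonly documented values (69 = Windows Server 2012 R2, 87 = Windows Server 2016, 88 = Windows Server 2019).

# Check the current Active Directory schema version
Import-Module ActiveDirectory
Get-ADObject (Get-ADRootDSE).schemaNamingContext -Properties objectVersion | Select-Object objectVersion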
Exchange 2013 does NOT support Windows Server 2019 Domain Controllers! Uhh… that was unexpected.
Always check the Exchange Server supportability matrix. Always! Regardless of whether it's about the .NET Framework, Active Directory, Outlook clients etc. Just check it every time you plan to change something in your environment.
Especially in regard to Microsoft Exchange, "newer" does not automatically mean "supported". Most of the time, the opposite is true.
This issue is described in KB2971270 and is fixed in Exchange 2013 CU6.
I published this blog post in July 2015 and it is still relevant. The feedback for this blog post was incredible, and I'm not joking when I say: I saved many admins' weekends. ;) It has shown that this error still occurs with Exchange 2016 and even 2019, maybe not because of the same bug that was fixed with Exchange 2013 CU6, but for other reasons. And the solution below still applies. Because of this, I have decided to re-publish this blog post with a modified title and this little preamble.
Feel free to leave a comment if this blog post worked for you. :)
I ran into this error a couple of times. After applying changes to SSL certificates (adding, replacing or deleting an SSL certificate) and rebooting the server, the event log is flooded with events from source "HttpEvent" and event ID 15021. The message says:
An error occurred while using SSL configuration for endpoint 0.0.0.0:444. The error status code is contained within the returned data.
If you try to access the Exchange Control Panel (ECP) or Outlook Web Access (OWA), you will get a blank website. To solve this issue, open up an elevated command prompt on your Exchange 2013 server.
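List the current SSL certificate bindings with netsh; the lines below are an excerpt of the output you will get for each binding.

netsh http show sslcert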
Verify Revocation Using Cached Client Certificate Only : Disabled
Revocation Freshness Time : 0
URL Retrieval Timeout : 0
Ctl Store Name : (null)
DS Mapper Usage : Disabled
Negotiate Client Certificate : Disabled
Check the certificate hash and application ID for 0.0.0.0:443, 0.0.0.0:444 and 127.0.0.1:443. You will notice that the application ID for these three entries is the same, but the certificate hash for 0.0.0.0:444 differs from the other two entries. And that's the point. Remove the certificate for 0.0.0.0:444.
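The removal itself is again a netsh one-liner. The following is only a sketch; the certificate hash and application ID are placeholders, and if you want to re-create the 0.0.0.0:444 binding, copy the values shown for the 0.0.0.0:443 binding from the output above.

netsh http delete sslcert ipport=0.0.0.0:444
netsh http add sslcert ipport=0.0.0.0:444 certhash=<hash of 0.0.0.0:443> appid="{application ID of 0.0.0.0:443}"

Afterwards, restart IIS (iisreset) or reboot the server, and ECP and OWA should be reachable again.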
Vembu Technologies was founded in 2002, and with 60,000 customers and more than 4,000 partners, Vembu is a leading provider with a comprehensive portfolio of software products and cloud services for small and medium businesses.
In December 2018, Vembu announced the fourth major release of their BDR Suite. Vembu BDR Suite 4.0.1 is now out for production setups with enhanced performance and bug fixes. Vembu BDR Suite v4.0.1 is an intermediate patch update that addresses customer-reported issues and other support issues from the previous build, v4.0. Vembu BDR Suite v4.0.1 also features a large number of enhancements, the most significant of which are listed below.
Besides bug fixes, BDR Suite v4.0.1 also includes some new enhancements. In my opinion, the most significant ones are:
Significant performance improvement in Quick VM Recovery on VMware environments
Rescan option introduced on the Hyper-V Manager Servers page, which allows you to install the Vembu Integration Service on a newly added node of a Hyper-V cluster (or on an existing node if it's not available there)
Backups configured through BDR Server console will run in parallel (Default parallel backup count is set to 5 and it is configurable)
Ability to add new Hyper-V hosts or choose existing hosts while performing Live Recovery to Hyper-V host
Interested in trying Vembu BDR suite? Try the 30-day free trial now! For any questions, simply send an e-mail to firstname.lastname@example.org or follow them on Twitter.
Sorry for the long delay since my last blog post – busy times, but with lots of vSphere. :) Today, I did an upgrade of a standalone vCenter Server Appliance at one of my healthcare customers. The vCenter was on 6.0 U3 and I had to upgrade it to 6.7 U2. It was only a small deployment with three hosts, so nothing fancy. And as with many other vSphere upgrades, I came across this warning message:
Warning: User vdcs does not have the expected uid 1006. Resolution: Please refer to the corresponding KB article.
I have seen this message multiple times, but in the past there was no KB article about it, only a VMTN thread. And this thread mentioned that you can safely ignore this message if you don't use a Content Library. Confirmation enough to proceed with the upgrade. :)
Note: You can safely ignore this message if you are not using Content Library Service before the upgrade, or using it only for libraries not backed by NFS storage.
Currently, I don't have customers with NFS-backed Content Libraries, but if you do, you might want to take a look at it. Especially if you have done an upgrade from 6.0 to 6.5 or 6.7 and you want to start using Content Libraries now.
Using a password safe, or password management system, is not a best practice – it's a common practice. I have been using KeePass for years, because it's available for different platforms, it can be used offline, it is open source, and it is not bound to any cloud services. KeePass allows me to securely store usernames, passwords, recovery codes etc. for different services and websites, and together with features like auto-type, KeePass offers a plus of security and convenience.
I use 2FA or MFA wherever I can. That's the reason why I'm a big fan of SSH public key authentication. But SSH key handling is sometimes inconvenient. You simply don't want to store your SSH private keys on a cloud drive, and you don't want to store them on a USB stick, or distribute them over different devices. In the past, I stored my SSH private keys on a cloud drive in an encrypted container. When I needed a key, I decrypted the container and was able to use the keys. But this solution was inconvenient.
So what to do?
While searching for a solution I stumbled over KeeAgent, which is a plugin for KeePass. KeeAgent allows you to store SSH keys in a KeePass database. KeeAgent then acts as an SSH agent. I'm using this with PuTTY and MobaXterm and it works like a charm.
All you need is KeePass 2.x and the KeeAgent plugin. After installing the plugin (simply put the plgx file into C:\Program Files (x86)\KeePass Password Safe 2\Plugins), you can create a new entry in your KeePass database.
The password is the SSH private key passphrase. Then add the public and private key file to the newly created KeePass database entry.
The KeeAgent.settings entry will be added automatically. Jump to the “KeeAgent” tab.
If required, keys can be loaded automatically when the database is unlocked, or you can add them later using the menu "Extras > KeeAgent". Not every database entry can be used with KeeAgent; you have to enable the first checkbox to allow KeeAgent to use a specific database entry.
I create a database entry for each key pair I want to use with KeeAgent. And I only add frequently used keys to KeeAgent automatically. I have tons of keys, and 99% of them are only added when I need them.
With KeeAgent in place, I can start new SSH sessions and KeeAgent delivers the matching key. You can see this in this screenshot “…from agent”.
I really don’t want to miss KeePass and KeeAgent. It makes my life easier and more secure.
When it comes to disaster recovery (DR), dedicated offsite infrastructure is a must. If you follow the 3-2-1 backup rule, then you should have at least three copies of your data, on two different media, and one copy should be offsite.
But an offsite copy of your data can be expensive… You have to set up storage and networking in a suitable colocation. And even if you have an offsite copy of your data, you must be able to recover the data. This could be fun in case of terabytes of data and an offsite copy on tape.
An offsite copy in the cloud is much more interesting. No need to provide hardware, software, or licenses. Just provide internet connectivity, book a suitable plan, and you are ready to go.
Replication to Cloud using Vembu CloudDR
Vembu offers cloud-based disaster recovery through its own cloud services, which are hosted in Amazon Web Services (AWS). This product is designed for businesses that can't afford, or are not willing, to set up a dedicated offsite infrastructure for disaster recovery.
The data backed up by the Vembu BDR server is replicated to the Vembu Cloud. In case of a disaster, the backup data can be restored directly from the cloud at any time and from anywhere. The replication is managed and monitored using the CloudDR portal.
Before you can enable the offsite replication, you have to register your Vembu BDR server with your Vembu Portal account. You can either go to onlinebackup.vembu.com, or you can go to portal.vembu.com and sign up.
After configuring schedule, retention and bandwidth usage, Vembu CloudDR is ready to go.
The end is near – time for recovery
CloudDR offers two types of recovery:
Image Based Recovery
Application Based Recovery
In case of an image-based recovery, you can either download a VMDK or VHD(X) image, or you can do a file-level recovery. In that case you can restore single files from inside a chosen image.
You can even download a VHD(X) image of a VMware backup, which allows you to do some kind of V2V or P2V restore.
In case of an application-based recovery, you can recover single application items from
Microsoft SQL Server, or
Depending on the type of restore, you will get an encrypted and password-protected ZIP file with documents, or even MDF/LDF files. These files can then be used to restore the lost data.
Vembu CloudDR is a pretty interesting add-on for Vembu customers. It's easy to set up, has an attractive price tag, and consequently addresses SMB customers.
It is pretty common that vendors offer their products in special editions for SMB customers. VMware offers VMware vSphere Essentials and Essentials Plus, Veeam offers Veeam Backup Essentials, and Vembu has Vembu BDR Essentials.
Now Vembu has extended their Vembu BDR Essentials package significantly to address the needs of mid-sized businesses.
Affordable backup for SMB customers
Most SMB virtualization deployments consist of two or three hosts, which means 4 or 6 used CPU sockets. Because of this, Vembu BDR Essentials supported up to 6 sockets or 50 VMs. Yes, 6 sockets OR 50 VMs. Vembu has now raised this limit to 10 sockets OR 100 VMs! This allows customers to use up to five 2-socket hosts, or 100 VMs on fewer than 10 sockets.
Vembu BDR Essentials supports all important features:
Agentless VMBackup to backup VMs
Continuous Data Protection with support for RPOs of less than 15 minutes
Quick VM Recovery to get failed VMs up and running in minutes
Vembu Universal Explorer to restore individual items from Microsoft applications like Exchange, SharePoint, SQL and Active Directory
Replication of VMs to Vembu OffsiteDR and Vembu CloudDR
Needless to say that Vembu BDR Essentials supports VMware vSphere and Microsoft Hyper-V. If necessary, customers can upgrade to the Standard or Enterprise edition.
Implementing a public key infrastructure (PKI) is a recurring task for me. More and more customers tend to implement a PKI in their environment. Mostly not to increase security, but rather to get rid of browser warnings because of self-signed certificates, to secure intra-org email communication with S/MIME, or to sign Microsoft Office macros.
What is a 2-tier PKI?
Why is a multi-tier PKI hierarchy a good idea? Such a hierarchy typically consists of a root Certificate Authority (CA) and an issuing CA. Sometimes you see a 3-tier hierarchy, in which a root CA, a sub CA and an issuing CA are tied together in a chain of trust.
A root CA issues, stores and signs the digital certificates for sub CAs. A sub CA issues, stores and signs the digital certificates for issuing CAs. Only an issuing CA issues, stores and signs the digital certificates for users and devices.
In a 2-tier hierarchy, a root CA issues the certificate for an issuing CA.
In case of a security breach, in which the issuing CA might become compromised, only the CA certificate of the issuing CA needs to be revoked. But what if the root CA becomes compromised? Then the whole hierarchy is lost. Because of this, a root CA is typically installed on a secured and powered-off (offline) VM or computer. It will only be powered on to publish new Certificate Revocation Lists (CRLs), or to sign/renew a new sub or issuing CA certificate.
Think about the processes! Creating a PKI is more than provisioning a couple of VMs. You also need to think about the processes around it.
Be aware of what a digital certificate is. You, or your CA, confirm the identity of a party by handing out a digital certificate. Make sure that no one can issue certificates without proof of their identity.
Think about the lifetimes of certificates! Customers tend to create root CA certificates with lifetimes of 10, 20 or even 40 years. Think about the typical lifetime of the VM or server that is necessary to run an offline root CA. Typically a server OS has a lifetime of 10 to 12 years. This should determine the lifetime of a root CA certificate. IMHO 10 years is a good compromise.
For a sub or issuing CA, a lifespan of 5 years is a good compromise. Using the same lifetime as for a root CA is not a good idea, because an issued certificate can't be valid longer than the CA certificate of the CA that issued it.
A lifespan of 1 to 3 years for things like computer or web server certificates is okay. If a certificate is used for S/MIME or code signing, you should go for a lifetime of 1 year.
But to be honest: At the end of the day, YOU decide how long your certificates will be valid.
Publish CRLs and make them accessible! You can't tell from a certificate alone whether it has been revoked by the CA. But you can use a CRL to check if a certificate is revoked. Because of this, the CA must publish CRLs regularly. Use split DNS to use the same URL for internal and external requests. Make sure that the CRL is available for external users.
This applies not only to certificates for users or computers, but also to sub and issuing CAs. So there must be a CRL from each of your CAs!
I recommend publishing CRLs to a web server and making this web server reachable over HTTP. An issued certificate includes the URL or path to the CRL of the CA that issued the certificate.
Make sure that the CRL has a meaningful validity period. For an offline root CA, which issues only a few certificates during its lifetime, this can be 1 year or more. For an issuing CA, the validity period should be only a few days.
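On a Windows-based CA, the CRL validity period can be adjusted and a new CRL can be published with certutil. Just a sketch, assuming an issuing CA with a CRL validity of one week; restart the certificate service after changing the values.

certutil -setreg CA\CRLPeriodUnits 7
certutil -setreg CA\CRLPeriod "Days"
net stop certsvc && net start certsvc
certutil -crl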
Publish AIA (Authority Information Access) information and make it accessible! AIA is a certificate extension that is used to offer two types of information:
How to get the certificate of the issuing or upper CAs, and
who the OCSP responder is, from which the revocation status of this certificate can be checked
I tend to use the same location for the AIA as for the CDP. Make sure that you configure the AIA extension before you issue the first certificates, and especially configure the AIA and CDP extensions before you issue intermediate and issuing CA certificates.
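On a Windows issuing CA, the CDP and AIA extensions can be configured with the ADCSAdministration PowerShell module (or with certutil -setreg). The following is only a sketch: pki.example.com is a placeholder for your CRL/AIA web server, and the %-tokens are the usual replacement variables for server name, CA name, CRL suffix and certificate name.

# CDP: where clients can download the CRL of this CA
Add-CACrlDistributionPoint -Uri "http://pki.example.com/CertEnroll/%3%8%9.crl" -AddToCertificateCdp -Force
# AIA: where clients can download the certificate of this CA
Add-CAAuthorityInformationAccess -Uri "http://pki.example.com/CertEnroll/%1_%3%4.crt" -AddToCertificateAia -Force
Restart-Service certsvc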
Use a secure hash algorithm and key length! Please stop using SHA1! I recommend at least SHA256 and a 4096-bit key length. Depending on the used CPUs, SHA512 can be faster than SHA256.
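If you have an existing Windows CA and want to check or change the hash algorithm, certutil can do this as well. Again only a sketch: it assumes the CA uses a CNG key storage provider, and it only affects certificates and CRLs signed after the change.

certutil -getreg ca\csp\CNGHashAlgorithm
certutil -setreg ca\csp\CNGHashAlgorithm SHA256
net stop certsvc && net start certsvc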
Create a CApolicy.inf! The CApolicy.inf is located under C:\Windows and will be used during the creation of the CA certificate. I often use these CApolicy.inf files.
For the root CA:
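This is only an example that reflects the recommendations above: a 10-year renewal validity, a 4096-bit key, and a yearly CRL for the offline root CA. Adjust the values to your own design decisions.

[Version]
Signature="$Windows NT$"

[Certsrv_Server]
RenewalKeyLength=4096
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=10
CRLPeriod=Years
CRLPeriodUnits=1
CRLDeltaPeriod=Days
CRLDeltaPeriodUnits=0
LoadDefaultTemplates=0
AlternateSignatureAlgorithm=0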
For the issuing CA:
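Again only an example, this time with a 5-year renewal validity and a CRL that is valid for one week:

[Version]
Signature="$Windows NT$"

[Certsrv_Server]
RenewalKeyLength=4096
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=5
CRLPeriod=Days
CRLPeriodUnits=7
CRLDeltaPeriod=Days
CRLDeltaPeriodUnits=0
LoadDefaultTemplates=0
AlternateSignatureAlgorithm=0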
I do not claim that this blog post covers all necessary aspects of such a complex thing as a PKI. But I hope that I have mentioned some of the important parts. And at the very least: I now have a reference from which I can copy and paste the CApolicy.inf files. :D
I got this error in a new deployment of Veeam Backup & Replication 9.5 Update 4. The error occurred every day at 9 pm.
24.02.2019 21:00:11 :: Error: Remote deployment and management is available for licensed agents only. Please change your backup server settings to allow managed agents to consume the license, then perform a protection group rescan.
The solution to this issue is pretty simple. Make sure that you allow the consumption of licenses for free agents. You will find this option under General > License.
Another workaround is to disable the protection group. Right click “Manually Added” under “Physical & Cloud Infrastructure” and click “Disable”.
Let me know if one of these workarounds worked for you. :)
Today, a customer called me and reported, at first sight, a pretty weird error: only Windows clients were unable to log in to a WPA2-Enterprise wireless network. The setup itself was pretty simple: Cisco Meraki WiFi access points, a Windows Network Policy Server (NPS) on a Windows Server 2016 Domain Controller, and a Sophos SG 125 acting as DHCP server for the different WiFi networks.
Windows clients failed to authenticate, but Apple iOS, Android, and even Windows 10 Tablets had no problem.
The following error was logged in the Windows Security event log.
Connection Request Policy Name: Use Windows authentication for all users
Logging Results: Accounting information was written to the local log file.
Reason Code: 16
Reason: Authentication failed due to a user credentials mismatch. Either the user name provided does not map to an existing user account or the password was incorrect.
The credentials were definitely correct; the customer and I tried different user and password combinations.
I also checked the NPS network policy. When choosing PEAP as the authentication type, the NPS needs a valid server certificate. This is necessary because the EAP session is protected by a TLS tunnel. A valid certificate was in place, in this case a wildcard certificate. A second certificate was also available: a certificate for the domain controller, issued by the internal enterprise CA.
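If you want to check which server certificates are available to the NPS, you can list the local computer certificate store with PowerShell; which of them is actually used for PEAP is configured in the NPS network policy. Just a quick sketch:

# List certificates in the local computer store, including their intended purposes
Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, NotAfter, Thumbprint, EnhancedKeyUsageList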
It was an educated guess, but I disabled the server certificate check for the WPA2-Enterprise connection, and the client was able to log in to the WiFi. This clearly showed that the certificate was the problem. But it was valid, all necessary CA certificates were in place, and there was no obvious reason why the certificate should be the cause.
The customer told me that they had installed updates on Friday (today is Monday), and a reboot of the domain controller was issued. This also restarted the NPS service, and with this restart, the wildcard certificate was used for client connections.
I switched to the domain controller certificate, restarted the NPS, and all Windows clients were again able to connect to the WiFi.
Try to avoid wildcard certificates, or at least check the certificate that is used by the NPS if you get authentication errors with reason code 16.