vcloudnine.de is the personal blog of Patrick Terlisten. Patrick has a strong focus on virtualization and cloud solutions, but also on storage, networking, and IT infrastructure in general. He is a fan of Lean Management and agile methods, and practices continuous improvement wherever possible. Feel free to follow him on Twitter and/or leave a comment.
This is a situation that should never happen, and I have had to deal with it only a couple of times in more than 10 years of working with VMware vSphere/ESXi. In most cases, the reason was the use of thin-provisioned disks together with small datastores. Yes, that’s bad design. Yes, this should never happen.
There is a nearly 100% chance that this setup will fail one day, either because someone dumps a lot of data into the VMs, or because of VM snapshots. But such a setup WILL FAIL one day.
Yesterday was one of those days, and five VMs stopped working on a small ESXi host at one of my customers’ sites. A quick look into the vCenter confirmed my first assumption: the datastore was full. My second thought: why are there so many VMs on that small ESXi host, and why are they thin-provisioned?
The vCenter showed me the following message on each VM:
There is no more space for virtual disk $VMNAME.vmdk. You might be able to continue this session by freeing disk space on the relevant volume, and clicking Retry. Click Cancel to terminate this session.
Okay, what to do? First things first:
Is there any unallocated space left on the RAID group? If yes, expand the VMFS.
Are there any VM snapshots left? If yes, remove them.
Configure a 100% memory reservation for the VMs. This removes the VM memory swap files and releases a decent amount of disk space.
Remove ISO files from the datastore.
Remove VMs (if you have a backup and they are not necessary for the business).
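Why does the 100% memory reservation trick free disk space? Each powered-on VM gets a .vswp file sized at its configured memory minus the memory reservation, so a full reservation shrinks it to zero. A minimal sketch of the arithmetic (the helper names are mine):

```python
def vswp_size_gb(configured_mem_gb: float, reservation_gb: float) -> float:
    """Size of a VM's .vswp file: configured memory minus memory reservation."""
    if not 0 <= reservation_gb <= configured_mem_gb:
        raise ValueError("reservation must be between 0 and configured memory")
    return configured_mem_gb - reservation_gb

def space_freed_by_full_reservation(vm_mem_sizes_gb):
    """Datastore space freed by setting a 100% memory reservation on each VM."""
    return sum(vm_mem_sizes_gb)

# Five VMs with 8 GB RAM and no reservation hold 40 GB of swap files
print(space_freed_by_full_reservation([8] * 5))  # 40
```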
This should allow you to continue the operation of the VMs. To solve the problem permanently:
Add disks to the server and expand the VMFS, or create a new datastore.
Add an NFS datastore.
Remove unnecessary VMs.
Set up working monitoring and alarms, do not overprovision datastores, or switch to eager-zeroed disks.
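A monitoring check for the last point only needs three numbers per datastore. A rough sketch, with thresholds and names of my own choosing:

```python
def datastore_health(capacity_gb: float, provisioned_gb: float, free_gb: float,
                     warn_free: float = 0.25, overcommit_max: float = 1.0):
    """Return warnings for the two conditions that caused the outage above:
    low free space and thin-provisioning overcommitment."""
    warnings = []
    if free_gb / capacity_gb < warn_free:
        warnings.append(f"free space below {warn_free:.0%}")
    if provisioned_gb / capacity_gb > overcommit_max:
        warnings.append("datastore is overprovisioned")
    return warnings
```

A 100 GB datastore with 150 GB provisioned and only 5 GB free would trigger both warnings.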
Such an issue should not happen. It is not rude to say it here: this is simply due to bad design and a lack of operational processes.
Sorry for the long delay since my last blog post – busy times, but with lots of vSphere. :) Today, I did an upgrade of a standalone vCenter Server Appliance at one of my healthcare customers. The vCenter was on 6.0 U3 and I had to upgrade it to 6.7 U2. It was only a small deployment with three hosts, so nothing fancy. And as in many other vSphere upgrades, I came across this warning message:
Warning: User vdcs does not have the expected uid 1006. Resolution: Please refer to the corresponding KB article.
I have seen this message multiple times, but in the past there was no KB article about it, only a VMTN thread. And this thread mentioned that you can safely ignore this message if you don’t use a Content Library. Confirmation enough to proceed with the upgrade. :)
Note: You can safely ignore this message if you are not using Content Library Service before the upgrade, or using it only for libraries not backed by NFS storage.
Currently, I don’t have customers with NFS-backed Content Libraries, but if you do, you might want to take a look at it, especially if you have done an upgrade from 6.0 to 6.5 or 6.7 and want to start using Content Libraries now.
TL;DR: This bug is still up to date and has not been fixed yet! A user in the VMTN thread mentioned a hotpatch from VMware, which seems to have been pulled. A fix for this issue will be available with ESXi 6.5 U3 and 6.7 U3. The only workaround is to place VMs on VMFS 5 datastores, or to avoid the use of snapshots if you have to use VMFS 6. I can confirm that Windows 10 1903 is also affected.
One of my customers told me that they have massive performance problems with a Horizon View deployment at one of their customers. We talked about this issue, and they mentioned that it was related to Windows 10 1809 and VMFS 6. A short investigation showed that this issue was well known, and that even VMware is working on it. In their case, another IT company had installed the Cisco HyperFlex solution, and the engineer was unaware of this issue.
What do we know so far? In October 2018 (!), shortly after the release of Windows 10 1809, a thread came up in the VMTN (windows 10 1809 slow). According to the posted test results, the issue occurs under the following conditions:
Windows 10 1809
VMware ESXi 6.5 or 6.7 (regardless of build level)
VM has at least one snapshot
VM is placed on a VMFS 6 datastore
Space reclamation enabled or disabled (it makes no difference)
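The conditions above can be summed up as a tiny predicate. This is just a sketch of the reported behavior; the parameter names and encodings are mine:

```python
def hits_vmfs6_snapshot_bug(win10_release: str, esxi_version: tuple,
                            has_snapshot: bool, datastore_fs: str) -> bool:
    """True if a VM matches the conditions reported in the VMTN thread.
    Space reclamation settings make no difference, so they are not a parameter."""
    return (win10_release in ("1809", "1903")       # affected Windows 10 releases
            and esxi_version in ((6, 5), (6, 7))    # any 6.5/6.7 build level
            and has_snapshot                        # at least one snapshot
            and datastore_fs == "VMFS6")            # VMFS 5 is not affected
```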
The “official” statement of the VMware support is:
The issue is identified to be due to some guest OS behavior change in this version of windows 10, 1809 w.r.t thin provisioned disks and snapshots, this has been confirmed as a bug and will be fixed in the following releases – 6.5 U3 and 6.7U3, which will be released within End of this year (2019).
I don’t care if the root cause is VMFS 6 or Windows 10, but VMware and Microsoft need to get this fixed fast! Just to make this clear: you will face the same issues regardless of whether you run Windows 10 in a VM, use Windows 10 with Horizon View, or Windows 10 with Citrix. When VMFS 6 and snapshots come into play, you will run into this performance issue.
I will update this blog post when I get some news.
Using a password safe, or password management system, is not a best practice – it’s common practice. I have been using KeePass for years, because it is available for different platforms, it can be used offline, it is open source, and it is not bound to any cloud services. KeePass allows me to securely store usernames, passwords, recovery codes, etc. for different services and websites, and together with features like autotype, KeePass offers a plus in both security and convenience.
I use 2FA or MFA wherever I can. That’s the reason why I’m a big fan of SSH public key authentication. But SSH key handling is sometimes inconvenient. You simply don’t want to store your SSH private keys on a cloud drive, and you don’t want to store them on a USB stick, or distribute them over different devices. In the past, I stored my SSH private keys on a cloud drive in an encrypted container. When I needed a key, I decrypted the container and was able to use it. But this solution was inconvenient.
So what to do?
While searching for a solution, I stumbled over KeeAgent, a plugin for KeePass. KeeAgent allows you to store SSH keys in a KeePass database and then acts as an SSH agent. I’m using this with PuTTY and MobaXterm, and it works like a charm.
All you need is KeePass 2.x and the KeeAgent plugin. After installing the plugin (simply put the plgx file into C:\Program Files (x86)\KeePass Password Safe 2\Plugins), you can create a new entry in your KeePass database.
The password is the SSH private key passphrase. Then add the public and private key files to the newly created KeePass database entry.
The KeeAgent.settings entry will be added automatically. Jump to the “KeeAgent” tab.
If required, keys can be loaded automatically when the database is unlocked, or you can add them later using the menu “Extras > KeeAgent”. Not every database entry can be used with KeeAgent: you have to enable the first checkbox to allow KeeAgent to use a specific database entry.
I create a database entry for each key pair I want to use with KeeAgent, and I only add frequently used keys to KeeAgent automatically. I have tons of keys, and 99% of them are only added when I need them.
With KeeAgent in place, I can start new SSH sessions and KeeAgent delivers the matching key. You can see this in the screenshot: “…from agent”.
I really don’t want to miss KeePass and KeeAgent. It makes my life easier and more secure.
When it comes to disaster recovery (DR), dedicated offsite infrastructure is a must. If you follow the 3-2-1 backup rule, then you should have at least three copies of your data, on two different media, and one copy should be offsite.
But an offsite copy of your data can be expensive… You have to set up storage and networking in a suitable colocation. And even if you have an offsite copy of your data, you must be able to recover the data. This could be fun in case of terabytes of data and an offsite copy on tape.
An offsite copy in the cloud is much more interesting: no need to provide hardware, software, or licenses. Just provide internet connectivity, book a suitable plan, and you are ready to go.
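The 3-2-1 rule is simple enough to write down as a check. A small sketch; the data shape is my own:

```python
def satisfies_3_2_1(copies):
    """Check the 3-2-1 rule: at least 3 copies of the data, on at least
    2 different media types, with at least 1 copy offsite.
    `copies` is a list of (media_type, is_offsite) tuples."""
    media_types = {media for media, _ in copies}
    has_offsite = any(offsite for _, offsite in copies)
    return len(copies) >= 3 and len(media_types) >= 2 and has_offsite

# Two local disk copies plus one cloud copy: rule satisfied
print(satisfies_3_2_1([("disk", False), ("disk", False), ("cloud", True)]))  # True
```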
Replication to Cloud using Vembu CloudDR
Vembu offers a cloud-based disaster recovery plan through its own cloud services, hosted in Amazon Web Services (AWS). This product is designed for businesses who can’t afford, or who are not willing, to set up a dedicated offsite infrastructure for disaster recovery.
The data backed up by the Vembu BDR server is replicated to the Vembu Cloud. In case of a disaster, the backup data can be restored directly from the cloud at any time and from anywhere. Replication is managed and monitored using the CloudDR portal.
Before you can enable the offsite replication, you have to register your Vembu BDR server with your Vembu Portal account. You can either go to onlinebackup.vembu.com, or you can go to portal.vembu.com and sign up.
After configuring schedule, retention, and bandwidth usage, Vembu CloudDR is ready to go.
The end is near – time for recovery
CloudDR offers two types of recovery:
Image Based Recovery
Application Based Recovery
In case of an image-based recovery, you can either download a VMDK or VHD(X) image, or do a file-level recovery. In this case, you can restore single files from inside a chosen image.
You can even download a VHD(X) image of a VMware backup, which allows for some kind of V2V or P2V restore.
In case of an application-based recovery, you can recover single application items from
Microsoft SQL Server, or
Depending on the type of restore, you will get an encrypted and password-protected ZIP file with documents, or even MDF/LDF files. These files can then be used to restore the lost data.
Vembu CloudDR is a pretty interesting add-on for Vembu customers. It is easy to set up, has an attractive price tag, and consequently addresses SMB customers.
It is pretty common that vendors offer their products in special editions for SMB customers. VMware offers VMware vSphere Essentials and Essentials Plus, Veeam offers Veeam Backup Essentials, and Vembu has Vembu BDR Essentials.
Now Vembu has extended their Vembu BDR Essentials package significantly to address the needs of mid-sized businesses.
Affordable backup for SMB customers
Most SMB virtualization deployments consist of two or three hosts, which makes 4 or 6 used CPU sockets. Because of this, Vembu BDR Essentials supported up to 6 sockets or 50 VMs. Yes, 6 sockets OR 50 VMs. Vembu has now raised this limit to 10 sockets OR 100 VMs! This allows customers to use up to five 2-socket hosts, or 100 VMs running on fewer than 10 sockets.
Vembu BDR Essentials support all important features:
Agentless VMBackup to backup VMs
Continuous Data Protection with support for RPOs of less than 15 minutes
Quick VM Recovery to get failed VMs up and running in minutes
Vembu Universal Explorer to restore individual items from Microsoft applications like Exchange, SharePoint, SQL and Active Directory
Replication of VMs with Vembu OffsiteDR and Vembu CloudDR
Needless to say, Vembu BDR Essentials supports VMware vSphere and Microsoft Hyper-V. If necessary, customers can upgrade to the Standard or Enterprise edition.
Yesterday, I got one of those mails from a customer that make you think “Ehm, no”.
Can you please enable the TPM on all VMs.
The short answer is “Ehm, no!”. But I’m a kind guy, so I added some explanation to my answer.
Let’s add some context to this topic. The Trusted Platform Module (TPM) is a cryptoprocessor that offers various security functions. For example, BitLocker uses the TPM to protect encryption keys. But there is another pretty interesting Windows feature that requires a TPM: “Virtualization-based Security”, or VBS. In contrast to BitLocker, VBS might be a feature that you want to use inside a VM.
VBS uses virtualization features to create an isolated and secure region of memory that is separated from the normal operating system. VBS is required if you want to use Windows Defender Credential Guard, which protects secrets like NTLM password hashes or Kerberos ticket-granting tickets against pass-the-hash or pass-the-ticket (PtH) attacks. VBS is also required when you want to use Windows Defender Exploit Guard or Windows Defender Application Control.
Credential Guard, Exploit Guard, and Application Control require a TPM 2.0 (and some other stuff, like UEFI, and some CPU extensions).
So, just add the vTPM module to a VM and you are ready to go? Ehm… no.
the guest OS must be Windows Server 2016, Windows Server 2019, or Windows 10,
the ESXi host must be at least ESXi 6.7, and
the virtual machine must use UEFI firmware
Okay, no big deal. But there is a fourth prerequisite that must be met:
your vSphere environment is configured for virtual machine encryption
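Taken together, the four prerequisites can be expressed as a simple check. This is only a sketch; the parameter names and version encoding are my own:

```python
def vtpm_prerequisites_met(guest_os: str, esxi_version: tuple,
                           firmware: str, vm_encryption_configured: bool) -> list:
    """Return a list of unmet vTPM prerequisites (empty list = good to go).
    Versions are (major, minor) tuples, e.g. (6, 7) for ESXi 6.7."""
    problems = []
    if guest_os not in ("windows-2016", "windows-2019", "windows-10"):
        problems.append("guest OS must be Windows Server 2016/2019 or Windows 10")
    if esxi_version < (6, 7):
        problems.append("ESXi host must be at least 6.7")
    if firmware.lower() != "uefi":
        problems.append("VM must use UEFI firmware")
    if not vm_encryption_configured:
        problems.append("vSphere VM encryption (KMS) must be configured")
    return problems
```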
And now things might get complicated… or expensive… or both.
Why do you need VM encryption when you want to add a vTPM?
The TPM can be used to securely store encryption keys, so the vTPM must offer a similar feature. In case of the vTPM, the data is written to the “Non-Volatile Secure Storage” of the VM, which is the .nvram file in the VM directory. To protect this data, the .nvram file is encrypted using the vSphere VM Encryption feature. In addition to the .nvram file, parts of the VMX file, the swap file, the vmware.log, and some other files are also encrypted – but not the VMDKs, unless you decide to encrypt them.
Before you can start using VM encryption, you have to add a Key Management Server (KMS) to your vCenter. And you had better add a KMS cluster to your vCenter, because you don’t want the KMS to be a single point of failure. The vCenter Server requests keys from the KMS. The KMS generates and stores the keys, and passes them to third-party systems, like the vCenter, using the Key Management Interoperability Protocol (KMIP) 1.1.
Make sure that you think about administrator permissions, role-based access control (RBAC), and disaster recovery. When you have to deal with security, you don’t want users to work with a general, high-privilege administrator account. And think about disaster recovery! You won’t be able to start encrypted VMs until you have re-established trust between your vCenter and your KMS (cluster). So be prepared, and do not implement a single KMS.
And this is why vTPM is nothing you simply enable on all VMs. Because it’s security. And security has to be done right.
Mike Foley has written two awesome blog posts about this topic. Make sure that you read them.
Implementing a public key infrastructure (PKI) is a recurring task for me. More and more customers tend to implement a PKI in their environment – mostly not to increase security, but rather to get rid of browser warnings caused by self-signed certificates, to secure intra-org email communication with S/MIME, or to sign Microsoft Office macros.
What is a 2-tier PKI?
Why is a multi-tier PKI hierarchy a good idea? Such a hierarchy typically consists of a root Certificate Authority (CA) and an issuing CA. Sometimes you see a 3-tier hierarchy, in which a root CA, a sub CA, and an issuing CA are tied together in a chain of trust.
A root CA issues, stores, and signs the digital certificates for sub CAs. A sub CA issues, stores, and signs the digital certificates for issuing CAs. Only an issuing CA issues, stores, and signs the digital certificates for users and devices.
In a 2-tier hierarchy, a root CA issues the certificate for an issuing CA.
In case of a security breach in which the issuing CA is compromised, only the CA certificate of the issuing CA needs to be revoked. But what if the root CA becomes compromised? Because of this, a root CA is typically installed on a secured and powered-off (offline) VM or computer. It is only powered on to publish new Certificate Revocation Lists (CRLs), or to sign or renew a sub or issuing CA certificate.
Think about the processes! Creating a PKI is more than provisioning a couple of VMs – you also need to think about the processes around it.
Be aware of what a digital certificate is. You, or your CA, confirm the identity of a party by handing out a digital certificate. Make sure that no one can issue certificates without proof of their identity.
Think about the lifetimes of certificates! Customers tend to create root CA certificates with lifetimes of 10, 20, or even 40 years. Think about the typical lifetime of the VM or server that is necessary to run an offline root CA. Typically, the server OS has a lifetime of 10 to 12 years. This should determine the lifetime of the root CA certificate. IMHO, 10 years is a good compromise.
For a sub or issuing CA, a lifespan of 5 years is a good compromise. Using the same lifetime as for a root CA is not a good idea, because an issued certificate can’t be valid longer than the CA certificate of the issuing CA.
A lifespan of 1 to 3 years for things like computer or web server certificates is okay. If a certificate is used for S/MIME or code signing, you should go for a lifetime of 1 year.
But to be honest: At the end of the day, YOU decide how long your certificates will be valid.
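The reasoning behind these lifetimes in one helper: a certificate is never valid beyond the expiry of any CA above it in the chain. A small sketch (the helper name is mine):

```python
from datetime import datetime

def effective_expiry(chain):
    """Effective expiry of the last certificate in `chain`, a list of
    (name, not_after) pairs ordered from root CA down to the leaf.
    No certificate can outlive the CA certificates above it."""
    expiry = None
    for _name, not_after in chain:
        expiry = not_after if expiry is None else min(expiry, not_after)
    return expiry

# A leaf nominally valid until 2026 still dies with the issuing CA in 2024
chain = [("root CA", datetime(2029, 1, 1)),
         ("issuing CA", datetime(2024, 1, 1)),
         ("web server", datetime(2026, 1, 1))]
print(effective_expiry(chain))  # 2024-01-01 00:00:00
```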
Publish CRLs and make them accessible! You can’t know from a certificate alone whether it has been revoked by a CA, but you can use a CRL to check its revocation status. Because of this, the CA must publish CRLs regularly. Use split DNS so that the same URL works for internal and external requests. Make sure that the CRL is available for external users.
This applies not only to certificates for users or computers, but also for sub and issuing CAs. So there must be a CRL from each of your CAs!
I recommend publishing CRLs to a webserver and making this webserver reachable over HTTP. An issued certificate includes the URL or path to the CRL of the CA that issued it.
Make sure that the CRL has a meaningful validity period. For an offline root CA, which issues only a few certificates over its lifetime, this can be 1 year or more. For an issuing CA, the validity period should be only a few days.
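A CRL carries a nextUpdate timestamp, and a fresh CRL has to be published before the current one expires – ideally with some overlap, so clients never end up holding an expired CRL. A small sketch (the helper name and overlap logic are mine):

```python
from datetime import datetime, timedelta

def crl_publish_deadline(this_update: datetime, validity_days: int,
                         overlap_days: int = 1) -> datetime:
    """Latest point in time a fresh CRL should be published, keeping an
    overlap window before the current CRL expires."""
    if overlap_days >= validity_days:
        raise ValueError("overlap must be shorter than the CRL validity")
    return this_update + timedelta(days=validity_days - overlap_days)

# Issuing CA with a 7-day CRL published on March 1st: publish the next one by March 7th
print(crl_publish_deadline(datetime(2019, 3, 1), 7))  # 2019-03-07 00:00:00
```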
Publish AIA (Authority Information Access) information and make it accessible! AIA is a certificate extension that is used to offer two types of information:
How to get the certificate of the issuing or upper CAs, and
who the OCSP responder is, from which the revocation status of this certificate can be checked
I tend to use the same location for the AIA as for the CDP (CRL Distribution Point). Make sure that you configure the AIA extension before you issue the first certificates; especially configure the AIA and CDP extensions before you issue intermediate and issuing CA certificates.
Use a secure hash algorithm and key length! Please stop using SHA1! I recommend at least SHA256 and a 4096-bit key length. Depending on the CPUs used, SHA512 can even be faster than SHA256.
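For illustration, the digest sizes with Python’s hashlib (the input here is arbitrary; the key length, e.g. 4096 bit for RSA, is a property of the CA key pair and is chosen independently of the hash):

```python
import hashlib

data = b"to-be-signed certificate data"  # arbitrary example input
sha1 = hashlib.sha1(data).hexdigest()      # 160-bit digest: stop using this
sha256 = hashlib.sha256(data).hexdigest()  # 256-bit digest: a sensible default
sha512 = hashlib.sha512(data).hexdigest()  # 512-bit digest: can be faster on 64-bit CPUs

print(len(sha1) * 4, len(sha256) * 4, len(sha512) * 4)  # 160 256 512
```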
Create a CApolicy.inf! The CApolicy.inf is located under C:\Windows and will be used during the creation of the CA certificate. I often use the following CApolicy.inf files.
For the root CA:
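Something along these lines, matching the lifetimes above (10-year renewal validity, 4096-bit key, yearly CRL) – treat the values as a starting point and adjust them to your environment:

```ini
[Version]
Signature="$Windows NT$"

[Certsrv_Server]
RenewalKeyLength=4096
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=10
CRLPeriod=Years
CRLPeriodUnits=1
CRLDeltaPeriodUnits=0
LoadDefaultTemplates=0
AlternateSignatureAlgorithm=0
```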
For the issuing CA:
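Again as a sketch – a 5-year renewal validity and a short CRL period, as discussed above; adjust the values to your environment:

```ini
[Version]
Signature="$Windows NT$"

[Certsrv_Server]
RenewalKeyLength=4096
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=5
CRLPeriod=Days
CRLPeriodUnits=7
CRLDeltaPeriodUnits=0
LoadDefaultTemplates=0
AlternateSignatureAlgorithm=0
```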
I do not claim that this blog post covers all necessary aspects of such a complex thing as a PKI. But I hope that I have mentioned some of the important parts. And at least I now have a reference from which I can copy and paste the CApolicy.inf files. :D
I got this error in a new deployment of Veeam Backup & Replication 9.5 Update 4. The error occurred every day at 9 pm.
24.02.2019 21:00:11 :: Error: Remote deployment and management is available for licensed agents only. Please change your backup server settings to allow managed agents to consume the license, then perform a protection group rescan.
The solution to this issue is pretty simple. Make sure that you allow the consumption of licenses for free agents. You will find this option under General > License.
Another workaround is to disable the protection group. Right click “Manually Added” under “Physical & Cloud Infrastructure” and click “Disable”.
Let me know if one of these workarounds worked for you. :)
Today, a customer called me and reported a, at first sight, pretty weird error: only Windows clients were unable to log in to a WPA2-Enterprise wireless network. The setup itself was pretty simple: Cisco Meraki WiFi access points, a Network Policy Server (NPS) on a Windows Server 2016 domain controller, and a Sophos SG 125 acting as DHCP server for the different WiFi networks.
Windows clients failed to authenticate, but Apple iOS, Android, and even Windows 10 tablets had no problems.
The following error was logged in the Windows security event log:
Connection Request Policy Name: Use Windows authentication for all users
Logging Results: Accounting information was written to the local log file.
Reason Code: 16
Reason: Authentication failed due to a user credentials mismatch. Either the user name provided does not map to an existing user account or the password was incorrect.
The credentials were definitely correct; the customer and I tried different user and password combinations.
I also checked the NPS network policy. When choosing PEAP as the authentication type, the NPS needs a valid server certificate, because the EAP session is protected by a TLS tunnel. A valid certificate was present, in this case a wildcard certificate. A second certificate was also in place: a certificate for the domain controller from the internal enterprise CA.
It was an educated guess, but I disabled the server certificate check for the WPA2-Enterprise connection, and the client was able to log in to the WiFi. This clearly showed that the certificate was the problem. But it was valid, all necessary CA certificates were in place, and there was no obvious reason why the certificate should be the cause.
The customer told me that they had installed updates on Friday (today is Monday), and that a reboot of the domain controller was issued. This also restarted the NPS service, and with this restart, the wildcard certificate was picked up for client connections.
I switched to the domain controller certificate, restarted the NPS, and all Windows clients were again able to connect to the WiFi.
Try to avoid wildcard certificates, or at least check the certificate that is used by the NPS if you get authentication errors with reason code 16.