Using a password safe, or password management system, is not a best practice – it’s a common practice. I have been using KeePass for years, because it’s available for different platforms, it can be used offline, it is open source, and it is not bound to any cloud service. KeePass allows me to securely store usernames, passwords, recovery codes etc. for different services and websites, and together with features like auto-type, KeePass offers a plus in security and convenience.
I use 2FA or MFA wherever I can. That’s the reason why I’m a big fan of SSH public key authentication. But SSH key handling is sometimes inconvenient: you simply don’t want to store your SSH private keys on a cloud drive or a USB stick, and you don’t want to distribute them over different devices. In the past, I stored my SSH private keys in an encrypted container on a cloud drive. When I needed a key, I decrypted the container and was able to use the keys. But this solution was inconvenient.
So what to do?
While searching for a solution I stumbled over KeeAgent, a plugin for KeePass. KeeAgent allows you to store SSH keys in a KeePass database and then acts as an SSH agent. I’m using this with PuTTY and MobaXterm, and it works like a charm.
All you need is KeePass 2.x and the KeeAgent plugin. After installing the plugin (simply put the plgx file into C:\Program Files (x86)\KeePass Password Safe 2\Plugins), you can create a new entry in your KeePass database.
The password of the entry is the passphrase of the SSH private key. Then add the public and private key files to the newly created KeePass database entry.
The KeeAgent.settings entry will be added automatically. Jump to the “KeeAgent” tab.
If required, keys can be loaded automatically when the database is unlocked, or you can load them later using the menu “Extras > KeeAgent”. Not every database entry can be used with KeeAgent: you have to enable the first checkbox on the “KeeAgent” tab to allow KeeAgent to use a specific database entry.
I create a database entry for each key pair I want to use with KeeAgent, and I only load frequently used keys automatically. I have tons of keys, and 99% of them are only loaded when I need them.
With KeeAgent in place, I can start new SSH sessions and KeeAgent delivers the matching key. You can see this in the screenshot: “…from agent”.
I really wouldn’t want to be without KeePass and KeeAgent. They make my life easier and more secure.
When it comes to disaster recovery (DR), dedicated offsite infrastructure is a must. If you follow the 3-2-1 backup rule, then you should have at least three copies of your data, on two different media, and one copy should be offsite.
But an offsite copy of your data can be expensive… You have to set up storage and networking in a suitable colocation. And even if you have an offsite copy of your data, you must be able to recover it. This can be quite a challenge if you have terabytes of data and your offsite copy is on tape.
An offsite copy in the cloud is much more interesting: no need to provide hardware, software, or licenses. Just provide internet connectivity, book a suitable plan, and you are ready to go.
Replication to Cloud using Vembu CloudDR
Vembu offers a cloud-based disaster recovery plan through its own cloud services, which are hosted in Amazon Web Services (AWS). This product is designed for businesses that can’t afford, or are not willing, to set up a dedicated offsite infrastructure for disaster recovery.
The data backed up by the Vembu BDR server is replicated to the Vembu Cloud. In case of a disaster, the backup data can be restored directly from the cloud at any time and from anywhere. The replication is managed and monitored using the CloudDR portal.
Before you can enable the offsite replication, you have to register your Vembu BDR server with your Vembu Portal account. You can either go to onlinebackup.vembu.com, or you can go to portal.vembu.com and sign up.
After configuring schedule, retention and bandwidth usage, Vembu CloudDR is ready to go.
The end is near – time for recovery
CloudDR offers two types of recovery:
Image Based Recovery
Application Based Recovery
In case of an image-based recovery, you can either download a VMDK or VHD(X) image, or you can do a file-level recovery, restoring single files from inside a chosen image.
You can even download a VHD(X) image of a VMware backup, which allows some kind of V2V or P2V restore.
In case of an application-based recovery, you can recover single application items, for example from Microsoft SQL Server.
Depending on the type of restore, you will get an encrypted and password-protected ZIP file with documents, or even MDF/LDF files. These files can then be used to restore the lost data.
Vembu CloudDR is a pretty interesting add-on for Vembu customers. It’s easy to set up, has an attractive price tag, and consequently addresses SMB customers.
It is pretty common that vendors offer their products in special editions for SMB customers. VMware offers VMware vSphere Essentials and Essentials Plus, Veeam offers Veeam Backup Essentials, and Vembu has Vembu BDR Essentials.
Now Vembu has extended their Vembu BDR Essentials package significantly to address the needs of mid-sized businesses.
Affordable backup for SMB customers
Most SMB virtualization deployments consist of two or three hosts, which means 4 to 6 used CPU sockets. Because of this, Vembu BDR Essentials supported up to 6 sockets or 50 VMs. Yes, 6 sockets OR 50 VMs. Vembu has now raised this limit to 10 sockets OR 100 VMs! This allows customers to use up to five 2-socket hosts, or up to 100 VMs on hosts with less than 10 sockets.
Vembu BDR Essentials supports all important features:
Agentless VMBackup to backup VMs
Continuous Data Protection with support for RPOs of less than 15 minutes
Quick VM Recovery to get failed VMs up and running in minutes
Vembu Universal Explorer to restore individual items from Microsoft applications like Exchange, SharePoint, SQL and Active Directory
Replication of VMs with Vembu OffsiteDR and Vembu CloudDR
Needless to say that Vembu BDR Essentials supports VMware vSphere and Microsoft Hyper-V. If necessary, customers can upgrade to the Standard or Enterprise edition.
Implementing a public key infrastructure (PKI) is a recurring task for me. More and more customers tend to implement a PKI in their environment, mostly not to increase security, but rather to get rid of browser warnings caused by self-signed certificates, to secure intra-org email communication with S/MIME, or to sign Microsoft Office macros.
What is a 2-tier PKI?
Why is a multi-tier PKI hierarchy a good idea? Such a hierarchy typically consists of a root Certificate Authority (CA) and an issuing CA. Sometimes you see a 3-tier hierarchy, in which a root CA, a sub CA and an issuing CA are tied together in a chain of trust.
A root CA issues, stores and signs the digital certificates for sub CAs. A sub CA issues, stores and signs the digital certificates for issuing CAs. Only an issuing CA issues, stores and signs digital certificates for users and devices.
In a 2-tier hierarchy, a root CA issues the certificate for an issuing CA.
In case of a security breach, in which the issuing CA might become compromised, only the CA certificate of the issuing CA needs to be revoked. But what if the root CA becomes compromised? Because of this, a root CA is typically installed on a secured and powered-off (offline) VM or computer. It will only be powered on to publish new Certificate Revocation Lists (CRL), or to sign/renew a new sub or issuing CA certificate.
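This recurring maintenance can be a pretty short procedure. A minimal sketch using certutil on a Windows root CA (the web server and share names are made up):

# Run on the powered-on root CA: issue a new CRL
certutil -crl

# Copy the new CRL from the default publication folder to the CDP web server
# (server and share names are illustrative)
Copy-Item C:\Windows\System32\CertSrv\CertEnroll\*.crl \\pki-web\pki$\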
Think about the processes! Creating a PKI is more than provisioning a couple of VMs. You need to think about the processes to issue, renew and revoke certificates.
Be aware of what a digital certificate is. You, or your CA, confirm the identity of a party by handing out a digital certificate. Make sure that no one can get a certificate issued without proof of identity.
Think about the lifetimes of your certificates! Customers tend to create root CA certificates with lifetimes of 10, 20 or even 40 years. Think about the typical lifetime of the VM or server that is necessary to run an offline root CA: a server OS typically has a lifetime of 10 to 12 years. This should determine the lifetime of a root CA certificate. IMHO 10 years is a good compromise.
For a sub or issuing CA, a lifespan of 5 years is a good compromise. Using the same lifetime as for a root CA is not a good idea, because an issued certificate can’t be valid longer than the CA certificate of the issuing CA.
A lifespan of 1 to 3 years for things like computer or web server certificates is okay. If a certificate is used for S/MIME or code signing, you should go for a lifetime of 1 year.
But to be honest: At the end of the day, YOU decide how long your certificates will be valid.
Publish CRLs and make them accessible! You can’t tell from a certificate alone whether it has been revoked; you have to check the CRL. Because of this, the CA must publish CRLs regularly. Use split DNS to use the same URL for internal and external requests, and make sure that the CRL is available for external users.
This applies not only to certificates for users or computers, but also to certificates for sub and issuing CAs. So there must be a CRL from each of your CAs!
I recommend publishing CRLs to a web server and making this web server reachable over HTTP. An issued certificate includes the URL or path to the CRL of the CA that has issued the certificate.
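On a Windows CA, the CDP URLs can be set with certutil. A minimal sketch, assuming the illustrative URL http://pki.example.com/pki/; the %3, %8 and %9 tokens are placeholders the CA expands to the CA name and CRL suffix:

# 65 = 1 (publish CRLs to this location) + 64 (publish delta CRLs to this location)
# 2 = include this URL in the CDP extension of issued certificates
certutil -setreg CA\CRLPublicationURLs "65:C:\Windows\System32\CertSrv\CertEnroll\%3%8%9.crl\n2:http://pki.example.com/pki/%3%8%9.crl"

# Restart the CA service to apply the change
Restart-Service certsvc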
Make sure that the CRL has a meaningful validity period. For an offline root CA, which issues only a few certificates over its lifetime, this can be 1 year or more. For an issuing CA, the validity period should be only a few days.
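These validity periods can be configured with certutil on each CA. A sketch with example values matching the recommendations above:

# On the offline root CA: CRLs valid for 1 year, no delta CRLs
certutil -setreg CA\CRLPeriodUnits 1
certutil -setreg CA\CRLPeriod "Years"
certutil -setreg CA\CRLDeltaPeriodUnits 0

# On the issuing CA: CRLs valid for a few days (example: one week)
certutil -setreg CA\CRLPeriodUnits 7
certutil -setreg CA\CRLPeriod "Days"

# Restart the CA service and publish a fresh CRL afterwards
Restart-Service certsvc
certutil -crl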
Publish AIA (Authority Information Access) information and make it accessible! AIA is a certificate extension that is used to offer two types of information:
how to get the certificate of the issuing or upper CAs, and
who the OCSP responder is, from which the revocation status of this certificate can be checked
I tend to use the same location for the AIA as for the CDP (CRL Distribution Point). Make sure that you configure the AIA extension before you issue the first certificates; especially, configure the AIA and CDP extensions before you issue intermediate and issuing CA certificates.
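The AIA URLs can be configured the same way as the CDP. A sketch, again with the illustrative URL http://pki.example.com/pki/; %1, %3 and %4 are placeholders for the server DNS name, the CA name and the certificate name:

# 1 = publish the CA certificate to this location
# 2 = include this URL in the AIA extension of issued certificates
certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\System32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://pki.example.com/pki/%1_%3%4.crt"

# Restart the CA service to apply the change
Restart-Service certsvc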
Use a secure hash algorithm and key length! Please stop using SHA1! I recommend at least SHA256 and a 4096-bit key length. Depending on the CPUs used, SHA512 can even be faster than SHA256.
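On an existing Windows CA that still signs with SHA1, the hash algorithm can be switched with certutil, provided the CA key is stored in a Key Storage Provider (KSP). A quick sketch:

# Check the currently configured hash algorithm
certutil -getreg ca\csp\CNGHashAlgorithm

# Switch to SHA256 (only newly issued certificates and CRLs are affected)
certutil -setreg ca\csp\CNGHashAlgorithm SHA256
Restart-Service certsvc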
Create a CApolicy.inf! The CApolicy.inf is located under C:\Windows and is used during the creation of the CA certificate. I often use CApolicy.inf files like the following.
For the root CA:
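A minimal example, matching the recommendations above (10-year lifetime, 4096-bit key, yearly CRLs); take the values as a starting point and adjust them to your requirements:

[Version]
Signature="$Windows NT$"

[Certsrv_Server]
; key and lifetime used when the CA certificate is created or renewed
RenewalKeyLength=4096
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=10
; CRL settings for the offline root CA
CRLPeriod=Years
CRLPeriodUnits=1
CRLDeltaPeriodUnits=0
; do not load the default certificate templates
LoadDefaultTemplates=0
AlternateSignatureAlgorithm=0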
For the issuing CA:
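And a minimal example for the issuing CA (5-year renewal settings; note that the actual validity period of the issuing CA certificate is ultimately determined by the signing root CA, not by this file):

[Version]
Signature="$Windows NT$"

[BasicConstraintsExtension]
; this CA must not issue further CA certificates
PathLength=0
Critical=Yes

[Certsrv_Server]
RenewalKeyLength=4096
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=5
LoadDefaultTemplates=0
AlternateSignatureAlgorithm=0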
I do not claim that this blog post covers all necessary aspects of something as complex as a PKI. But I hope that I have mentioned some of the important parts. And at least I now have a reference from which I can copy and paste the CApolicy.inf files. :D
I got this error in a new deployment of Veeam Backup & Replication 9.5 Update 4. The error occurred every day at 9 pm.
24.02.2019 21:00:11 :: Error: Remote deployment and management is available for licensed agents only. Please change your backup server settings to allow managed agents to consume the license, then perform a protection group rescan.
The solution to this issue is pretty simple: make sure that you allow free agents to consume licenses. You will find this option under General > License.
Another workaround is to disable the protection group. Right click “Manually Added” under “Physical & Cloud Infrastructure” and click “Disable”.
Let me know if one of these workarounds worked for you. :)
Today, a customer called me and reported, at first sight, a pretty weird error: only Windows clients were unable to log in to a WPA2-Enterprise wireless network. The setup itself was pretty simple: Cisco Meraki WiFi access points, a Windows Network Policy Server (NPS) on a Windows Server 2016 domain controller, and a Sophos SG 125 acting as DHCP server for the different WiFi networks.
Windows clients failed to authenticate, but Apple iOS, Android, and even Windows 10 tablets had no problem.
The following error was logged in the Windows Security event log.
Connection Request Policy Name: Use Windows authentication for all users
Logging Results: Accounting information was written to the local log file.
Reason Code: 16
Reason: Authentication failed due to a user credentials mismatch. Either the user name provided does not map to an existing user account or the password was incorrect.
The credentials were definitely correct; the customer and I tried different user and password combinations.
I also checked the NPS network policy. When choosing PEAP as the authentication type, the NPS needs a valid server certificate, because the EAP session is protected by a TLS tunnel. A valid certificate was given, in this case a wildcard certificate. A second certificate was also in place, a certificate for the domain controller issued by the internal enterprise CA.
It was an educated guess, but I disabled the server certificate check for the WPA2-Enterprise connection, and the client was able to log in to the WiFi. This clearly showed that the certificate was the problem. But it was valid, all necessary CA certificates were in place, and there was no obvious reason why the certificate should be the cause.
The customer told me that they had installed updates on Friday (today is Monday), and a reboot of the domain controller was issued. This also restarted the NPS service, and with this restart, the wildcard certificate was used for client connections.
I switched to the domain controller certificate, restarted the NPS service, and all Windows clients were again able to connect to the WiFi.
Try to avoid wildcard certificates, or at least check the certificate that is used by the NPS, if you get authentication errors with reason code 16.
Vembu Technologies was founded in 2002, and with 60,000 customers and more than 4,000 partners, Vembu is a leading provider of a comprehensive portfolio of software products and cloud services for small and medium businesses.
Backup is important. There is no reason to have no backup. According to an infographic published by Clutch Research on World Backup Day 2017, 60% of all SMBs that lost all their data will shut down within 6 months after the data loss. Pretty bad, isn’t it?
When I talk to SMB customers, most of them complain about the costs of backups. You need software, you need hardware, and depending on the type of hardware used, you need media. And you should have a second copy of your data. In my opinion, tape is dead for SMB customers. HPE, for example, offers pretty smart disk-based backup solutions, like the HPE StoreOnce.
Vembu is giving away Amazon gift cards through a lucky draw to readers who take part in a short survey.
Vembu BDR Suite comes with a 30-day free trial without restrictions. This gives you the chance to intensively test Vembu BDR Suite prior to purchase.
The free edition lets you choose between unlimited VMs with limited functionality, or unlimited functionality for up to 3 VMs. Check out this comparison of the free, standard and enterprise editions.
Last week, a customer complained that he could not send emails with pictures using the native iOS email app. He attached three, four or five pictures to an email, pushed the send button, and instantly an error was displayed.
We checked the different connectors as well as the organizational limit for messages. The test mails were between 10 and 20 MB, and the message size limit was much higher.
The cross-check with Outlook Web Access indicated that the issue was not a configured limit on one of the Exchange connectors. Instead, a quick search directed us towards the client-specific message size limits. This statement in particular caught our attention:
For any message size limit, you need to set a value that’s larger than the actual size you want enforced. This accounts for the Base64 encoding of attachments and other binary data. Base64 encoding increases the size of the message by approximately 33%, so the value you specify should be approximately 33% larger than the actual message size you want enforced. For example, if you specify a maximum message size value of 64 MB, you can expect a realistic maximum message size of approximately 48 MB.
The message size limit for ActiveSync is 10 MB (Source). This is a server limit which can’t be configured using the Exchange Admin Center. Taking the 33% Base64 overhead into account, the effective message size limit is ~6.5 MB. My customer and I were able to verify this assumption: a 10 MB mail stuck in the outbox, a 6 MB mail was sent.
How to change client-specific message size limits?
In this case, my customer and I only changed the ActiveSync limit. You can use the commands below to change the limit. This will raise the limit to ~67 MB; without the Base64 overhead, this allows message sizes of up to 50 MB. You have to run these commands from an administrative CMD.
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/Microsoft-Server-ActiveSync/" -section:system.webServer/security/requestFiltering /requestLimits.maxAllowedContentLength:69730304
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/Microsoft-Server-ActiveSync/" -section:system.web/httpRuntime /maxRequestLength:68096
%windir%\system32\inetsrv\appcmd.exe set config "Exchange Back End/Microsoft-Server-ActiveSync/" -section:system.webServer/security/requestFiltering /requestLimits.maxAllowedContentLength:69730304
%windir%\system32\inetsrv\appcmd.exe set config "Exchange Back End/Microsoft-Server-ActiveSync/" -section:system.web/httpRuntime /maxRequestLength:68096
%windir%\system32\inetsrv\appcmd.exe set config "Exchange Back End/Microsoft-Server-ActiveSync/" -section:appSettings /[key='MaxDocumentDataSize'].value:69730304
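If you want to double-check a value before restarting IIS, appcmd can read a section back, for example:

%windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/Microsoft-Server-ActiveSync/" -section:system.webServer/security/requestFiltering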
Make sure that you restart IIS after the changes. Run iisreset from an administrative CMD.
Please note that you have to run these commands again after you install an Exchange Server Cumulative Update (CU), because the files in which the changes are made will be overwritten by the CU. This statement is from Microsoft:
Any customized Exchange or Internet Information Server (IIS) settings that you made in Exchange XML application configuration files on the Exchange server (for example, web.config files or the EdgeTransport.exe.config file) will be overwritten when you install an Exchange CU. Be sure to save this information so you can easily re-apply the settings after the install. After you install the Exchange CU, you need to re-configure these settings.
The maximum size for a message sent by Exchange Web Services clients is 64 MB, which is much more than the 10 MB for ActiveSync. This might explain why customers that use the Outlook for iOS app might not notice this issue.
EDIT: Today I found a blog post written by Frank Zöchling in June 2018, which addresses this topic.
To backup a virtual machine, Veeam Backup & Replication needs two permissions:
permission to access and backup the VM, as well as the
permission to do specific tasks inside the VM
to guarantee a consistent backup. The former permission is granted by the user account that is used to access the VMware vCenter server (sorry for the VMware focus at this point). Usually, this account has the Administrator role granted at the vCenter Server level. The latter permission is granted by a user account that has permissions inside the guest operating system.
Something I often see in customer environments is the usage of the Domain Administrator account. But why? Because everything works when this account is used!
There are two reasons for this:
This account is part of the local Administrators group on every server and client, and
Customers tend to grant the Administrator role to the Domain Admins group at the vCenter Server level
In simple words: many customers use the same account to connect to the vCenter and for the application-aware processing of Veeam Backup & Replication. At least for Windows server backups.
Houston, we have a problem!
Everything is fine until customers have to secure their environments. One of the very first things customers do is to protect the Administrator account. And at this point, things might go wrong.
Using a service account to connect to the vCenter server is easy. This can be any account from Active Directory, or from the embedded VMware SSO domain. I tend to create a dedicated AD-based service account. For the necessary permissions in the vCenter, you can grant this account Administrator permissions, or you can create a new user role in the vCenter. Veeam offers a PDF document that lists the necessary permissions for the different Veeam tasks.
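If you go for a custom role, a PowerCLI sketch could look like the following. The role name, the service account and the privilege list are examples only; take the complete privilege list from the Veeam document mentioned above:

# Requires VMware PowerCLI
Connect-VIServer vcenter.example.com

# Create a role with a few of the required privileges (incomplete example!)
$privileges = Get-VIPrivilege -Id 'VirtualMachine.State.CreateSnapshot',
    'VirtualMachine.State.RemoveSnapshot',
    'Datastore.Browse'
$role = New-VIRole -Name 'Veeam Backup' -Privilege $privileges

# Grant the role to the AD-based service account at the top level, with propagation
New-VIPermission -Entity (Get-Folder -NoRecursion) -Principal 'EXAMPLE\svc-veeam' -Role $role -Propagate:$true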
The next challenge is application-aware processing. For Microsoft SQL Server, the user account must have sysadmin privileges on the Microsoft SQL Server. For Microsoft Exchange, the user must be a member of the local Administrators group. But in case of an Active Directory Domain Controller, things get complicated.
A Domain Controller does not have a local user database (SAM). So what user account or group membership is needed to backup a domain controller using application-aware processing?
Permissions: Administrative rights for target Active Directory. Account of an enterprise administrator or domain administrator.
So the service account used to backup a domain controller is one of the most powerful accounts in the Active Directory.
There is no other way. You need a Domain or Enterprise Administrator account. I tend to create a dedicated account for this task.
I recommend creating a service account to connect to the vCenter, which is added to the local Administrators group on the servers to back up, and a dedicated Domain/Enterprise Administrator account to back up the virtual domain controllers.
The advantage is that I can apply different fine-grained password policies to these accounts. Sure, you can add more security by creating more accounts for different servers and applications, adding a dedicated Veeam role to the vCenter, etc. But this approach is easy enough to implement, and it adds a significant amount of user account security to every environment that is still using DOMAIN\Administrator to backup its VMs.
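Such a fine-grained password policy can be created with the ActiveDirectory PowerShell module. A sketch; the policy name, account names and values are examples:

# Requires the ActiveDirectory module (RSAT)
Import-Module ActiveDirectory

# A strict policy for the backup service accounts (example values)
New-ADFineGrainedPasswordPolicy -Name 'PSO-BackupServiceAccounts' -Precedence 10 `
    -MinPasswordLength 30 -PasswordHistoryCount 24 `
    -MaxPasswordAge (New-TimeSpan -Days 180) -ComplexityEnabled $true

# Apply the policy to the service accounts (hypothetical names)
Add-ADFineGrainedPasswordPolicySubject -Identity 'PSO-BackupServiceAccounts' -Subjects 'svc-veeam','svc-veeam-dc'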
One of my customers bought a very nice new backup solution, which consists of a
HPE StoreOnce 5100 with ~ 144 TB usable capacity,
and a new HPE ProLiant DL380 Gen10 with Windows Server 2016
as new backup server. StoreOnce and backup server are connected to the existing SAN and network with 8 Gb Fibre-Channel and 10 GbE. Veeam Backup & Replication 9.5 U3a is already in use, as well as VMware vSphere 6.5 Enterprise Plus. The backend storage is a HPE 3PAR 8200.
This setup allows the usage of Catalyst over Fibre-Channel together with Veeam Storage Snapshots, and this is what we intended to use.
The OS on the StoreOnce was up-to-date (3.16.7), and Windows Server 2016 was installed using HPE Intelligent Provisioning. Afterwards, drivers and firmware were updated using the latest SPP 2018.11. So all drivers and firmware were also up-to-date.
After doing the zoning and some other configuration tasks, I installed Veeam Backup & Replication 9.5 U3a and configured my Catalyst over Fibre-Channel repository. I configured a test backup… and the server failed with a Blue Screen of Death (BSoD), which has become pretty rare since Server 2008 R2.
I did some tests:
backup from 3PAR Storage Snapshots to Catalyst over FC repository – BSoD
backup without 3PAR Storage Snapshots to Catalyst over FC repository – BSoD
backup from 3PAR Storage Snapshots to Catalyst over LAN repository – works fine
backup without 3PAR Storage Snapshots to Catalyst over LAN repository – works fine
backup from 3PAR Storage Snapshots to default repository – works fine
backup without 3PAR Storage Snapshots to default repository – works fine
So the error must have been caused by the usage of Catalyst over Fibre-Channel. I filed a case with HPE, uploaded gigabytes of memory dumps, and heard pretty little during the next week.
HPE StoreOnce Support Matrix FTW!
After a week, I got an email from HPE support with a question about the installed HBA driver and firmware. I told them the version numbers, and a day later I was requested to downgrade (!) drivers and firmware.
The customer has a SN1100Q (P9D93A & P9D94A) HBA in his backup server, and I was requested to downgrade the firmware to version 8.05.61, as well as the driver to 184.108.40.206. And with this firmware and driver version, the backup was running fine (~750 MB/s throughput).
I found the HPE StoreOnce Support Matrix on the SPOCK website from HPE. The matrix confirmed the firmware and driver version requirement.
Fun fact: None of the listed HBAs (except the Synergy HBAs) is supported with the latest StoreOnce G2 products.
You should take a look at those support matrices – always! HPE confirmed that following the first-level recommendation “Have you tried updating to the latest firmware?” can cause problems like this. The fact that the factory ships the server with the latest firmware does not make this easier.