Category Archives: Software

High CPU usage on Citrix ADC VPX

While building a small Citrix NetScaler… ehm… ADC VPX (I really hate this name…) lab environment, I noticed that the fan of my Lenovo T480s was spinning up. I was wondering why, because the VPX VM had only been running for a couple of minutes – without any load. But the Task Manager told me that the VMware Workstation process was consuming 25% CPU (I have an Intel i5 quad-core CPU). So VMware Workstation was eating a whole CPU core without doing anything. I would not care, but the fan… And it reminded me that I had seen a similar behaviour in various VPX deployments on VMware ESXi.

Fifaliana/ pixabay.com/ Creative Commons CC0

A quick search led me to this Citrix Support Knowledge Center article: High CPU Usage on NetScaler VPX Reported on VMware ESXi Version 6.0. That’s exactly what I had observed.

The solution is setting the parameter cpuyield to yes.
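On the VPX CLI this is a one-liner; the set ns vpxparam syntax is taken from the KB article quoted further down:

set ns vpxparam -cpuyield YES

show ns vpxparam should display the current setting afterwards.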

The VPX does not need a reboot. Shortly after setting the parameter, the fan stopped spinning. Have I mentioned how much I love silence on my desk? I’m pretty happy that my T480s is a really quiet laptop.

But what is this parameter used for? In pretty simple words: it controls whether CPU cycles that are allocated to the VPX, but not used, are released for other VMs. Until ADC VPX 11.1, the VPX shared its CPU with other VMs. This changed with ADC VPX 12.0. Since that release, the VPX has been like a child that plays with its favorite toy just to make sure that no other child can play with it. Not very polite…

This is a quote from the Support Knowledge Center article:

Set ns vpxparam parameters:
-cpuyield: Release or do not release of allocated but unused CPU resources.

YES: Allow allocated but unused CPU resources to be used by another VM.

NO: Reserve all CPU resources for the VM to which they have been allocated. This option shows higher percentage in hypervisor for VPX CPU usage.
DEFAULT: NO

I don’t think that I would change this in production. But for lab environments, especially if you run the VPX on VMware Workstation, I would set -cpuyield to yes.

Using Let’s Encrypt DNS-01 challenge validation with local BIND instance

I have been using Let’s Encrypt certificates for a while now. In the past, I used the standalone plugin (TLS-SNI-01) to get or renew my certificates. But now I have switched to the DNS plugin. I run my own name servers with BIND, so getting this plugin to work was very low-hanging fruit.

Clker-Free-Vector-Images/ pixabay.com/ Creative Commons CC0

To get or renew a certificate, you need to provide some kind of proof that you are requesting the certificate for a domain that is under your control. No certificate authority (CA) wants to be the CA that hands you a certificate for google.com or amazon.com…

The DNS-01 challenge uses TXT records in order to validate your ownership of a certain domain. During the challenge, the Automatic Certificate Management Environment (ACME) server of Let’s Encrypt will give you a value that uniquely identifies the challenge. This value has to be added as a TXT record to the zone of the domain for which you are requesting a certificate. The record will look like this:
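(Domain and token below are just placeholders, of course.)

_acme-challenge.example.com. 300 IN TXT "nq7w9Rg-PLACEHOLDER-ChallengeToken-Aa1Bb2Cc3"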

This record is for a wildcard certificate. If you want to get a certificate for a host, you can add one or more TXT records like this:
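(Again placeholder values; this one would be for a certificate for www.example.com.)

_acme-challenge.www.example.com. 300 IN TXT "z5Kd2-PLACEHOLDER-ChallengeToken-Dd4Ee5Ff6"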

There is an IETF draft about the ACME protocol. A pretty interesting read!

Configure BIND for DNS-01 challenges

I run my own name servers with BIND on FreeBSD. The plugin for certbot automates the whole DNS-01 challenge process by creating, and subsequently removing, the necessary TXT records from the zone file using RFC 2136 dynamic updates.

First of all, we need a new TSIG (Transaction SIGnature) key. This key is used to authorize the updates.
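A sketch of the key generation, assuming BIND’s dnssec-keygen is used (newer BIND versions also ship tsig-keygen for this job); the key name certbot. is just an example:

dnssec-keygen -a HMAC-SHA512 -b 512 -n HOST certbot.

This creates a Kcertbot.+165+NNNNN.key and a .private file, both containing the secret.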

This key has to be added to named.conf. The key itself is in the .key file.
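The key statement looks like this (key name as generated above, the secret is a placeholder):

key "certbot." {
        algorithm hmac-sha512;
        secret "UGxhY2Vob2xkZXItc2VjcmV0LWZyb20tdGhlLS5rZXktZmlsZQ==";
};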

The key is used to authorize the update of certain records. To allow the update of the TXT records, which are needed for the challenge, add this to the zone part of your named.conf.
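A sketch, assuming the key name certbot., the zone example.com and a FreeBSD-typical zone file path; one grant line is needed per challenge record name:

zone "example.com" {
        type master;
        file "/usr/local/etc/namedb/dynamic/example.com.zone";
        update-policy {
                grant certbot. name _acme-challenge.example.com. txt;
        };
};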

The records always start with _acme-challenge.domainname.

Now you need to create a config file for the RFC2136 plugin. This file includes the key, as well as the IP address of the name server. If the name server is running on the same host that performs the DNS-01 challenge, you can use 127.0.0.1 as the name server address.
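A minimal example (the secret is a placeholder; key name and algorithm must match the TSIG key created above):

# Target DNS server and port
dns_rfc2136_server = 127.0.0.1
dns_rfc2136_port = 53
# TSIG key used for the dynamic updates
dns_rfc2136_name = certbot.
dns_rfc2136_secret = UGxhY2Vob2xkZXItc2VjcmV0LWZyb20tdGhlLS5rZXktZmlsZQ==
dns_rfc2136_algorithm = HMAC-SHA512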

Now we have everything in place. Time for a --dry-run on one of my FreeBSD machines.
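The command behind such a dry run looks roughly like this (domain names and the credentials path are placeholders):

certbot certonly --dns-rfc2136 \
  --dns-rfc2136-credentials /usr/local/etc/letsencrypt/rfc2136.ini \
  --dns-rfc2136-propagation-seconds 30 \
  -d example.com -d www.example.com --dry-run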

This is a snippet from the name server log file at the time of the challenge.

You might need to modify the permissions of the directory which contains the zone files. Usually the name server is not running as root. In my case, I had to grant write permissions to the “bind” group. Otherwise you might get a “permission denied”.

 

Powering on a VM with shared VMDK fails after extending an EagerZeroedThick VMDK

I hope that you are not reading this blog post while searching for a solution for a failed cluster. If so, feel free to leave a comment if this blog post saved your evening or weekend. :)

Last Friday, a change at one of my customers went horribly wrong. I was not onsite, but they contacted me during the night from Friday to Saturday, because their most important Windows Server Failover Cluster was unable to start after extending a shared VMDK.

cripi/ pixabay.com/ Creative Commons CC0

They tried something pretty simple: extending a virtual disk of a VM. That is something most of us do pretty often. The customer had also done it pretty often. It was a well-known task… except for the fact that the VM was part of a Windows Server Failover Cluster. With shared VMDKs. And the disks were EagerZeroedThick, because this is a requirement for shared VMDKs.

They extended the disk using the vSphere Web Client. And at this point, the change was doomed to fail. They tried to power on the VMs, but all they got was this error:

VMware ESX cannot open the virtual disk, “/vmfs/volumes/4c549ecd-66066010-e610-002354a2261b/VMNAME/VMDKNAME.vmdk” for clustering. Please verify that the virtual disk was created using the ‘thick’ option.

A shared VMDK is a VMDK in multiwriter mode. This VMDK has to be created as Thick Provision Eager Zeroed. And if you wish to extend this VMDK, you must use vmkfstools with the option -d eagerzeroedthick. If you extend the VMDK using the Web Client, the extended portion of the disk will become LazyZeroed!
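The extend operation on the ESXi shell looks like this (size and path are examples, of course):

vmkfstools -X 120G -d eagerzeroedthick /vmfs/volumes/DATASTORE/VMNAME/VMDKNAME.vmdk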

VMware has described this behaviour in KB1033570 (Powering on the virtual machine fails with the error: Thin/TBZ disks cannot be opened in multiwriter mode). There is also a blog post by Cormac Hogan of VMware describing this behaviour.

That’s a screenshot from the failed cluster. Check out the type of the disk (Thick-Provision Lazy-Zeroed).

Patrick Terlisten/ vcloudnine.de/ Creative Commons CC0

You must use vmkfstools to extend a shared VMDK – but vmkfstools is also the solution if you have fallen into this pitfall. Clone the VMDK with the option -d eagerzeroedthick.
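A clone like this (paths are examples) gives you a new, fully eager-zeroed copy of the disk, which can then be attached to the cluster nodes again:

vmkfstools -i /vmfs/volumes/DATASTORE/VMNAME/VMDKNAME.vmdk -d eagerzeroedthick /vmfs/volumes/DATASTORE/VMNAME/VMDKNAME_new.vmdk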

Another solution, which was new to me, is to use Storage vMotion. You can migrate the “broken” VMDK to another datastore and change the disk format during the Storage vMotion. This solution is described in the “Notes” section of KB1033570.

Both ways will fix the problem. The result will be a Thick Provision Eager Zeroed VMDK, which will allow the VMs to be successfully powered on.

Office 365 – Outlook keeps prompting for password

This is only a short blog post to document a solution for a very annoying problem. After the automatic update of my Outlook to the latest Office 365 build (version 1809), it started prompting for credentials. I’m using Outlook to access a Microsoft Exchange 2016 server (on-premises), without any hybrid configuration. A pretty simple and plain Exchange 2016 on-prem deployment.

I knew that it had to be related to Office 365, because Outlook 2016 on my PC at the office was not affected – only the two Office 365 deployments on my ThinkPad T480s and ThinkPad X250.

To make a long story short: ExcludeExplicitO365Endpoint is the key! You have to add a DWORD under HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Outlook\AutoDiscover, or under the equivalent Policies key:

HKEY_CURRENT_USER\SOFTWARE\Policies\Microsoft\office\16.0\outlook\autodiscover
DWORD: ExcludeExplicitO365Endpoint
Value = 1
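If you prefer the command line, a reg add in the context of the affected user should do the trick:

reg add HKCU\SOFTWARE\Policies\Microsoft\office\16.0\outlook\autodiscover /v ExcludeExplicitO365Endpoint /t REG_DWORD /d 1 /f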

Restart your computer and the annoying credential prompts are gone.

Vembu BDR Essentials – affordable backup for SMB customers

It is common that vendors offer their products in special editions for SMB customers. VMware offers VMware vSphere Essentials and Essentials Plus, Veeam offers Veeam Backup Essentials, and now Vembu has published Vembu BDR Essentials.

Vembu Technologies/ Vembu BDR Essentials/ Copyright by Vembu Technologies

Backup is important. There is no reason to have no backup. According to an infographic published by Clutch Research on World Backup Day 2017, 60% of all SMBs that lose all their data will shut down within 6 months after the data loss. Pretty bad, isn’t it?

When I talk to SMB customers, most of them complain about the costs of backups. You need software, you need hardware, and depending on the type of hardware used, you need media. And you should have a second copy of your data. In my opinion, tape is dead for SMB customers. HPE, for example, offers pretty smart disk-based backup solutions, like the HPE StoreOnce. But hardware is nothing without software. And at this point, Vembu BDR Essentials comes into play.

Affordable backup for SMB customers

Most SMB virtualization deployments consist of two or three hosts, which means 4 or 6 used CPU sockets. Because of this, Vembu BDR Essentials supports up to 6 sockets or 50 VMs. But why does Vembu limit the number of sockets and VMs? You might have missed the OR. Customers can choose which limit they want to accept: they are either limited at the host level (max. 6 sockets), but not in the number of VMs, or they can use more than 6 sockets, but are then limited to 50 VMs.

Feature Highlights

Vembu BDR Essentials supports all important features:

  • Agentless VMBackup to backup VMs
  • Continuous Data Protection with support for RPOs of less than 15 minutes
  • Quick VM Recovery to get failed VMs up and running in minutes
  • Vembu Universal Explorer to restore individual items from Microsoft applications like Exchange, SharePoint, SQL and Active Directory
  • Replication of VMs with Vembu OffsiteDR and Vembu CloudDR

Needless to say that Vembu BDR Essentials supports VMware vSphere and Microsoft Hyper-V. If necessary, customers can upgrade to the Standard or Enterprise edition.

To get more information about the different Vembu BDR parts, take a look at my last Vembu blog post: The one stop solution for backup and DR: Vembu BDR Suite

The pricing

Now the fun part – the pricing. Customers can save up to 50% compared to the Vembu BDR Suite.

Vembu Technologies/ Vembu BDR Essentials Pricing/ Copyright by Vembu Technologies

The licenses for Vembu BDR Essentials are available in two models:

  • Subscription, and
  • Perpetual

Subscription licenses are available for 1, 2, 3 and 5 years. The perpetual license is valid for 10 years from the date of purchase. The subscription licensing has the benefit that it includes 24×7 technical support. If you purchase the perpetual license, the Annual Maintenance Cost (AMC) for the first year is free. From the second year on, it is 20% of the license cost, and it is available for 1, 2 or 3 years.

There is no excuse for not having a backup

With Vembu BDR Essentials, there is no more excuse for not having a competitive backup solution protecting your business! The pricing fits any SMB customer, regardless of their size or business. The rich feature set is competitive with other vendors, and both leading hypervisors are supported.

A pretty nice product. Try it for free! Vembu also offers a free edition that might fit small environments. The free edition lets you choose between unlimited VMs with limited functionality, or unlimited functionality for up to 3 VMs. Check out this comparison of the Free, Standard and Enterprise editions.

The one stop solution for backup and DR: Vembu BDR Suite

This posting is ~1 year old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I have worked with a lot of backup software products during my career, but for the last years I have primarily worked with MicroFocus Data Protector (formerly HP OmniBack, HP Data Protector, or HPE Data Protector) and Veeam Backup & Replication. Data Protector was a great solution for traditional server environments, or when UNIX (HP-UX, AIX, Solaris etc.) compatibility was required. Features like Zero Downtime Backup, LAN-free or Direct SAN backups have been available for many years. But its code quality has suffered severely in recent years. The product no longer seemed like a one-stop shop. Some months ago, HPE sold its software division to MicroFocus and started to sell Veeam Backup & Replication through its channel. Some months prior to selling the complete software division, HPE acquired Trilead, which is well known to many of us because of their VM Explorer. Sad but true: Data Protector is dead to me.

I think I don’t have to say much about Veeam. Veeam is unbeaten when it comes down to virtualized server environments, and they constantly add features and extend their product portfolio. Think about their solution for Office 365, or the Veeam Agent for Windows and Linux.

Why Vembu?

It is always good to have more than one product in the portfolio, just to give customers the choice between different products. If your only tool is a hammer, everything looks like a nail. That is why I took a closer look at Vembu. I became aware of Vembu because they asked to place an ad on vcloudnine. That was a year ago, so it was obvious to take a look at their products. Furthermore, Vembu and its products were mentioned many times in my Twitter timeline. Two good reasons to take a look at them!

Vembu Technologies was founded in 2002, and with 60,000 customers and more than 4,000 partners, Vembu is a leading provider of a comprehensive portfolio of software products and cloud services for small and medium businesses. We are not talking about a newcomer!

The Vembu BDR Suite

The Vembu BDR Suite is a one-stop solution for all your backup and disaster recovery needs. That is what Vembu says about their own product. The BDR Suite covers

  • Backup and replication of VMs running on VMware vSphere and Microsoft Hyper-V
  • Backup and bare-metal recovery for physical servers and workstations (Windows Server and Desktop)
  • File and application backups of Microsoft Exchange, Microsoft SQL Server, Microsoft SharePoint, Microsoft Active Directory, Microsoft Outlook, and MySQL
  • Creation of backup copies and their transfer to a DR site

Let’s have a more detailed look at the Vembu BDR Suite. This is a picture of the overall architecture.

Vembu Technologies/ Vembu BDR Suite architecture/ Copyright by Vembu Technologies

VMBackup

VMBackup provides fast, efficient and agentless backup for VMs hosted on VMware ESXi and Microsoft Hyper-V. It also provides the capability to replicate virtual machines from one ESXi host to another ESXi host (VMreplication). You might guess it – this feature is only available for VMware ESXi. In the case of Microsoft Hyper-V, you have to use the built-in Hyper-V replication. The failover and failback of replicated VMs is managed by the BDR Backup Server. VMBackup offers instant VM recovery, recovery of single files and folders from image-level backups, and recovery of application items from Microsoft Exchange, Microsoft SQL Server, Microsoft SharePoint, and Microsoft Active Directory. The functionality is similar to what you know from other products, like Veeam Backup & Replication or MicroFocus Data Protector. VMBackup is licensed per socket, and it is available in a Standard (~ 150 $ per socket) and an Enterprise (~ 250 $ per socket) edition.

ImageBackup

ImageBackup addresses something that might be extinct for some of us: physical servers, like physical database or file servers. It can take image backups of Windows servers and workstations. This allows customers to restore an entire server or workstation from scratch to the same, or to new, hardware. ImageBackup utilizes the Volume Shadow Copy Service (VSS) to create a consistent backup of a physical machine. Customers can restore a backup to bare metal, restore single files and folders, as well as application items from Microsoft Exchange, Microsoft SQL Server, Microsoft SharePoint, and Microsoft Active Directory. If necessary, the backup can be restored to a supported hypervisor. In other words: a P2V migration. ImageBackup is licensed per host, or per application server if you wish to take backups of applications like Microsoft Exchange or SQL Server. ImageBackup for servers costs ~ 150 $, and it is free for workstations.

NetworkBackup

NetworkBackup addresses the backup of files, folders and application data from Windows, Mac and Linux clients. It is designed to protect business data across file servers, application servers, workstations and other endpoints. It does not take an image backup, but full and incremental backups. The feature set and use case of NetworkBackup are similar to “traditional” backup software like MicroFocus Data Protector or ARCserve. NetworkBackup offers intelligent scheduling policies, bandwidth management and flexible retention policies. Clients are not always onsite; to address this, NetworkBackup can store its data in the Vembu Cloud (Vembu Cloud Services). NetworkBackup is licensed per file server (~ 60 $ per server), application server (~ 150 $), or workstation (free).

OffsiteDR

OffsiteDR creates and transfers backup copies to a DR site. Data is immediately transferred when it arrives at the backup server. The data is encrypted in flight using industry-standard AES 256 encryption. WAN optimization is included, which means that data is compressed, encrypted and deduplicated before being replicated to the OffsiteDR server. The recovery of VMs and files can be done directly from the OffsiteDR server, so there is no need to set up a new backup server in case of a disaster recovery. OffsiteDR covers different recovery scenarios, like instantly recovering machines directly from the Vembu OffsiteDR server, bare-metal restores using the Vembu Recovery CD, or restoring a virtual machine to a VMware ESXi or Microsoft Hyper-V server directly from the Vembu OffsiteDR server. OffsiteDR is an add-on to VMBackup, and it is licensed per CPU socket (~ 90 $).

Universal Explorer

The Universal Explorer is used to restore items from various Microsoft applications, like Microsoft Exchange, SQL Server, SharePoint, or Active Directory. An item can be an email, a mailbox, a complete database, a user or group object, etc. These items are sourced from image-level backups of physical and virtual machines. You might see some similarities to Veeam Explorer. Both products are comparable.

Recovery CD

The Vembu Recovery CD can be used to recover physical or virtual machines. Drivers for the target platform are injected during the restore. This is pretty handy in case of P2P and V2P migrations.

Licensing & Editions

Vembu offers a subscription and a perpetual license model. The subscription model can be purchased on a monthly or yearly basis, e.g. for 1, 2, 3 or 5 years. It includes 24/7 standard technical support, updates and upgrades throughout the licensed period. The perpetual licensing model allows you to purchase and use the Vembu BDR Suite by paying a single fee. This includes free maintenance and support for the first year.

Visit the pricing page for more detailed information. A Vembu BDR Suite edition comparison is also available.

Final thoughts

With 60,000 customers and 4,000 partners, Vembu is not a newcomer in the backup business. The product portfolio is quite comprehensive. The Vembu BDR Suite offers industry-standard features at a very sweet price. I can’t see any feature that an SMB customer might require which is not available. In sum, the Vembu BDR Suite seems to me to be a very good alternative to the top dogs in the backup business, especially if we are talking about SMB customers.

Backup from a secondary HPE 3PAR StoreServ array with Veeam Backup & Replication

This posting is ~1 year old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

When taking a backup with Veeam Backup & Replication, a VM snapshot is created to get a consistent state of the VM. The snapshot is taken prior to the backup, and it is removed after the successful backup of the VM. The snapshot grows during its lifetime, and you should keep in mind that you need some free space in the datastore for snapshots. This can be a problem, especially in the case of multiple VM backups at a time, and if the VMs share the same datastore.

Benefit of storage snapshots

If your underlying storage supports the creation of storage snapshots, Veeam offers an additional way to create a consistent state of the VMs. In this case, a storage snapshot is taken, which is presented to the backup proxy and then used to back up the data. As you can see: no VM snapshot is taken.

Now one more thing: if you have replication or a synchronous mirror between two storage systems, Veeam can do this operation on the secondary array. This is pretty cool, because it takes load off your primary storage!

Backup from a secondary HPE 3PAR StoreServ array

Last week I was able to try something new: backup from a secondary HPE 3PAR StoreServ array. A customer has two HPE 3PAR StoreServ 8200 arrays in a Peer Persistence setup, an HPE StoreOnce, and a physical Veeam backup server, which also acts as Veeam proxy. Everything is attached to a pretty nice 16 Gb dual-fabric SAN. The customer uses Veeam Backup & Replication 9.5 U3a. The data was taken from the secondary 3PAR StoreServ and pushed via FC into a Catalyst store on the StoreOnce. Using the Catalyst API allows my customer to use synthetic full backups, because their creation is offloaded to the StoreOnce. This setup is dramatically faster and better than the prior solution based on MicroFocus Data Protector. Okay, that backup solution was designed at another time, with other priorities and requirements. It was a perfect fit at the time it was designed.

This blog post from Veeam pointed me to this new feature: Backup from a secondary HPE 3PAR StoreServ array. Until I found this post, it was planned to use “traditional” storage snapshots, taken from the primary 3PAR StoreServ.

With this feature enabled, Veeam takes the snapshot on the 3PAR StoreServ that is hosting the synchronously mirrored virtual volume. This graphic was created by Veeam and shows the backup workflow.

Veeam/ Backup process/ Copyright by Veeam

My tests showed that it’s blazing fast, pretty easy to set up, and that it takes unnecessary load off the primary storage.

In essence, there are only three steps to do:

  • add both 3PARs to Veeam
  • add the registry value and restart the Veeam Backup Server Service
  • enable the usage of storage snapshots in the backup job

How to enable this feature?

To enable this feature, you have to add a single registry value on the Veeam backup server, and afterwards restart the Veeam Backup Server service.

  • Location: HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\
  • Name: Hp3PARPeerPersistentUseSecondary
  • Type: REG_DWORD (0 False, 1 True)
  • Default value: 0 (disabled)
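A quick sketch of how this looks in PowerShell (the service name VeeamBackupSvc is an assumption on my side, better check it with Get-Service first):

New-ItemProperty -Path "HKLM:\SOFTWARE\Veeam\Veeam Backup and Replication" -Name Hp3PARPeerPersistentUseSecondary -PropertyType DWord -Value 1
Restart-Service -Name VeeamBackupSvc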

Thanks to Pierre-Francois from Veeam for sharing his knowledge with the community. Read his blog post Backup from a secondary HPE 3PAR StoreServ array for additional information.

CloudFlare API v4 and Fail2ban: Fixing the unban action

This posting is ~1 year old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

In January 2017, I wrote an article about how to protect your WordPress blog using the WP Fail2Ban plugin, fail2ban on your Linux/ FreeBSD host, and CloudFlare. Back then, the fail2ban action was using the CloudFlare API v1, which had already been deprecated since November 2016.

Free-Photos/ pixabay.com/ Creative Commons CC0

Although the actions were later updated to use the CloudFlare API v4, I still had problems with the unbanning of IP addresses. IP addresses were banned, but the unban action failed.

This is the unban action, which is included in fail2ban (taken from fail2ban-0.10.3.1 which is shipped with FreeBSD 11.1-RELEASE-p10):

And this is the unban action, which finally solved this issue:

I found the solution at serverfault.com. The only difference is an additional tr -d '\n' in the last line of the statement. Kudos to Jake for fixing this!
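I can’t reproduce the full action file here, but the pattern of the fix looks roughly like this (a simplified sketch, not the exact statement shipped with fail2ban; <ip>, <cfuser> and <cftoken> are the usual tags from cloudflare.conf, and the sed-based id extraction is simplified):

# look up the id of the blocking rule for <ip>, strip the newlines from the
# pretty-printed JSON answer (that is the fix), extract the id and delete the rule
id=$(curl -s -X GET "https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules?mode=block&configuration.target=ip&configuration.value=<ip>" \
       -H "X-Auth-Email: <cfuser>" -H "X-Auth-Key: <cftoken>" -H "Content-Type: application/json" \
     | tr -d '\n' \
     | sed -e 's/.*"result":[[:space:]]*\[[[:space:]]*{[[:space:]]*"id":[[:space:]]*"\([^"]*\)".*/\1/')
curl -s -X DELETE "https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules/$id" \
  -H "X-Auth-Email: <cfuser>" -H "X-Auth-Key: <cftoken>" -H "Content-Type: application/json"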

To prevent the action file from being overwritten, you should copy the original cloudflare.conf located in the action.d directory, e.g. to mycloudflare.conf, and use the copied action file in your jail definition.

Windows Network Policy Server (NPS) server won’t log failed login attempts

This posting is ~1 year old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

This is just a short, but interesting blog post. When you have to troubleshoot authentication failures in a network that uses Windows Network Policy Server (NPS), the Windows event log is absolutely indispensable. The event log offers everything you need. The success and failure event log entries include all the necessary information to get you back on track. Provided that failure events are logged at all…

geralt/ pixabay.com/ Creative Commons CC0

Today, I was playing with Alcatel-Lucent Enterprise OmniSwitches and Access Guardian in my lab. Access Guardian refers to a set of OmniSwitch security functions that work together to provide a dynamic, proactive network security solution:

  • Universal Network Profile (UNP)
  • Authentication, Authorization, and Accounting (AAA)
  • Bring Your Own Device (BYOD)
  • Captive Portal
  • Quarantine Manager and Remediation (QMR)

I plan to publish some blog posts about Access Guardian in the future, because it is a pretty interesting topic. So stay tuned. :)

802.1X was no big deal, but MAC-based authentication failed. Okay, let’s take a look at the event log of the NPS… okay, there are the success events for my 802.1X authentication… but where are the failed login attempts? Not a single one was logged. A short Google search pointed me in the right direction.

Failed logon/ logoff events were not logged

In this case, the NPS role was installed on a Windows Server 2016 domain controller. And it was a German installation, so the output of the commands is also in German. If you have an OS installed in English, you must replace “Netzwerkrichtlinienserver” with “Network Policy Server”.

Right-click the PowerShell icon and run it as Administrator. Check the current settings:
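The check is done with auditpol (on an English installation, the subcategory is called “Network Policy Server”):

auditpol /get /subcategory:"Netzwerkrichtlinienserver"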

In my case, only successful logon and logoff events were logged.

The options /success:enable /failure:enable activate the logging of successful and failed logon and logoff attempts.
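So the fix is a single line (again, replace the subcategory name on an English installation):

auditpol /set /subcategory:"Netzwerkrichtlinienserver" /success:enable /failure:enable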

Veeam backups fails because of time differences

This posting is ~1 year old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Last week I had an interesting incident at a customer. The customer reported that one of multiple Veeam backup jobs constantly failed.

jarmoluk/ pixabay.com/ Creative Commons CC0

The backup job included two VMs, and the backup of one of these VMs failed with this error:

They verified the credentials used for that job, but re-entering the password did not solve the issue. I then checked the Veeam backup logs located under %ProgramData%\Veeam\Backup (look for the Agent.Job_Name.Source.VM_Name.vmdk.log) and found VDDK error 3014:

The user that was used to connect to the vCenter was an Active Directory account. The account was granted administrator privileges at the root of the vCenter. Switching from the AD account to Administrator@vsphere.local solved the issue. Next stop: the vmware-sts-idmd.log on the vCenter Server Appliance. The error found in this log confirmed my theory that there was an issue with the authentication itself, not with the AD account.

To make a long story short: time differences. The vCenter, the ESXi hosts and some servers had the wrong time. The vCenter and the ESXi hosts were using the domain controllers as their time source.

This is the ntpq output of the vCenter. You might notice the offset and jitter values on the right side, both noted in milliseconds.

After some investigation, the root cause seemed to be a bad DCF77 receiver, which was connected to the domain controller that was hosting the PDC Emulator role. The DCF77 receiver was connected using a USB-to-LAN converter. Instead of using a DCF77 receiver, the customer and I implemented an NTP hierarchy using a valid NTP source on the internet (pool.ntp.org).
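On the domain controller holding the PDC Emulator role, such an NTP hierarchy is typically configured with w32tm – a sketch, not necessarily the exact commands we used:

w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org 3.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
w32tm /resync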