
Backup from a secondary HPE 3PAR StoreServ array with Veeam Backup & Replication

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

When taking a backup with Veeam Backup & Replication, a VM snapshot is created to get a consistent state of the VM. The snapshot is taken prior to the backup and removed after the VM has been successfully backed up. The snapshot grows during its lifetime, so you should keep in mind that you need some free space in the datastore for snapshots. This can become a problem, especially when multiple VMs are backed up at the same time and share the same datastore.

Benefit of storage snapshots

If your underlying storage supports the creation of storage snapshots, Veeam offers an additional way to create a consistent state of the VMs. In this case, a storage snapshot is taken, presented to the backup proxy, and then used to back up the data. As you can see: the VM snapshot only has to exist for a few seconds, until the storage snapshot has been created, so it never gets the chance to grow.

Now one more thing: If you have a replication or synchronous mirror between two storage systems, Veeam can do this operation on the secondary array. This is pretty cool, because it takes load off your primary storage!

Backup from a secondary HPE 3PAR StoreServ array

Last week I was able to try something new: backup from a secondary HPE 3PAR StoreServ array. A customer has two HPE 3PAR StoreServ 8200 in a Peer Persistence setup, an HPE StoreOnce, and a physical Veeam backup server, which also acts as Veeam proxy. Everything is attached to a pretty nice 16 Gb dual-fabric SAN. The customer uses Veeam Backup & Replication 9.5 U3a. The data was taken from the secondary 3PAR StoreServ and pushed via FC into a Catalyst store on the StoreOnce. Using the Catalyst API allows my customer to use synthetic full backups, because their creation is offloaded to the StoreOnce. This setup is dramatically faster and better than the prior solution based on Micro Focus Data Protector. Okay, that backup solution was designed at another time, with other priorities and requirements. It was a perfect fit at the time it was designed.

This blog post from Veeam pointed me to this new feature: backup from a secondary HPE 3PAR StoreServ array. Until I found this post, the plan was to use “traditional” storage snapshots, taken from the primary 3PAR StoreServ.

With this feature enabled, Veeam takes the snapshot on the 3PAR StoreServ that is hosting the synchronously mirrored virtual volume. The following graphic was created by Veeam and shows the backup workflow.

Figure: Backup process (image copyright Veeam)

My tests showed that it’s blazing fast, pretty easy to set up, and it takes unnecessary load off the primary storage.

In essence, there are only three steps to do:

  • add both 3PARs to Veeam
  • add the registry value and restart the Veeam Backup Server Service
  • enable the usage of storage snapshots in the backup job

How to enable this feature?

To enable this feature, you have to add a single registry value on the Veeam backup server, and afterwards restart the Veeam Backup Server service.

  • Location: HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\
  • Name: Hp3PARPeerPersistentUseSecondary
  • Type: REG_DWORD (0 = false, 1 = true)
  • Default value: 0 (disabled)
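
If you prefer the command line over regedit, the value can also be set with reg.exe from an elevated prompt. This is a minimal sketch; the service name VeeamBackupSvc is an assumption based on a default installation:

C:\> reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v Hp3PARPeerPersistentUseSecondary /t REG_DWORD /d 1 /f
C:\> net stop VeeamBackupSvc && net start VeeamBackupSvc

Setting the value back to 0, or removing it, restores the default behavior.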

Thanks to Pierre-Francois from Veeam for sharing his knowledge with the community. Read his blog post Backup from a secondary HPE 3PAR StoreServ array for additional information.

Veeam backup fails because of time differences

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Last week I had an interesting incident at a customer. The customer reported that one of multiple Veeam backup jobs constantly failed.


The backup job included two VMs, and the backup of one of these VMs failed with this error:

Error: Failed to open VDDK disk [[VMDS-SAS-01] VMDC1/VMDC1_1.vmdk] ( is read-only mode - [true] ) 
Failed to open virtual disk Logon attempt with parameters [VC/ESX: [vcenter.domain.tld];Port: 443;Login: 
[AD\Administrator];VMX Spec: [moref=vm-59];Snapshot mor: [snapshot-20226];Transports: [san];Read Only: [true]]
failed because of the following errors: Failed to open virtual disk Logon attempt with parameters 
VC/ESX: [vcenter.domain.tld];Port: 443;Login: [AD\Administrator

I verified the credentials used for that job, but re-entering the password did not solve the issue. I then checked the Veeam backup logs located under %ProgramData%\Veeam\Backup (look for Agent.Job_Name.Source.VM_Name.vmdk.log) and found VDDK error 3014:

Insufficient permissions in the host operating system
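
If you have to check more than a handful of jobs, searching all agent logs at once is faster than opening them one by one. A sketch using plain findstr against the log directory mentioned above:

C:\> findstr /s /i "VDDK" "%ProgramData%\Veeam\Backup\*.log"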

The user that was used to connect to the vCenter was an Active Directory account. The account was granted administrator privileges at the root of the vCenter. Switching from the AD account to Administrator@vsphere.local solved the issue. Next stop: the vmware-sts-idmd.log on the vCenter Server Appliance. The errors found in this log confirmed my theory that there was an issue with the authentication itself, not with the AD account.

[2018-07-04T11:59:49.848+02:00 vsphere.local        142f5216-8316-4752-b02c-e02be4154816 INFO ] [VmEventAppender] EventLog: source=[VMware Identity Server], tenant=[vsphere.local], eventid=[USER_NAME_PWD_AUTH_FAILED], level=[ERROR], category=[VMEVENT_CATEGORY_IDM], text=[Failed to authenticate principal [AD\Administrator]. Native platform error [code: 851968][null][null]], detailText=[com.vmware.identity.interop.idm.IdmNativeException: Native platform error [code: 851968][null][null]
[2018-07-04T11:59:49.848+02:00 vsphere.local        142f5216-8316-4752-b02c-e02be4154816 ERROR] [IdentityManager] Failed to authenticate principal [AD\Administrator]. Native platform error [code: 851968][null][null]
com.vmware.identity.interop.idm.IdmNativeException: Native platform error [code: 851968][null][null]

[2018-07-04T12:10:41.603+02:00 vsphere.local        64051ea1-0d7f-453d-8e34-92f0c8c37e77 INFO ] [IdentityManager] Authentication succeeded for user [AD\Administrator] in tenant [vsphere.local] in [37] milliseconds with provider [ad.domain.tld] of type [com.vmware.identity.idm.server.provider.activedirectory.ActiveDirectoryProvider]
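
These entries are easy to pull out of the log on the appliance itself, e.g. with a simple grep for the event ID (the log directory is the same one that shows up in the shell prompt below):

vcenter:/storage/log/vmware/sso # grep USER_NAME_PWD_AUTH_FAILED vmware-sts-idmd.log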

To make a long story short: time differences. The vCenter, the ESXi hosts, and some servers had the wrong time. The vCenter and the ESXi hosts were using the domain controllers as time source.

This is the ntpq output of the vCenter. Note the jitter values on the right side for both time sources, noted in milliseconds.

vcenter:/storage/log/vmware/sso # ntpq
ntpq> peer
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
vmdc2.ad.       192.168.16.11    2 u   53   64  363    0.532  207.553 152007.
vmdc1.ad.       .LOCL.           1 u    2   64  377    0.257  204.559 161964.

After some investigation, the root cause seemed to be a bad DCF77 receiver, which was connected to the domain controller hosting the PDC Emulator role. The DCF77 receiver was attached using a USB-to-LAN converter. Instead of using a DCF77 receiver, the customer and I implemented an NTP hierarchy using a valid NTP source on the internet (pool.ntp.org).
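
On the domain controller holding the PDC Emulator role, pointing the Windows Time service at external NTP sources boils down to a few w32tm commands. A minimal sketch from an elevated prompt; the concrete pool addresses are just an example:

C:\> w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
C:\> net stop w32time && net start w32time
C:\> w32tm /query /status

The rest of the domain, as well as the ESXi hosts and the vCenter, can then keep using the domain controllers as time source, as before.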