
The one stop solution for backup and DR: Vembu BDR Suite

I have worked with a lot of backup software products during my career, but in the last few years I have primarily worked with Micro Focus Data Protector (formerly HP OmniBack, HP Data Protector, and HPE Data Protector) and Veeam Backup & Replication. Data Protector was a great solution for traditional server environments, or when UNIX (HP-UX, AIX, Solaris etc.) compatibility was required. Features like Zero Downtime Backup, LAN-free or direct SAN backups have been available for many years. But the code quality has suffered severely in recent years, and the product no longer seemed like a one-stop shop. Some months ago, HPE sold its software division to Micro Focus and started to sell Veeam Backup & Replication through its channel. Some months prior to selling the complete software division, HPE acquired Trilead, which is well known to many of us because of their VM Explorer. Sad but true: Data Protector is dead to me.

I think I don’t have to say much about Veeam. Veeam is unbeaten when it comes to virtualized server environments, and they constantly add features and extend their product portfolio. Think of their solution for Office 365, or the Veeam Agent for Windows and Linux.

Why Vembu?

It is always good to have more than one product in the portfolio, simply to give customers a choice between different products. If your only tool is a hammer, everything looks like a nail. That is why I took a closer look at Vembu. I became aware of Vembu because they asked to place an ad on vcloudnine. That was a year ago, so it was obvious to take a look at their products. Furthermore, Vembu and its products were mentioned many times in my Twitter timeline. Two good reasons to take a look at them!

Vembu Technologies was founded in 2002, and with 60,000 customers and more than 4,000 partners, Vembu is a leading provider of a comprehensive portfolio of software products and cloud services for small and medium businesses. We are not talking about a newcomer!

The Vembu BDR Suite

The Vembu BDR Suite is a one-stop solution for all your backup and disaster recovery needs. That is what Vembu says about their own product. The BDR Suite covers

  • Backup and replication of VMs running on VMware vSphere and Microsoft Hyper-V
  • Backup and bare-metal recovery for physical servers and workstations (Windows Server and Desktop)
  • File and application backups of Microsoft Exchange, Microsoft SQL Server, Microsoft SharePoint, Microsoft Active Directory, Microsoft Outlook, and MySQL
  • Creation of backup copies and their transfer to a DR site

Let’s have a more detailed look at the Vembu BDR Suite. This is a picture of the overall architecture.

Vembu Technologies/ Vembu BDR Suite architecture/ Copyright by Vembu Technologies

VMBackup

VMBackup provides fast, efficient and agentless backup for VMs hosted on VMware ESXi and Microsoft Hyper-V. It also provides the capability to replicate virtual machines from one ESXi host to another (VMreplication). You might guess it: this feature is only available for VMware ESXi. In case of Microsoft Hyper-V, you have to use the built-in Hyper-V Replica. The failover and failback of replicated VMs is managed by the BDR backup server. VMBackup offers instant VM recovery, recovery of single files and folders from image-level backups, and recovery of application items from Microsoft Exchange, Microsoft SQL Server, Microsoft SharePoint, and Microsoft Active Directory. The functionality is similar to what you know from other products, like Veeam Backup & Replication or Micro Focus Data Protector. VMBackup is licensed per socket, and it is available in a Standard (~ 150 $ per socket) and an Enterprise (~ 250 $ per socket) edition.

ImageBackup

ImageBackup addresses something that might be extinct for some of us: physical servers, like physical database or file servers. It takes image backups of Windows servers and workstations. This allows customers to restore an entire server or workstation from scratch to the same, or to new, hardware. ImageBackup utilizes the Volume Shadow Copy Service (VSS) to create a consistent backup of a physical machine. Customers can restore a backup to bare metal, restore single files and folders, as well as application items from Microsoft Exchange, Microsoft SQL Server, Microsoft SharePoint, and Microsoft Active Directory. If necessary, the backup can be restored to a supported hypervisor. In other words: P2V migration. ImageBackup is licensed per host, or per application server if you wish to take backups of applications like Microsoft Exchange or SQL Server. ImageBackup for servers costs ~ 150 $, and it is free for workstations.

NetworkBackup

NetworkBackup addresses the backup of files, folders and application data from Windows, Mac and Linux clients. It is designed to protect business data across file servers, application servers, workstations and other endpoints. It does not take an image backup, but full and incremental backups. The feature set and use case of NetworkBackup are similar to “traditional” backup software like Micro Focus Data Protector or ARCServe. NetworkBackup offers intelligent scheduling policies, bandwidth management and flexible retention policies. Clients are not always onsite; to address this, NetworkBackup can store its data in the Vembu Cloud (Vembu Cloud Services). NetworkBackup is licensed per file server (~ 60 $ per server), application server (~ 150 $), or workstation (free).

OffsiteDR

OffsiteDR creates and transfers backup copies to a DR site. Data is transferred immediately when it arrives at the backup server, and it is encrypted in flight using industry-standard AES 256 encryption. WAN optimization is included, which means that data is compressed, encrypted and deduplicated before being replicated to the OffsiteDR server. VMs and files can be recovered directly from the OffsiteDR server, so there is no need to set up a new backup server in case of a disaster recovery. OffsiteDR covers different recovery scenarios, like instant recovery of machines directly from the Vembu OffsiteDR server, bare-metal restores using the Vembu Recovery CD, or restoring a virtual machine to a VMware ESXi or Microsoft Hyper-V host directly from the Vembu OffsiteDR server. OffsiteDR is an add-on to VMBackup, and it is licensed per CPU socket (~ 90 $).

Universal Explorer

The Universal Explorer is used to restore items from various Microsoft applications, like Microsoft Exchange, SQL Server, SharePoint, or Active Directory. An item can be an email, a mailbox, a complete database, a user or group object etc. These items are sourced from image-level backups of physical and virtual machines. You might see some similarities to the Veeam Explorers; both products are comparable.

Recovery CD

The Vembu Recovery CD can be used to recover physical or virtual machines. Drivers for the target platform are injected during the restore. This is pretty handy in case of P2P and V2P migrations.

Licensing & Editions

Vembu offers a subscription and a perpetual license model. The subscription model can be purchased on a monthly or yearly basis, such as 1, 2, 3 or 5 years. It includes 24/7 standard technical support, as well as updates and upgrades throughout the licensed period. The perpetual license model allows you to purchase and use the Vembu BDR Suite by paying a single fee, which includes free maintenance and support for the first year.

Visit the pricing page for more detailed information. A Vembu BDR Suite edition comparison is also available.

Final thoughts

With 60,000 customers and 4,000 partners, Vembu is not a newcomer to the backup business. The product portfolio is quite comprehensive, and the Vembu BDR Suite offers industry-standard features at a very sweet price. I can’t see any feature that an SMB customer might require which is not available. In sum, the Vembu BDR Suite seems to me to be a very good alternative to the top dogs in the backup business, especially if we are talking about SMB customers.

Backup from a secondary HPE 3PAR StoreServ array with Veeam Backup & Replication

When taking a backup with Veeam Backup & Replication, a VM snapshot is created to get a consistent state of the VM. The snapshot is taken prior to the backup, and it is removed after the successful backup of the VM. The snapshot grows during its lifetime, and you should keep in mind that you need some free space in the datastore for snapshots. This can be a problem, especially in case of multiple VM backups at a time, and if the VMs share the same datastore.

Benefit of storage snapshots

If your underlying storage supports the creation of storage snapshots, Veeam offers an additional way to create a consistent state of the VMs. In this case, a storage snapshot is taken, which is presented to the backup proxy and then used to back up the data. As you can see: no VM snapshot is taken.

Now one more thing: if you have replication or a synchronous mirror between two storage systems, Veeam can do this operation on the secondary array. This is pretty cool, because it takes load off your primary storage!

Backup from a secondary HPE 3PAR StoreServ array

Last week I was able to try something new: backup from a secondary HPE 3PAR StoreServ array. A customer has two HPE 3PAR StoreServ 8200s in a Peer Persistence setup, an HPE StoreOnce, and a physical Veeam backup server, which also acts as Veeam proxy. Everything is attached to a pretty nice 16 Gb dual-fabric SAN. The customer uses Veeam Backup & Replication 9.5 U3a. The data was taken from the secondary 3PAR StoreServ and pushed via FC into a Catalyst store on the StoreOnce. Using the Catalyst API allows my customer to use synthetic full backups, because their creation is offloaded to the StoreOnce. This setup is dramatically faster and better than the prior solution based on Micro Focus Data Protector. Okay, that backup solution was designed at another time, with other priorities and requirements. It was a perfect fit at the time it was designed.

This blog post from Veeam pointed me to this new feature: backup from a secondary HPE 3PAR StoreServ array. Until I found this post, the plan was to use “traditional” storage snapshots, taken from the primary 3PAR StoreServ.

With this feature enabled, Veeam takes the snapshot on the 3PAR StoreServ that is hosting the synchronously mirrored virtual volume. This graphic was created by Veeam and shows the backup workflow.

Veeam/ Backup from secondary array/ Copyright by Veeam

My tests showed that it’s blazing fast, pretty easy to set up, and that it takes unnecessary load off the primary storage.

In essence, there are only three steps:

  • add both 3PARs to Veeam
  • add the registry value and restart the Veeam Backup Server Service
  • enable the usage of storage snapshots in the backup job

How to enable this feature?

To enable this feature, you have to add a single registry value on the Veeam backup server, and afterwards restart the Veeam Backup Server service.

  • Location: HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\
  • Name: Hp3PARPeerPersistentUseSecondary
  • Type: REG_DWORD (0 False, 1 True)
  • Default value: 0 (disabled)
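If you prefer scripting, this can be done from an elevated PowerShell session on the backup server. A minimal sketch; the service short name VeeamBackupSvc is an assumption based on a default Veeam installation:

    # Create/ update the registry value (run elevated on the Veeam backup server)
    New-ItemProperty -Path 'HKLM:\SOFTWARE\Veeam\Veeam Backup and Replication' `
        -Name 'Hp3PARPeerPersistentUseSecondary' `
        -PropertyType DWord -Value 1 -Force

    # Restart the Veeam Backup Server service to apply the change
    Restart-Service -Name 'VeeamBackupSvc'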

Thanks to Pierre-Francois from Veeam for sharing his knowledge with the community. Read his blog post Backup from a secondary HPE 3PAR StoreServ array for additional information.

Meltdown & Spectre: What about HPE Storage and Citrix NetScaler?

In addition to my shortcut blog post about Meltdown and Spectre with regard to Microsoft Windows, VMware ESXi and vCenter, and HPE ProLiant, I would like to add some information about HPE Storage and Citrix NetScaler.

When we talk about Meltdown and Spectre, we are talking about three different vulnerabilities:

  • CVE-2017-5715 (branch target injection)
  • CVE-2017-5753 (bounds check bypass)
  • CVE-2017-5754 (rogue data cache load)

CVE-2017-5715 and CVE-2017-5753 are known as “Spectre”, CVE-2017-5754 is known as “Meltdown”. If you want to read more about these vulnerabilities, please visit meltdownattack.com.

Due to the fact that different CPU platforms are affected, one might guess that other devices, like storage systems or load balancers, are affected as well. Because of my focus, this blog post concentrates on HPE Storage and Citrix NetScaler.

HPE Storage

HPE has published a searchable and continuously updated list of products that might be affected (Side Channel Analysis Method allows information disclosure in Microprocessors). Interestingly, a product can be affected, but not vulnerable.

Product | Impacted | Comment
Nimble Storage | Yes | Fix under investigation
StoreOnce | Yes | Not vulnerable – product doesn’t allow arbitrary code execution
3PAR StoreServ | Yes | Not vulnerable – product doesn’t allow arbitrary code execution
3PAR Service Processor | Yes | Not vulnerable – product doesn’t allow arbitrary code execution
3PAR File Controller | Yes | Vulnerable – further information forthcoming
MSA | Yes | Not vulnerable – product doesn’t allow arbitrary code execution
StoreVirtual | Yes | Not vulnerable – product doesn’t allow arbitrary code execution
StoreVirtual File Controller | Yes | Vulnerable – further information forthcoming

The File Controllers are vulnerable because they are based on Windows Server.

So if you are running 3PAR StoreServ, MSA, StoreOnce or StoreVirtual: Relax! If you are running Nimble Storage, wait for a fix.

Citrix NetScaler

Citrix has also published an article with information about their products (Citrix Security Updates for CVE-2017-5715, CVE-2017-5753, CVE-2017-5754).

The article is a bit vague in its statements:

Citrix NetScaler (MPX/VPX): Citrix believes that currently supported versions of Citrix NetScaler MPX and VPX are not impacted by the presently known variants of these issues.

Citrix believes… So nothing to do yet, if you are running MPX or VPX appliances. But future updates might come.

The case is a bit different, when it comes to the NetScaler SDX appliances.

Citrix NetScaler SDX: Citrix believes that currently supported versions of Citrix NetScaler SDX are not at risk from malicious network traffic. However, in light of these issues, Citrix strongly recommends that customers only deploy NetScaler instances on Citrix NetScaler SDX where the NetScaler admins are trusted.

No fix so far, only a recommendation to check your processes and admins.

Checking the 3PAR Quorum Witness appliance

Two 3PAR StoreServs running in a Peer Persistence setup lost the connection to the Quorum Witness appliance. The appliance is an important part of a 3PAR Peer Persistence setup, because it acts as a tie-breaker in a split-brain scenario.

While analyzing this issue, I saw an error message in the 3PAR Management Console.

In addition to that, the customer got e-mails that the 3PAR StoreServ arrays lost the connection to the Quorum Witness appliance. In my case, the CouchDB process died. A restart of the appliance brought it back online.

How to check the Quorum Witness appliance?

You can check the status of the appliance with a simple web request. The documentation shows a simple test based on curl, which you can run directly from the BASH of the appliance.

But you can also use the PowerShell cmdlet Invoke-WebRequest.
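A sketch of such a check; the IP address is a placeholder and the Quorum Witness port 8080 is an assumption, so verify it against your setup:

    # Basic liveness check - the CouchDB behind the Quorum Witness should answer
    Invoke-WebRequest -Uri 'http://192.168.20.20:8080/' -UseBasicParsing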

If you add /witness to the URL, you can test the access to the database, which is used for Peer Persistence.
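Again a sketch, with the same assumed IP address and port:

    # Check the witness database used for Peer Persistence
    Invoke-WebRequest -Uri 'http://192.168.20.20:8080/witness' -UseBasicParsing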

If you get a connection error, check whether the beam process, the Erlang VM that hosts CouchDB, is running.

If not, reboot the appliance. This can be done without downtime, because the appliance only comes into play if a failover occurs.

Why I moved from NFS to vSAN… and why it went wrong

I wanted to retire my Synology DS414slim and switch completely to vSAN. Okay, no big deal. Many folks use vSAN in their lab. But I’d like to explain why I moved to vSAN and why this move failed. I think some of my thoughts are also applicable to customer environments.

So far, I had used a Synology DS414slim with three Crucial M550 480 GB SSDs (RAID 5) as my main lab storage. The Synology was connected with two 1 GbE uplinks (LAG) to my network, and each host was connected with 4x 1 GbE uplinks (single distributed vSwitch). The Synology was okay from the capacity perspective, but the performance was horrible. RAID 5, SSDs and NFS were not the best team, or to be precise, the CPU of the Synology was the main bottleneck.

[Image: Synology DS414slim]

1.2 GHz is not enough if you want to use NFS or iSCSI. I never got more than 60 MB/s (sequential). The random IO performance was okay, but as soon as the IO increased, the latencies went through the roof. Not because the SSDs were too slow, but because the CPU of the Synology was not powerful enough to handle the NFS requests.

Workaround: Add more flash storage

The workaround for the poor random IO performance was adding more flash storage. This time, the flash storage was added to the hosts. I used PernixData FVP to boost my lab. FVP was a quite cool product (unfortunately, “was” is the operative word). PernixData granted me, as a PernixPro, some licenses for my lab.

End of an era

The acquisition of PernixData by Nutanix, the missing support for vSphere 6.5, and the end of availability of all PernixData products led to the decision to remove PernixData FVP from my lab. Without PernixData FVP, my lab was again a slow train crawling up a hill. Four HPE ProLiant servers, with enough CPU (40 cores) and memory resources (384 GB RAM), were tied down by slow IO.

Redistribution of resources

I had

  • three 480 GB SSDs, and
  • three 40 GB SSDs

in stock. The 40 GB SSDs were too small and too slow, so I replaced them with 120 GB SSDs. I was able to equip three of my four hosts with SSDs. Three hosts with flash storage were enough to give VMware vSAN a try.

Fortunately, not all hosts have to contribute capacity to a vSAN cluster; hosts can also simply consume storage from it. With this in mind, vSAN appeared to be a way out of my IO dilemma. In addition, using the 480 GB SSDs as the capacity tier, a vSAN all-flash config was possible.

Migration

It took me a little time to move around VMs to temporary locations, while keeping my DC and my VCSA available. I had to remove my datastore on the Synology to free up the 480 GB SSDs. The necessary vSAN licenses were granted by VMware (vExpert licenses).

The creation of the vSAN cluster itself was easy. Fortunately, wiping partitions from disks is easy. You can use the vSphere Web Client to do this.

[Screenshot: wiping disk partitions via the vSphere Web Client]

The initial performance was quite good, much better than expected and much better than the NFS performance of the old Synology NAS. I enabled deduplication and compression, but as soon as I moved VMs to the vSAN datastore, the throughput dropped and latencies went through the roof. It was totally unusable. Furthermore, I got health alarms:

[Screenshot: vSAN congestion health alarm]

As the load increased, the errors became more severe.

[Screenshot: vSAN performance health alarm]

I was able to solve this with a blog post by Cormac Hogan (VSAN 6.1 New Feature – Handling of Problematic Disks). But even without compression and deduplication, the performance was not as expected, and most of the time too low to work with. At this point, I got an idea of what was causing my vSAN problems.

Do not use consumer-grade hardware with vSAN

To be honest: the budget was the problem. I had to use consumer-grade SSDs.

This is a screenshot from the vSAN Observer. esx1 to esx3 are equipped with SSDs, esx4 is only consuming storage from the vSAN cluster.

[Screenshot: vSAN Observer performance view]

Red is not the color to highlight good things…

An attempt at an explanation

This blog post by Duncan Epping (Why Queue Depth matters!) is a bit older, but still valid in my case. The controller I use (HPE Smart Array P410i) has a deep queue (1011), and the RAID device has a queue length of 1024, but the SATA SSDs have only a queue depth of 32. Here’s the disk adapter and disk device view of ESXTOP.

The consumer-grade SSDs drowned in IOs, unable to handle parallel read and write operations. There’s not much that I can do. Currently there are two options:

  • Replacing the SSDs with devices that have a deeper queue depth
  • Replacing the Synology NAS with a more powerful NAS and moving back to NFS

I don’t know which way I will go. To make this clear:

  • This is my lab, not a customer environment
  • It is not a vSAN-related problem
  • It is because of consumer-grade hardware

Do not try this in production, kids. Go vSAN, but please use the right hardware.

HPE 3PAR OS updates that fix VMware VAAI ATS Heartbeat issue

Customers that use HPE 3PAR StoreServs with 3PAR OS 3.2.1 or 3.2.2 and VMware ESXi 5.5 U2 or later might notice one or more of the following symptoms:

  • hosts lose connectivity to a VMFS5 datastore
  • hosts disconnect from the vCenter
  • VMs hang during I/O operations
  • you see messages like these in the vobd.log or the vCenter Events tab

  • you see the following messages in the vmkernel.log

Interestingly, not only HPE is affected by this; multiple vendors have the same issue. VMware describes this issue in KB2113956. HPE has published a customer advisory about it.

Workaround

If you are in trouble and you can’t update yet, you can use this workaround: disable ATS heartbeat for VMFS5 datastores. VMFS3 datastores are not affected by this issue. To disable ATS heartbeat, you can use this PowerCLI one-liner:
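A sketch of such a one-liner, based on the VMFS3.UseATSForHBOnVMFS5 advanced setting described in KB2113956 (the cluster name is a placeholder):

    # Disable ATS heartbeat on all hosts of a cluster (0 = off, 1 = on)
    Get-Cluster 'Cluster01' | Get-VMHost |
        Get-AdvancedSetting -Name 'VMFS3.UseATSForHBOnVMFS5' |
        Set-AdvancedSetting -Value 0 -Confirm:$false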

Solution

But there is also a solution: most vendors have published firmware updates for their products. HPE has released

  • 3PAR OS 3.2.2 MU3
  • 3PAR OS 3.2.2 EMU2 P33, and
  • 3PAR OS 3.2.1 EMU3 P45

All three releases of 3PAR OS include enhancements to improve the ATS heartbeat handling. Because 3PAR OS 3.2.2 also includes some nice enhancements for Adaptive Optimization, I recommend updating to 3PAR OS 3.2.2.

HPE StoreVirtual – Managers and Quorum

HPE StoreVirtual is a scale-out storage platform that is designed to meet the needs of virtualized environments. It’s based on LeftHand OS, and because the magic is a piece of software, HPE StoreVirtual is available as HPE ProLiant/ BladeSystem-based hardware, or as a Virtual Storage Appliance (VSA) for VMware ESXi, Microsoft Hyper-V and KVM. It comes with an all-inclusive enterprise feature set, which provides

  • Storage clustering
  • Network RAID
  • Thin Provisioning (with support for space reclamation)
  • Snapshots
  • Asynchronous and synchronous replication across multiple sites
  • Automated software upgrades and self-healing storage
  • Adaptive Optimization (Tiering)

The license is always all-inclusive. There is no need to license individual features.

HPE StoreVirtual is not a new product. Hewlett-Packard acquired LeftHand Networks in 2008. The product has had several names since then (HP LeftHand, HP P4000, and for the last couple of years StoreVirtual), but the core intelligence, LeftHand OS, has been constantly developed by HPE. There are rumours that HPE StoreOnce Recovery Manager Central will be available for StoreVirtual soon.

Management Groups & Clusters

A management group is a collection of multiple (at least one) StoreVirtual P4000 storage systems or StoreVirtual VSAs. A management group represents the highest administrative domain. Administrative users, NTP and e-mail notification settings are configured at the management group level. Clusters are created per management group, and a management group can consist of multiple clusters. A cluster represents a pool of storage from which volumes are created. A volume spans all nodes of a cluster. Depending on the Network RAID level, multiple copies of data are distributed over the storage systems in a cluster. Capacity and IO are expanded by adding more storage systems to a cluster.

As in every cluster, there are mechanisms to ensure the function of the cluster in case of node failures. This is where managers and quorums come into play.

Managers & Quorums

HPE StoreVirtual is a scale-out storage platform. Multiple storage systems form a cluster. As in every cluster, availability must be maintained if one or more cluster nodes fail. To maintain availability, a majority of managers must be running and able to communicate with each other. This majority is called a quorum. This is nothing new: Windows failover clusters can also use a node majority to gain quorum, and the same applies to OpenVMS clusters.

A manager is a service running on a storage system. This service is running on multiple storage systems within a cluster, and therefore in a management group. A manager has several functions:

  • Monitor the data replication and the health of the storage systems
  • Resynchronize data after a storage system failure
  • Manage and monitor communication between storage systems in the cluster
  • Coordinate configuration changes (one storage system is the coordinating manager)

This manager is called a “regular manager”. Regular managers run on storage systems. The number of managers is counted per management group. You can have up to 5 managers per management group; even if you have multiple storage systems and clusters per management group, you can’t have more than 5 managers running on storage systems. Sounds like a problem, but it’s not. If you have three 3-node clusters in a single management group, you can start managers on 5 of the 9 storage systems. Even if two storage systems fail, the remaining three managers gain a quorum. But if the quorum is lost, all clusters in the management group become unavailable.

I have two StoreVirtual VSAs running in my lab. As you can see, the management group contains two regular managers, and vsa1 is the coordinating manager.

[Screenshot: regular managers in the management group]

There are also specialized managers, of which there are three types:

  • Failover Manager (FOM)
  • Quorum Witness (NFS)
  • Virtual Manager

A FOM is a special version of LeftHand OS, and its primary function is to act as a tie breaker in split-brain scenarios. It’s added to a management group. It is mainly used if an even number of storage systems is used in a cluster, or in case of multi-site deployments.

The Quorum Witness was added with LeftHand OS 12.5 and can only be used in 2-node cluster configurations. It’s added to the management group, and it uses a file on an NFS share to provide high availability. Like the FOM, the Quorum Witness is used as the tie breaker in the event of a failure.

The Virtual Manager is the third type of specialized manager. It can be added to a management group, but it’s not active until it is needed to regain quorum. It can be used to regain quorum and maintain access to data in a disaster recovery situation. But you have to start it manually, and you can’t add it once the quorum is lost!

As you can see in this screenshot, I use the Quorum Witness in my tiny 2-node cluster.

[Screenshot: Quorum Witness in a 2-node cluster]

Regardless of the number of storage systems in a management group, you should use an odd number of managers. An odd number of managers ensures that a majority is easily maintained. In case of an even number of managers, you should add a FOM. I don’t recommend adding a Virtual Manager.

# of storage systems | # of managers
1 | 1 regular manager
2 | 2 regular managers + 1 specialized manager
3 | 3 regular managers, or 2 regular managers + 1 FOM or Virtual Manager
4 | 3 regular managers, or 4 regular managers + 1 FOM or Virtual Manager
5 or more | 5 regular managers, or 4 regular managers + 1 FOM or Virtual Manager

In case of a multi-site deployment, I really recommend placing a FOM at a third site. I know that this isn’t always possible. If you can’t deploy it to a third site, place it at the “primary site”. A multi-site deployment is characterized by the fact that the storage systems of a cluster are located at different sites. But it’s still a single cluster! This can lead to a situation where a site failure causes the quorum to be lost. Think about a 4-node cluster with two nodes per site: in case of a site failure, the remaining two nodes wouldn’t gain quorum (split-brain situation). A FOM at a third site would help to gain quorum in case of a site failure. If you have multiple clusters in a management group, balance the managers across the clusters; I recommend adding a FOM. If you have clusters at multiple sites (a primary and a DR site with remote copy), ensure that the majority of managers are at the primary site.

Final words

It is important to understand how managers, quorum, management groups and clusters are linked. Network RAID protects data by storing multiple copies across the storage systems in a cluster. Depending on the chosen Network RAID level, you can lose disks or even multiple storage systems. But never forget to have a sufficient number of managers (regular and specialized). If the quorum can’t be maintained, access to the data will be lost. It’s not sufficient to focus on data protection; the availability of, or more specifically, the access to the data is at least as important. If you follow the guidelines, you will get a rock-solid, high-performance scale-out storage platform.

I recommend listening to Calvin Zito’s podcast (7 Years of 100% uptime with StoreVirtual VSA) and reading Bart Heungens’ blog post about his experience with HPE StoreVirtual VSA (100% uptime for 7 years with StoreVirtual VSA? Check!).

HPE StoreVirtual REST API

Representational State Transfer (REST) APIs are all the rage. REST was defined by Roy Thomas Fielding in his PhD dissertation “Architectural Styles and the Design of Network-based Software Architectures“. The architectural style of REST describes six constraints:

  • Uniform interface
  • Stateless
  • Cacheable
  • Client – Server communication
  • Layered system
  • Code on demand

RESTful APIs typically use HTTP and HTTP verbs (GET, POST, PUT, DELETE, etc.) to send data to, or retrieve data from remote systems. To do so, REST APIs use Uniform Resource Identifiers (URIs) to interact with remote systems. Thus, a client can interact with a remote system over a REST API using standard HTTP URIs and HTTP verbs. For the data transfer, common internet media types, like JSON or XML are used. It’s important to understand that REST is not a standard per se. But most implementations make use of standards such as HTTP, URI, JSON or XML.

Because of the uniform interface, you have different choices for the client. I will use PowerShell and the Invoke-RestMethod cmdlet in my examples.

HPE StoreVirtual REST API

With the release of LeftHand OS 11.5 (the latest release is 12.6), HPE added a REST API for management and storage provisioning. Due to a re-engineered management stack, the REST API is significantly faster than the same task processed on the CLI or in the Centralized Management Console (CMC). It’s perfect for automation and scripting, and it allows customers to achieve a higher level of automation and operational simplicity. The StoreVirtual REST API uses JavaScript Object Notation (JSON) for data transfer between the client and the StoreVirtual management group. With the REST API, you can

  • Read, create, and modify volumes
  • Create and delete snapshots
  • Create, modify, and delete servers
  • Grant and revoke access of servers to volumes

I use two StoreVirtual VSAs (LeftHand OS 12.6) in my lab. Everything I show in this blog post is based on LeftHand OS 12.6.

The REST API in LeftHand OS 12.6 uses:

  • HTTPS 1.1
  • media types application/JSON
  • Internet media types application/schema+JSON
  • UTF-8 character encoding

RESTful APIs typically use HTTP and HTTP verbs (GET, POST, PUT, DELETE, etc.). In case of the StoreVirtual REST API:

  • GET is used to retrieve an object. No body is necessary.
  • PUT is used to update an object. The information to update the object is sent within the body.
  • POST is used to create an object, or to invoke an action or event. The necessary information is sent within the body.
  • DELETE is used to delete an object.

The entry point for all REST API calls is /lhos, starting from a node.
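For example, assuming the default REST API port 8081 (the node name is a placeholder from my lab; check the REST API Reference Guide for the port used by your release):

    https://vsa1.lab.local:8081/lhos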

Subsequent resources are relative to this base URI. Resources are:

Resource path | Description
/lhos/managementGroup | Management group entity
/lhos/clusters | Cluster collection
/lhos/cluster/<id> | Cluster entity
/lhos/credentials | Credentials collection
/lhos/credentials/<session token> | Credentials entity
/lhos/servers | Server collection
/lhos/servers/<id> | Server entity
/lhos/snapshots | Snapshot collection
/lhos/snapshots/<id> | Snapshot entity
/lhos/volumes | Volume collection
/lhos/volumes/<id> | Volume entity

The object model of the StoreVirtual REST API uses

  • Collections, and
  • Entities

to address resources. An entity is used to address individual resources, whereas a collection is a group of individual resources. Resources can be addressed by using a URI.

Exploring the API

First of all, we need to authenticate. Without a valid authentication token, no REST API queries can be made. To create a credential entity, we have to use the POST method.

$cred is a hash table which includes the username and the password. This hash table is converted to JSON with the ConvertTo-Json cmdlet, and the JSON data is used as the body of our query. The result is an authentication token.
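A minimal sketch of the whole request, reusing the entry point from above; the property names user, password and authToken are illustrative assumptions, so check the REST API Reference Guide for the exact schema:

    # Build the JSON body with username and password (property names assumed)
    $cred = @{ user = 'admin'; password = 'password' } | ConvertTo-Json

    # POST to the credentials collection to create a credential entity
    $session = Invoke-RestMethod -Method Post `
        -Uri 'https://vsa1.lab.local:8081/lhos/credentials' `
        -Body $cred -ContentType 'application/json'

    # Keep the token for all subsequent requests (property name assumed)
    $token = $session.authToken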

This authentication token must be used for all subsequent API queries. This query retrieves a collection of all valid sessions.

The GET method is used, and the authentication token is sent in the header of the request.
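A sketch, with the header name Authorization being an assumption:

    # GET the credentials collection, sending the token in the request header
    Invoke-RestMethod -Method Get `
        -Uri 'https://vsa1.lab.local:8081/lhos/credentials' `
        -Headers @{ Authorization = $token }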

To retrieve an individual credential entity, the URI of the entity must be used.

The result of this query is the individual credential entity.

It’s important to know that a session that has not been used for 15 minutes is automatically removed. The same applies to constantly active sessions: after 24 hours, the credential entity is automatically removed.

Let’s try to create a volume. The information about the new volume has to be sent within the body of our request. We again use the ConvertTo-Json cmdlet to convert a hash table with the necessary information to JSON.
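A sketch of the volume creation, reusing the session from above; the property names and the cluster id are illustrative assumptions:

    # Volume of 1 GiB (size in bytes) on a cluster with the assumed id 2
    $vol = @{
        name              = 'api-vol-01'
        size              = 1073741824
        clusterId         = 2
        isThinProvisioned = $true
    } | ConvertTo-Json

    Invoke-RestMethod -Method Post `
        -Uri 'https://vsa1.lab.local:8081/lhos/volumes' `
        -Headers @{ Authorization = $token } `
        -Body $vol -ContentType 'application/json'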

The size must be specified in bytes. As a result, Invoke-RestMethod outputs the newly created volume entity.

Using the CMC, we can confirm that the volume was successfully created.

[Screenshot: the new volume in the CMC]

Since we have a volume, we can create a snapshot. To create a snapshot, we need to invoke an action on the volume entity. We have to use the POST method and the URI of our newly created volume.
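A sketch; the volume id, the action name and the parameter names are illustrative assumptions:

    # Invoke the snapshot action on the volume entity (id 3381 is a placeholder)
    $snap = @{
        action     = 'createSnapshot'
        parameters = @{ name = 'api-vol-01_ss1'; description = 'created via REST API' }
    } | ConvertTo-Json

    Invoke-RestMethod -Method Post `
        -Uri 'https://vsa1.lab.local:8081/lhos/volumes/3381' `
        -Headers @{ Authorization = $token } `
        -Body $snap -ContentType 'application/json'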

In case of a successful query, Invoke-RestMethod returns the result of the snapshot action.

Again, we can use the CMC to confirm the success of our operation.

[Screenshot: the new snapshot in the CMC]

To delete the snapshot, the DELETE method and the URI of the snapshot entity must be used.
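A sketch, with a placeholder snapshot id:

    # DELETE the snapshot entity
    Invoke-RestMethod -Method Delete `
        -Uri 'https://vsa1.lab.local:8081/lhos/snapshots/3382' `
        -Headers @{ Authorization = $token }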

To confirm the successful deletion of the snapshot, the GET method can be used. The GET method will retrieve a collection of all snapshot entities.

The result will show no members inside of the snapshot collection.

At the end of the day, we remove our credential entity, because it is no longer used. To delete the credential entity, we use the DELETE method with the URI of our credential entity.
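A sketch, assuming the session token is also the id of the credential entity in the URI:

    # DELETE the credential entity to end the session
    Invoke-RestMethod -Method Delete `
        -Uri "https://vsa1.lab.local:8081/lhos/credentials/$token" `
        -Headers @{ Authorization = $token }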

The next query should fail, because the credential entity is no longer valid.

HTTPS workaround

The StoreVirtual API is only accessible over HTTPS. By default, the StoreVirtual nodes use an untrusted HTTPS certificate, which causes Invoke-RestMethod to fail.

After a little research, I found a workaround. This workaround uses the System.Security.Cryptography.X509Certificates namespace. You can use this snippet to build a function or add it to a try-catch block.
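This is the well-known certificate policy workaround for Windows PowerShell; it accepts any certificate, so it belongs in lab environments only. A sketch:

    # C# certificate policy that accepts any HTTPS certificate (lab use only!)
    $source = 'using System.Net; using System.Security.Cryptography.X509Certificates; ' +
        'public class TrustAllCertsPolicy : ICertificatePolicy { ' +
        '  public bool CheckValidationResult(ServicePoint srvPoint, X509Certificate certificate, ' +
        '    WebRequest request, int certificateProblem) { return true; } }'
    Add-Type -TypeDefinition $source

    # Activate the policy for the current PowerShell session
    [System.Net.ServicePointManager]::CertificatePolicy = New-Object -TypeName TrustAllCertsPolicy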

Final words

The StoreVirtual REST API is really handy. It can be used to perform all important tasks, it’s perfect for automation, and it’s faster than the CLI. I’ve used PowerShell in my examples, but I’ve successfully tested it with Python as well. Make sure to take a look into the HPE StoreVirtual REST API Reference Guide.

PowerCLI: Get-LunPathState

Careful preparation is a key element of success. If you restart a storage controller, or even the whole storage system, you should be very sure that all ESXi hosts have enough paths to every datastore. Sure, you can use the VMware vSphere C# client or the Web Client to check every host and every datastore. But if you have a large cluster with a dozen datastores and some raw device mappings (RDMs), this can take a looooong time. Checking the path state of each LUN is a task that can be perfectly automated: get a list of all hosts, loop through every host and every LUN, and output a list of all hosts with all LUNs and all paths per LUN. Sounds easy, right?

For a long time, I used this PowerCLI script for checking the LUN path state. But now I decided to give something back and I tweaked it a bit for my needs.

Feel free to use and/or modify it.
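Here is a minimal sketch of such a check (not my full script): it loops through all connected hosts and their disk LUNs, and counts the active and dead paths per LUN.

    # List every disk LUN on every connected host with its path states
    Get-VMHost | Sort-Object -Property Name | ForEach-Object {
        $esx = $_
        Get-ScsiLun -VmHost $esx -LunType disk | ForEach-Object {
            $paths = Get-ScsiLunPath -ScsiLun $_
            [PSCustomObject]@{
                VMHost        = $esx.Name
                CanonicalName = $_.CanonicalName
                Paths         = $paths.Count
                Active        = @($paths | Where-Object { $_.State -eq 'Active' }).Count
                Dead          = @($paths | Where-Object { $_.State -eq 'Dead' }).Count
            }
        }
    } | Format-Table -AutoSize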

Using HP StoreOnce as target for Windows Server Backup (WSB)

Some days ago, I blogged about the new HP StoreOnce software release 3.13.0. This release included several fixes. One fix wasn’t mentioned by me, although it’s interesting:

  • Fixed issue where Windows 2012 R2 built-in native backup was not supported with 3.12.x software (BZ 61232)

Windows Server Backup (WSB) has been part of Windows Server since Windows Server 2008. WSB can create bare-metal backups and recover those backups. The same applies to system state backups, file-level backups, Hyper-V VMs, Exchange etc. Very handy for small environments. Backups can be stored on disk or on a file share. Since Windows Server 2012, the file share must be SMB3-capable. So if it’s not a Windows file server, the NAS that offers the file share has to be SMB3-capable. This doesn’t apply to Windows Server 2008 (R2).

With StoreOnce 3.13.0, HP has fixed this. Starting with 3.13.0, you can use a CIFS share on a StoreOnce appliance as a target for Windows Server Backup. This allows you to take advantage of the benefits of StoreOnce, like its industry-leading deduplication and replication technology.
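Once the share is configured, a one-time bare-metal backup could look like this (the share path is a placeholder; add the -user and -password options if the share requires authentication):

    # One-time backup of all critical volumes (bare-metal recovery capable)
    # to a CIFS share on the StoreOnce appliance
    wbadmin start backup -backupTarget:\\storeonce01\wsb_backups -allCritical -quiet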

I was able to test this new feature with StoreOnce VSA appliances in my lab, as well as with a customer’s StoreOnce 4700 appliance.

Download your free copy of the HP StoreOnce Free 1 TB VSA today and give it a try!