
HPE StoreVirtual – Managers and Quorum

HPE StoreVirtual is a scale-out storage platform designed to meet the needs of virtualized environments. It’s based on LeftHand OS, and because the magic is a piece of software, HPE StoreVirtual is available as HPE ProLiant/BladeSystem-based hardware, or as a Virtual Storage Appliance (VSA) for VMware ESXi, Microsoft Hyper-V and KVM. It comes with an all-inclusive enterprise feature set, which provides:

  • Storage clustering
  • Network RAID
  • Thin Provisioning (with support for space reclamation)
  • Snapshots
  • Asynchronous and synchronous replication across multiple sites
  • Automated software upgrades and self-healing storage
  • Adaptive Optimization (Tiering)

The license is always all-inclusive. There is no need to license individual features.

HPE StoreVirtual is not a new product. Hewlett-Packard acquired LeftHand Networks in 2008. The product has had several names since then (HP LeftHand, HP P4000, and for the last couple of years StoreVirtual), but the core intelligence, LeftHand OS, has been continuously developed by HPE. There are rumours that HPE StoreOnce Recovery Manager Central will be available for StoreVirtual soon.

Management Groups & Clusters

A management group is a collection of one or more StoreVirtual P4000 storage systems or StoreVirtual VSAs. A management group represents the highest administrative domain. Administrative users, NTP and e-mail notification settings are configured at the management group level. Clusters are created per management group, and a management group can contain multiple clusters. A cluster represents a pool of storage from which volumes are created. A volume spans all nodes of a cluster. Depending on the Network RAID level, multiple copies of the data are distributed over the storage systems in a cluster. Capacity and IO are expanded by adding more storage systems to a cluster.

As in every cluster, there are mechanisms to keep the cluster functional in case of node failures. This is where managers and quorums come into play.

Managers & Quorums

HPE StoreVirtual is a scale-out storage platform. Multiple storage systems form a cluster. As in every cluster, availability must be maintained if one or more cluster nodes fail. To maintain availability, a majority of managers must be running and able to communicate with each other. This majority is called a “quorum”. This is nothing new: Windows Failover Clusters can also use a node majority to gain quorum, and the same applies to OpenVMS clusters.

A manager is a service running on a storage system. This service runs on multiple storage systems within a cluster, and therefore within a management group. A manager has several functions:

  • Monitor the data replication and the health of the storage systems
  • Resynchronize data after a storage system failure
  • Manage and monitor communication between storage systems in the cluster
  • Coordinate configuration changes (one storage system is the coordinating manager)

This manager is called a “regular manager”. Regular managers run on storage systems. The number of managers is counted per management group: you can have up to five managers per management group. Even if you have multiple storage systems and clusters per management group, you can’t have more than five managers running on storage systems. This sounds like a problem, but it isn’t. If you have two 3-node clusters in a single management group, you can start managers on five of the six storage systems. Even if two of these storage systems fail, the remaining three managers still gain a quorum. But if the quorum is lost, all clusters in the management group become unavailable.

I have two StoreVirtual VSAs running in my lab. As you can see, the management group contains two regular managers, and vsa1 is the coordinating manager.


There are also specialized managers. There are three types of specialized managers:

  • Failover Manager (FOM)
  • Quorum Witness (NFS)
  • Virtual Manager

A FOM is a special version of LeftHand OS, and its primary function is to act as a tie-breaker in split-brain scenarios. It’s added to a management group. It is mainly used if an even number of storage systems is used in a cluster, or in multi-site deployments.

The Quorum Witness was added with LeftHand OS 12.5. It can only be used in 2-node cluster configurations. It’s added to the management group and uses a file on an NFS share to provide high availability. Like the FOM, the Quorum Witness acts as the tie-breaker in the event of a failure.

The Virtual Manager is the third type of specialized manager. It can be added to a management group, but it’s not active until it is needed to regain quorum. It can be used to regain quorum and maintain access to data in a disaster recovery situation. But you have to start it manually. And you can’t add it once the quorum is already lost!

As you can see in this screenshot, I use the Quorum Witness in my tiny 2-node cluster.


Regardless of the number of storage systems in a management group, you should use an odd number of managers. An odd number of managers ensures that a majority is easily maintained. In case of an even number of managers, you should add a FOM. I don’t recommend adding a Virtual Manager.

# of storage systems | # of managers
1                    | 1 regular manager
2                    | 2 regular managers + 1 specialized manager
3                    | 3 regular managers, or 2 regular managers + 1 FOM or Virtual Manager
4                    | 3 regular managers, or 4 regular managers + 1 FOM or Virtual Manager
5 or more            | 5 regular managers, or 4 regular managers + 1 FOM or Virtual Manager

In case of a multi-site deployment, I really recommend placing a FOM at a third site. I know that this isn’t always possible. If you can’t deploy it to a third site, place it at the “primary site”. A multi-site deployment is characterized by the fact that the storage systems of a cluster are located in different locations, but it’s still a single cluster! This can lead to a situation where a site failure causes the quorum to be lost. Think about a 4-node cluster with two nodes per site: if one site fails, the remaining two nodes wouldn’t gain quorum (a split-brain situation). A FOM at a third site would allow the surviving nodes to gain quorum in case of a site failure. If you have multiple clusters in a management group, balance the managers across the clusters; I recommend adding a FOM. If you have clusters at multiple sites (a primary and a DR site with Remote Copy), ensure that the majority of managers is at the primary site.

Final words

It is important to understand how managers, quorum, management groups and clusters are linked. Network RAID protects the data by storing multiple copies across the storage systems in a cluster. Depending on the chosen Network RAID level, you can lose disks or even multiple storage systems. But never forget to have a sufficient number of managers (regular and specialized). If the quorum can’t be maintained, access to the data becomes unavailable. It’s not sufficient to focus on data protection. The availability of, or more specifically, the access to the data is at least as important. If you follow these guidelines, you will get a rock-solid, high-performance scale-out storage platform.

I recommend listening to Calvin Zito’s podcast (7 Years of 100% uptime with StoreVirtual VSA) and reading Bart Heungens’ blog post about his experience with HPE StoreVirtual VSA (100% uptime for 7 years with StoreVirtual VSA? Check!).

HPE StoreVirtual REST API

Representational State Transfer (REST) APIs are all the rage. REST was defined by Roy Thomas Fielding in his PhD dissertation “Architectural Styles and the Design of Network-based Software Architectures“. The architectural style of REST describes six constraints:

  • Uniform interface
  • Stateless
  • Cacheable
  • Client – Server communication
  • Layered system
  • Code on demand (optional)

RESTful APIs typically use HTTP and HTTP verbs (GET, POST, PUT, DELETE, etc.) to send data to, or retrieve data from, remote systems. To do so, REST APIs use Uniform Resource Identifiers (URIs) to interact with remote systems. Thus, a client can interact with a remote system over a REST API using standard HTTP URIs and HTTP verbs. For the data transfer, common internet media types like JSON or XML are used. It’s important to understand that REST is not a standard per se, but most implementations make use of standards such as HTTP, URI, JSON or XML.

Because of the uniform interface, you have different choices for a client. I will use PowerShell and the Invoke-RestMethod cmdlet in my examples.

HPE StoreVirtual REST API

With the release of LeftHand OS 11.5 (the latest release is 12.6), HPE added a REST API for management and storage provisioning. Due to a re-engineered management stack, the REST API is significantly faster than the same task processed on the CLI or using the Centralized Management Console (CMC). It’s perfect for automation and scripting, and it allows customers to achieve a higher level of automation and operational simplicity. The StoreVirtual REST API uses JavaScript Object Notation (JSON) for data transfer between the client and the StoreVirtual management group. With the REST API, you can:

  • Read, create, and modify volumes
  • Create and delete snapshots
  • Create, modify, and delete servers
  • Grant and revoke access of servers to volumes

I use two StoreVirtual VSAs (LeftHand OS 12.6) in my lab. Everything I show in this blog post is based on LeftHand OS 12.6.

The REST API in LeftHand OS 12.6 uses:

  • HTTP 1.1 over SSL/TLS (HTTPS)
  • the media type application/JSON
  • the internet media type application/schema+JSON
  • UTF-8 character encoding

RESTful APIs typically use HTTP and HTTP verbs (GET, POST, PUT, DELETE, etc.). In case of the StoreVirtual REST API:

  • GET is used to retrieve an object. No body is necessary.
  • PUT is used to update an object. The information to update the object is sent within the body.
  • POST is used to create an object, or to invoke an action or event. The necessary information is sent within the body.
  • DELETE is used to delete an object.

The entry point for all REST API calls is /lhos, starting from a node.
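The base URI therefore looks like this (the node name and the API port 8081 are from my lab; verify the port for your release in the HPE StoreVirtual REST API Reference Guide):

```
https://vsa1.lab.local:8081/lhos
```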

Subsequent resources are relative to this base URI. Resources are:

Resource path                     | Description
/lhos/managementGroup             | Management group entity
/lhos/clusters                    | Cluster collection
/lhos/cluster/<id>                | Cluster entity
/lhos/credentials                 | Credentials collection
/lhos/credentials/<session token> | Credentials entity
/lhos/servers                     | Server collection
/lhos/servers/<id>                | Server entity
/lhos/snapshots                   | Snapshot collection
/lhos/snapshots/<id>              | Snapshot entity
/lhos/volumes                     | Volume collection
/lhos/volumes/<id>                | Volume entity

The object model of the StoreVirtual REST API uses

  • Collections, and
  • Entities

to address resources. An entity is used to address individual resources, whereas a collection is a group of individual resources. Resources can be addressed by using a URI.

Exploring the API

First of all, we need to authenticate. Without a valid authentication token, no REST API queries can be made. To create a credential entity, we have to use the POST method.

$cred is a hash table which includes the username and the password. This hash table is converted to JSON format with the ConvertTo-Json cmdlet. The JSON data is used as the body of our query. The result is an authentication token.
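A minimal sketch of this call, as I use it in my lab (the node name, port, credentials and the exact property names are assumptions from my lab scripts — verify them against the HPE StoreVirtual REST API Reference Guide):

```powershell
# Hash table with the credentials of a management group user (lab values)
$cred = @{
    user     = "admin"
    password = "Passw0rd!"
}

# Create a credential entity via POST; the body is the hash table converted to JSON
$response = Invoke-RestMethod -Method Post `
    -Uri "https://vsa1.lab.local:8081/lhos/credentials" `
    -Body ($cred | ConvertTo-Json) `
    -ContentType "application/json"

# The authentication token for all subsequent queries
$token = $response.authToken
```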

This authentication token must be used for all subsequent API queries. This query retrieves a collection of all valid sessions.

The GET method is used, and the authentication token is sent with the header of the request.
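Sketched in PowerShell (node name, port and the header name are assumptions from my lab; $token holds the token returned by the credentials POST):

```powershell
# Send the authentication token in the request header
$headers = @{ Authorization = $token }

# Retrieve the collection of all valid sessions
Invoke-RestMethod -Method Get `
    -Uri "https://vsa1.lab.local:8081/lhos/credentials" `
    -Headers $headers
```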

To retrieve an individual credential entity, the URI of the entity must be used.
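The query might look like this (the session token in the URI is the one returned at login; node name and port are again lab assumptions):

```powershell
$headers = @{ Authorization = $token }

# The URI addresses the credential entity directly via its session token
Invoke-RestMethod -Method Get `
    -Uri "https://vsa1.lab.local:8081/lhos/credentials/$token" `
    -Headers $headers
```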

The result of this query is the individual credential entity.

It’s important to know that if a session has not been used for 15 minutes, it is automatically removed. The same applies to constantly active sessions after 24 hours: after 24 hours, the credential entity is automatically removed.

Let’s try to create a volume. The information about the new volume has to be sent within the body of our request. We again use the ConvertTo-Json cmdlet to convert a hash table with the necessary information to JSON format.
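A sketch of the volume creation — the property names (name, description, size, clusterId) and the cluster ID are my lab assumptions, so double-check them against the Reference Guide for your LeftHand OS release:

```powershell
# Information about the new volume; the size is specified in bytes (1 TB here)
$volume = @{
    name        = "vol01"
    description = "Created via REST API"
    size        = 1TB        # PowerShell expands 1TB to 1099511627776 bytes
    clusterId   = 28         # ID of the target cluster (lab assumption)
}

$headers = @{ Authorization = $token }

Invoke-RestMethod -Method Post `
    -Uri "https://vsa1.lab.local:8081/lhos/volumes" `
    -Body ($volume | ConvertTo-Json) `
    -ContentType "application/json" `
    -Headers $headers
```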

The size must be specified in bytes. As a result, Invoke-RestMethod will output this:

Using the CMC, we can confirm that the volume was successfully created.


Since we have a volume, we can create a snapshot. To create a snapshot, we need to invoke an action on the volume entity. We have to use the POST method and the URI of our newly created volume.
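My call looked roughly like this — the action name createSnapshot and its parameter names are from my notes, not verified against every release, and the volume ID 1154 is a placeholder for the ID returned when the volume was created:

```powershell
# Invoke the createSnapshot action on the volume entity
$snapshot = @{
    action     = "createSnapshot"
    parameters = @{
        name        = "snap01"
        description = "Created via REST API"
    }
}

$headers = @{ Authorization = $token }

Invoke-RestMethod -Method Post `
    -Uri "https://vsa1.lab.local:8081/lhos/volumes/1154" `
    -Body ($snapshot | ConvertTo-Json) `
    -ContentType "application/json" `
    -Headers $headers
```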

In case of a successful query, Invoke-RestMethod will give us this output.

Again, we can use the CMC to confirm the success of our operation.


To delete the snapshot, the DELETE method and the URI of the snapshot entity must be used.

To confirm the successful deletion of the snapshot, the GET method can be used. The GET method will retrieve a collection of all snapshot entities.
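Both queries sketched together (the snapshot ID 1155 is a placeholder for the ID returned when the snapshot was created; node name and port are lab assumptions):

```powershell
$headers = @{ Authorization = $token }

# Delete the snapshot entity
Invoke-RestMethod -Method Delete `
    -Uri "https://vsa1.lab.local:8081/lhos/snapshots/1155" `
    -Headers $headers

# Retrieve the snapshot collection; the members list should now be empty
Invoke-RestMethod -Method Get `
    -Uri "https://vsa1.lab.local:8081/lhos/snapshots" `
    -Headers $headers
```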

The result will show no members inside of the snapshot collection.

At the end of the day, we remove our credential entity, because it’s no longer needed. To delete the credential entity, we use the DELETE method with the URI of our credential entity.
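Sketched in PowerShell (same lab assumptions as before):

```powershell
$headers = @{ Authorization = $token }

# Remove our own credential entity; the session token addresses the entity
Invoke-RestMethod -Method Delete `
    -Uri "https://vsa1.lab.local:8081/lhos/credentials/$token" `
    -Headers $headers
```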

The next query should fail, because the credential entity is no longer valid.

HTTPS workaround

The StoreVirtual API is only accessible over HTTPS. By default, the StoreVirtual nodes use an untrusted HTTPS certificate. This causes Invoke-RestMethod to fail.

After a little research, I found a workaround. This workaround uses the System.Security.Cryptography.X509Certificates namespace. You can use this snippet to build a function or add it to a try-catch block.
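This is the snippet I used — the classic .NET ICertificatePolicy workaround for Windows PowerShell. Be aware that it disables certificate validation for the entire PowerShell session, so it belongs in a lab, not in production:

```powershell
# Accept any certificate by replacing the default certificate policy.
Add-Type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
    public bool CheckValidationResult(ServicePoint srvPoint,
        X509Certificate certificate, WebRequest request,
        int certificateProblem) {
        return true;
    }
}
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
```

On newer PowerShell versions (6 and later), Invoke-RestMethod offers a -SkipCertificateCheck parameter that achieves the same without this workaround.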

Final words

The StoreVirtual REST API is really handy. It can be used to perform all important tasks. It’s perfect for automation, and it’s faster than the CLI. I’ve used PowerShell in my examples, but I’ve also successfully tested it with Python. Make sure to take a look into the HPE StoreVirtual REST API Reference Guide.

Deploying HP StoreVirtual VSA – Part II

Part I of this series covered the deployment; part II is dedicated to the configuration of the StoreVirtual VSA cluster. I assume that the Centralized Management Console (CMC) is already installed. Start the CMC. If you see no systems under “Available Systems”, click “Find” on the menu and then choose “Find Systems…”. A dialog will appear. Click “Add…” and enter the ip address of one of the earlier deployed VSA nodes. Repeat this until all deployed VSA nodes are added, then click “Close”. Now you should have all available VSA nodes listed under “Available Systems”.


A management group contains virtual and physical StoreVirtual systems that are managed together. Clusters and volumes are defined per management group. User accounts are also defined per management group. Right-click a node and choose “Add to New Management Group…” from the context menu. We will add all three nodes to this new management group.


Click “Next”. On the next page of the wizard we have to enter a username and password for an administrative user that will be added to all nodes.


On the next page we have to provide an NTP server. You can set the time manually, but I recommend using an NTP server. In this case it’s the Active Directory domain controller in my lab. Please note that this server has to be reachable for the VSA nodes! In part I we deployed the VSA nodes with two NICs, and via eth0 they can reach the NTP server.


On the next page of the wizard, you have to provide information about the DNS configuration: the DNS domain name, additional DNS suffixes and one or more DNS servers. The same applies to the DNS servers as to the NTP server: they have to be reachable for the VSA nodes!


To use e-mail notification, you have to provide an SMTP server. I don’t have one in my lab, so I left the fields empty. This results in a warning message which can safely be ignored.



Now comes a very important question: Standard or Multi-Site cluster? A Multi-Site cluster is necessary if site fault tolerance is needed. It also ensures that traffic from hosts is only sent to the local site. A Multi-Site cluster can span multiple sites and can have cluster virtual ip addresses (cluster VIPs) in different subnets. A Multi-Site cluster is needed if you want to build a vSphere Metro Storage Cluster (vMSC) with HP StoreVirtual. I chose to create a standard cluster.


After choosing the cluster type, we have to provide a cluster name and select the nodes that should be members of this new cluster.


The next step is to configure the cluster virtual ip address (cluster VIP). This ip address has to be in the same subnet as the VSA nodes, and it is used to access the cluster. After the initial connection to the cluster VIP, the initiator will contact a VSA node for the data transfer.


The wizard allows us to create a volume. This step can be skipped. I created a 1 TB thin-provisioned volume.


After clicking “Finish”, the management group and the cluster will be created. This step can take some time.


At the end you will get a summary screen. You can create further volumes, or you can run the whole wizard again to create additional management groups or clusters.


Congratulations! You now have a fully functional HP StoreVirtual VSA cluster.

Possible cluster VIP error message

Depending on your deployment, you may get this error message in the CMC:

VIP error: System is not reachable by any VIP in the cluster


This message occurs if you have deployed your VSA nodes with two NICs and the NIC that is used for iSCSI isn’t selected as the preferred SAN/iQ interface. I mentioned in part I that I would refer to the “Select the preferred SAN/iQ interface” option later, and this is now. You can get rid of this message by selecting the right interface as the preferred SAN/iQ interface. Select “Network” on a VSA node, then click the “Communication” tab and choose “Select LeftHandOS Interface…” from the “Communications Tasks” drop-down menu at the bottom of the page.


The message should disappear after changing this on each affected VSA node.

Add hosts

To present volumes to hosts, you have to add hosts. A host consists of a name, an ip address, an iSCSI IQN and, if needed, CHAP credentials. Multiple hosts can be grouped into server clusters. You need at least two hosts to build a server cluster. But first of all, we will add a single host:


If you want to work with application-managed snapshots, you have to provide a “Controlling Server IP Address”. When working with VMware vSphere, this is the ip address of the vCenter server.

With at least two hosts, you can create a server group. A server group simplifies the volume management, because you can assign and unassign volumes to a group of hosts with a single click. This ensures the consistency of volume presentations for a group of hosts.


Presenting a volume

During the initial configuration we created a 1 TB thin-provisioned Network RAID 10 volume. To assign this volume to a host, right-click the volume in the CMC and click “Assign and Unassign Servers…”. A window will pop up in which you can check or uncheck the servers to which the volume should be assigned. A volume can be presented read-only or read-write.


We are nearly at the end. We only have to add the cluster VIP to the iSCSI initiator and create a datastore out of the presented volume.


After a rescan a new datastore can be added by using the presented volume. Have I mentioned that each VSA node has only 10 GB of data storage? Thin provisioning can be treacherous… ;)


Final words

The deployment and configuration are really easy. But this short series only scratched the surface. You can now add more volumes and play with SmartClones and remote snapshots. Have fun!

Deploying HP StoreVirtual VSA – Part I

I would like to thank Calvin Zito for the donation of StoreVirtual NFR licenses to vExperts. This will help to spread the know-how about this awesome product! If you are not a vExpert, you can download the StoreVirtual VSA for free and try it for 60 days. If you are a vExpert, ping Calvin on Twitter for a 1-year NFR license.

This blog post covers the deployment of the current StoreVirtual VSA release (LeftHand OS 11). A second blog post covers the configuration using the CMC. Both posts are focused on LeftHand OS 11 and VMware vSphere. If you are searching for a deployment and configuration guide for LeftHand OS 9.x or 10 on VMware vSphere, take a look at these two blog posts from Craig Kilborn: Part 1 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1 & Part 2 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1. Another blog post that covers LeftHand OS 11 is from Hugo Strydom. Hugo wrote about what he did with his VSA (vExpert : What I did with my HP VSA). I wrote a blog post about the HP StoreVirtual VSA some weeks ago. If you are interested in some basics about the VSA, check that post.


The deployment process has been simplified. The setup wizard did a good job in my lab, but AFAIK there are problems if you use Distributed Switches. If you are affected, please leave a comment or ping me via Twitter. But before we start the setup wizard, we have to think about the goals of our setup. There are some things that we need to consider. The deployment process can be divided into three steps:

  1. Planning
  2. Deployment
  3. Configuration

Planning the installation

Before you start, you should have a plan. There are some things you should consider.

vSwitches: We have to design and configure the virtual switches (vSwitches) and port groups. The vSwitches should be dedicated to the VSA cluster and the accessing hosts. You should configure at least 2x 1 GbE uplinks per vSwitch for performance and redundancy. If the iSCSI initiators and all nodes of the VSA cluster are running on the same host, you can use a vSwitch with no uplinks. If you want to use jumbo frames, you need to configure the vSwitches, port groups and VMkernel ports accordingly. I recommend using a dedicated iSCSI VLAN to separate the traffic.

IP addresses: Each VSA needs an ip address. I recommend using two ip addresses: one for eth0 and one for eth1. eth0 will be used for management and must be attached to a port group that makes it possible to reach the interface, either because your client is attached to the same port group, the traffic is routed, or the physical client is in the same VLAN as the VSA. eth1 will be used for iSCSI. You also need an ip address for the cluster virtual ip address (cluster VIP). This address must be in the same subnet as the eth1 ip addresses of the VSA nodes. If you want to use multipathing for your iSCSI initiators, each initiator needs two ip addresses in the same subnet as the VIP and the VSA nodes.

Hostnames: Meaningful hostnames facilitate management. I named my VSA nodes vsa01.lab.local, vsa02.lab.local and vsa03.lab.local. Feel free to name your VSAs in another fashion. :)

Storage: A VSA node has a single disk for the OS. All other disks are attached to a separate controller (on VMware, the Paravirtual SCSI adapter is used). Storage can be added as VMDKs or RDMs to a VSA node, beginning with SCSI 1:0 (the first device on the second controller). If you want to use Adaptive Optimization (AO), you should have 10% of the total capacity on SSDs. The VMDKs or RDMs should be RAID-protected, so you should avoid the use of RAID 0. Disks can be hot-added, but not hot-removed. You need at least 5 GB, but a VSA can scale up to 50 TB.

CPU & Memory: CPU and memory resources have to be reserved. You should have at least two 2 GHz cores reserved for each VSA node. The memory requirements depend on the virtualized storage capacity. For 4 TB up to 10 TB you should have 7 GB RAM for each VSA node; if you want to use the same capacity with AO, you should have 8 GB RAM. For 500 MB up to 4 TB, you should have 5 GB RAM; this also applies when using AO. In a production environment I strongly recommend using CPU and memory reservations and not running more than one VSA on a single host. This does not apply to a lab environment.

The deployment

I took some screenshots during the deployment of a VSA using the setup wizard. I ran the wizard on a Windows 8.1 client.

The setup file (HP_StoreVirtual_VSA_2014_Installer_for_VMware_vSphere_TA688-10518.exe) is self-extracting. After the extraction, a CMD window comes up asking you if you want to use the GUI or the CLI interface. I chose the GUI wizard. Unfortunately, after pressing “2” for the GUI wizard, the wizard didn’t appear. I had to run the setup file as administrator (right-click the file, then choose “Run as administrator”). On the welcome page simply click “Next”.


You have to provide a hostname or ip address, and login credentials, for the target ESXi host or the vCenter server. I chose an ESXi host as the target for my VSA deployment.


On the third page you get a summary of the host you chose one step earlier.


Now you can choose between deploying a VSA or a Failover Manager. The latter is a specialized manager used in clusters as a quorum tie-breaker. But we want to deploy a VSA.


In the next step we have to choose a datastore in which the VSA should reside. This has no impact on the storage configured later.


The next step covers the NIC setup of the VSA. As I mentioned earlier, I recommend using two NICs for the VSA: one for management and a second one for iSCSI traffic. As you can see in the screenshot, I used eth0 for management.


The second NIC is dedicated to iSCSI traffic. Please note the drop-down menu at the bottom, “Select the preferred SAN/iQ interface”. I will refer to it later.


Now it’s time to give the VM a name and to select the drive type. Because I had no RDMs in my lab, the option is greyed out.


Now we have to configure the data disks.


The wizard allows you to deploy more than one VSA. In the next step you can choose if you want to deploy another VSA on the same or another host, or if you are done. I only deployed one VSA, so I was done at this point.


Before you click “Deploy”, you should check the settings. If everything is fine, hit the “Deploy” button. The deployment starts immediately.


After a couple of minutes the deployment is finished. Hit “Finish”. Now it’s time to start the Centralized Management Console (CMC). Usually the CMC is installed automatically by the wizard; if not, you can install it manually.


Part II covers the configuration of the management group, cluster etc. If you have further questions or feedback, feel free to leave a comment!

HP StoreVirtual VSA – An introduction

In 2008 HP acquired LeftHand Networks for “only” $360 million. Compared to the acquisition of 3PAR in 2010 ($2.35 billion), this was a really cheap buy. LeftHand Networks was a pioneer in IP-based storage built on commodity server hardware. Their secret was SAN/iQ, a Linux-based operating system that did the magic. HP StoreVirtual is the TAFKAP (or Prince…? What’s his current name?) in the HP StorageWorks product family. ;) HP LeftHand, HP P4000 and now StoreVirtual. But the secret sauce never changed: SAN/iQ, or LeftHand OS. Hardware comes and goes, but the secret of StoreVirtual was and is the operating system. And because of this, it was easy for HP to bring the OS into a VM: the StoreVirtual Virtual Storage Appliance (VSA) was born. So you can choose between the StoreVirtual Storage nodes (hardware appliances) and the StoreVirtual VSA, the virtual storage appliance. This article focuses on the StoreVirtual VSA with LeftHand OS 11.

HP StoreVirtual VSA

The solution of LeftHand Networks differed in one important point: their concept was not based on the “traditional” dual-controller paradigm. Their storage nodes formed a cluster, and the data blocks were copied between the nodes. The access to the cluster was realized with a cluster virtual IP (VIP). So each node provided capacity and IO, and with each node that was added to the cluster, performance and IO increased. Imagine a train: not a diesel locomotive, but a modern train where each axle has a motor. With each car that is added to the train, capacity (for passengers) and drive power increase. You can call it GRID Storage.

The StoreVirtual Storage appliances use HP ProLiant hardware. Depending on the model, between 4 and 25 SAS or SAS-NL disks are configured. If you use the StoreVirtual VSA, storage is allocated in the form of raw device mappings (RDMs) or VMDKs: you simply add RDMs or VMDKs to the VSA. With this, you can use the StoreVirtual VSA to utilize local storage in hosts. Beside the local RAID inside the hardware appliances, StoreVirtual provides resiliency through Network RAID (nRAID). Karim Vaes wrote an excellent article in which he described the different nRAID levels in detail. To make a long story short: Network RAID works like the well-known RAID levels, but instead of dealing with disks, you deal with data blocks, and the data blocks are copied between two or more nodes. Depending on the number of nodes inside a cluster, you can use different nRAID levels and get more or less redundancy and resiliency in case of one or more node failures. Currently you can choose between Network RAID 0, 5, 6, 10, 10+1 and 10+2 to protect against double disk, controller, node, power, network or site failure.

A cluster is a group of nodes. One or more clusters can be created in a management group, so the smallest setup is a management group with one cluster. The storage capacity of all nodes inside a cluster is pooled and can be used to create volumes, clones and snapshots. The volumes seamlessly span the nodes in the cluster. You can expand the storage and IO capacity by adding nodes to the cluster. The StoreVirtual VSA offers its storage via iSCSI. A cluster has at least one IP address, and each node also has at least one IP address. The cluster virtual IP address (VIP) is used to connect to the cluster. As long as the cluster is online, the VIP will stay online and will provide access to the volumes. A quorum (a majority of managers) determines if a cluster can stay online or if it will go down. For this, a special manager service is running on each node. You can also use specialized managers, so-called Failover Managers (FOM). If you have two nodes and a FOM, at least one node and the FOM need to stay online and must be able to communicate with each other. If this isn’t the case, the cluster will go down and access to the volumes is no longer possible. StoreVirtual provides two clustering modes: standard cluster and Multi-Site cluster. A standard cluster can’t contain nodes that are assigned to a site, its nodes can’t span multiple subnets, and it can only have a single cluster VIP. So if you need to deploy StoreVirtual Storage or VSA nodes to different sites, you have to build a Multi-Site cluster; otherwise a standard cluster is sufficient. Don’t try to deploy a standard cluster in a multi-site environment. It will work, but being unaware of multiple sites, LeftHand OS won’t guarantee that block copies are written to both sites.

LeftHand OS provides a broad range of features:

  • Storage Clustering
  • Network RAID
  • Thin Provisioning
  • Application integrated snapshots
  • SmartClone
  • Remote Copy
  • Adaptive Optimization

The HP StoreVirtual VSA is… a virtual storage appliance. It’s delivered as a ready-to-run appliance for VMware vSphere or Microsoft Hyper-V. Because the VSA is a VM, it consumes CPU, memory and disk resources from the hypervisor. Therefore you have to ensure that the VSA gets the resources it needs to operate correctly. These are best practices taken from the “HP StoreVirtual Storage VSA Installation and Configuration Guide”:

Configure the VSA for vSphere to start automatically and first, and before any other virtual machines, when the vSphere Server on which it resides is started. This ensures that the VSA for vSphere is brought back online as soon as possible to automatically re-join its cluster.

Locate the VSA for vSphere on the same virtual switch as the VMkernel network used for iSCSI traffic. This allows for a portion of iSCSI I/O to be served directly from the VSA for vSphere to the iSCSI initiator without using a physical network.

Locate the VSA for vSphere on a virtual switch that is separate from the VMkernel network used for VMotion. This prevents VMotion traffic and VSA for vSphere I/O traffic from interfering with each other and affecting performance.

HP recommends installing vSphere Server on top of a redundant RAID configuration with a RAID controller that has battery-backed cache enabled. Do not use RAID 0.

And if there are best practices, there are always some things you shouldn’t do…

Use of VMware snapshots, VMotion, High-Availability, Fault Tolerance, or Distributed Resource Scheduler (DRS) on the VSA for vSphere itself.

Use of any vSphere Server configuration that VMware does not support.

Co-location of a VSA for vSphere and other virtual machines on the same physical platform without reservations for the VSA for vSphere CPUs and memory in vSphere.

Co-location of a VSA for vSphere and other virtual machines on the same VMFS datastore.

Running VSA for vSphere’s on top of existing HP StoreVirtual Storage is not recommended.

Because the OS is the same for hardware appliances and the VSA, you can manage both with the same tool. A StoreVirtual solution is managed with the Centralized Management Console (CMC). You can run the CMC on Windows or Linux. The CMC is the only way to manage StoreVirtual Storage and VSA nodes. On the nodes themselves you can only assign an IP address and set a user and password. Everything else is configured with the CMC.

Meanwhile there are some really cool solutions that integrate with HP StoreVirtual. Take a look at Veeam Explorer for SAN Snapshots. StoreVirtual is also certified for vSphere Metro Storage Cluster. You can get a 60-day evaluation copy on the HP website. Give it a try! If you’re a vExpert, you can get a free NFR license from HP!

Blog posts about deploying StoreVirtual VSA, features like Snapshots or Adaptive Optimization and solutions like Veeam Explorer for SAN Snapshots will follow. I will also blog about the HP Data Protector Zero Downtime Backup with HP StoreVirtual.