HP StoreVirtual VSA – An introduction

In 2008 HP acquired LeftHand Networks for “only” $360 million. Compared to the acquisition of 3PAR in 2010 ($2.35 billion), this was a really cheap buy. LeftHand Networks was a pioneer in IP-based storage built on commodity server hardware. Their secret was SAN/iQ, a Linux-based operating system that did the magic. HP StoreVirtual is the TAFKAP (or Prince…? What’s his current name?) of the HP StorageWorks product family. ;) HP LeftHand, HP P4000 and now StoreVirtual. But the secret sauce never changed: SAN/iQ, or LeftHand OS. Hardware comes and goes, but the secret of StoreVirtual was and is the operating system. Because of this, it was easy for HP to bring the OS into a VM: the StoreVirtual Virtual Storage Appliance (VSA) was born. So you can choose between the StoreVirtual Storage nodes (hardware appliances) and the StoreVirtual VSA, the virtual storage appliance. This article will focus on the StoreVirtual VSA with LeftHand OS 11.

HP StoreVirtual VSA

The solution of LeftHand Networks differed in one important point: their concept was not based on the “traditional” dual-controller paradigm. Their storage nodes formed a cluster, and the data blocks were copied between the nodes. Access to the cluster was realized with a cluster virtual IP (VIP). So each node provided capacity and IO, and with each node that was added to the cluster, performance and IO increased. Imagine a train: not one pulled by a diesel locomotive, but a modern train where each axle has a motor. With each car that is added, capacity (for passengers) and drive power increase. You can call it GRID storage.
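The train analogy can be put into numbers. The figures below are purely hypothetical (real per-node capacity and IO depend on the model and disk configuration), but they illustrate how both capacity and IO grow with every node you add:

```python
# Hypothetical per-node figures, chosen only to illustrate linear scaling.
# Real values depend on the StoreVirtual model and its disk configuration.
NODE_CAPACITY_TB = 7.2   # assumed usable capacity per node
NODE_IOPS = 5000         # assumed IO capability per node

def cluster_totals(nodes: int) -> tuple[float, int]:
    """Raw capacity (TB) and aggregate IOPS for a cluster of `nodes` nodes."""
    return nodes * NODE_CAPACITY_TB, nodes * NODE_IOPS

for n in (2, 4, 8):
    cap, iops = cluster_totals(n)
    print(f"{n} nodes: {cap:.1f} TB raw, {iops} IOPS")
```

Every car added to the train brings its own motor: doubling the node count doubles both the raw capacity and the aggregate IO.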

The StoreVirtual Storage appliances use HP ProLiant hardware. Depending on the model, between 4 and 25 SAS or NL-SAS disks are configured. If you use the StoreVirtual VSA, storage is allocated in the form of raw device mappings (RDMs) or VMDKs: you simply add an RDM or VMDK to the VSA. With this, you can use the StoreVirtual VSA to utilize local storage in your hosts. Besides the local RAID inside the hardware appliances, StoreVirtual provides resiliency through Network RAID (nRAID). Karim Vaes wrote an excellent article that describes the different nRAID levels in detail. To make a long story short: Network RAID works like the well-known RAID levels, but instead of dealing with disks, you deal with data blocks, and the data blocks are copied between two or more nodes. Depending on the number of nodes in a cluster, you can use different nRAID levels and get more or less redundancy and resiliency in case of one or more node failures. Currently you can choose between Network RAID 0, 5, 6, 10, 10+1 and 10+2 to protect against double disk, controller, node, power, network or site failures.
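To make the “RAID with blocks instead of disks” idea concrete, here is a minimal sketch of the Network RAID 10 principle. This is not HP’s actual placement algorithm, just an illustration of the invariant: every block exists as two copies on two different nodes, so any single node can fail without data loss.

```python
# Illustrative model of Network RAID 10 (not HP's actual algorithm):
# every block gets a primary and a mirror copy on two *different* nodes,
# striped round-robin across the cluster.
from itertools import cycle

def nraid10_placement(num_blocks: int, nodes: list[str]) -> dict[int, tuple[str, str]]:
    """Map each block to a (primary, mirror) node pair."""
    assert len(nodes) >= 2, "Network RAID 10 needs at least two nodes"
    placement = {}
    primaries = cycle(range(len(nodes)))
    for block in range(num_blocks):
        i = next(primaries)
        # The mirror copy always lands on a different node than the primary.
        placement[block] = (nodes[i], nodes[(i + 1) % len(nodes)])
    return placement

layout = nraid10_placement(6, ["node1", "node2", "node3"])
for block, (primary, mirror) in layout.items():
    print(f"block {block}: primary={primary}, mirror={mirror}")
```

With this layout, losing any one node leaves at least one copy of every block reachable — which is exactly the protection Network RAID 10 promises for a single node failure.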

A cluster is a group of nodes, and one or more clusters can be created in a management group. So the smallest setup is a management group with one cluster. The storage capacity of all nodes in a cluster is pooled and can be used to create volumes, clones and snapshots. The volumes seamlessly span the nodes in the cluster, and you can expand storage and IO capacity by adding nodes to the cluster. The StoreVirtual VSA offers its storage via iSCSI. A cluster has at least one IP address, and each node also has at least one IP address. The cluster virtual IP address (VIP) is used to connect to the cluster. As long as the cluster is online, the VIP will stay online and provide access to the volumes.

A quorum (a majority of managers) determines whether a cluster stays online or goes down. For this, a special manager runs on each node. You can also use specialized managers, so-called Failover Managers (FOM). If you have two nodes and a FOM, at least one node and the FOM need to stay online and must be able to communicate with each other. If this isn’t the case, the cluster will go down and access to the volumes is no longer possible.

StoreVirtual provides two clustering modes: standard cluster and Multi-Site cluster. A standard cluster can’t contain nodes that are assigned to a site, its nodes can’t span multiple subnets, and it can only have a single cluster VIP. So if you need to deploy StoreVirtual Storage or VSA nodes to different sites, you have to build a Multi-Site cluster; otherwise a standard cluster is sufficient. Don’t try to deploy a standard cluster in a multi-site environment. It will work, but being unaware of the multiple sites, LeftHand OS won’t guarantee that block copies are written to both sites.
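The quorum rule described above boils down to a strict majority. A minimal sketch, assuming one manager per node plus an optional FOM:

```python
# Minimal model of the majority rule: a cluster stays online only while
# more than half of its managers (node managers plus an optional FOM)
# can still communicate with each other.
def has_quorum(total_managers: int, reachable_managers: int) -> bool:
    """True if the reachable managers form a strict majority."""
    return reachable_managers > total_managers / 2

# Two-node cluster plus a FOM: three managers in total.
print(has_quorum(3, 2))  # one node + FOM survive -> cluster stays online
print(has_quorum(3, 1))  # a lone survivor loses quorum -> cluster goes down
```

This is also why the FOM matters in a two-node setup: with only two managers, the loss of either one drops you to 1 of 2, which is not a majority, and the whole cluster goes down.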

LeftHand OS provides a broad range of features:

  • Storage Clustering
  • Network RAID
  • Thin Provisioning
  • Application integrated snapshots
  • SmartClone
  • Remote Copy
  • Adaptive Optimization

The HP StoreVirtual VSA is… a virtual storage appliance. It’s delivered as a ready-to-run appliance for VMware vSphere or Microsoft Hyper-V. Because the VSA is a VM, it consumes CPU, memory and disk resources from the hypervisor. Therefore you have to ensure that the VSA gets the resources it needs to operate correctly. These best practices are taken from the “HP StoreVirtual Storage VSA Installation and Configuration Guide”:

Configure the VSA for vSphere to start automatically and first, and before any other virtual machines, when the vSphere Server on which it resides is started. This ensures that the VSA for vSphere is brought back online as soon as possible to automatically re-join its cluster.

Locate the VSA for vSphere on the same virtual switch as the VMkernel network used for iSCSI traffic. This allows for a portion of iSCSI I/O to be served directly from the VSA for vSphere to the iSCSI initiator without using a physical network.

Locate the VSA for vSphere on a virtual switch that is separate from the VMkernel network used for VMotion. This prevents VMotion traffic and VSA for vSphere I/O traffic from interfering with each other and affecting performance.

HP recommends installing vSphere Server on top of a redundant RAID configuration with a RAID controller that has battery-backed cache enabled. Do not use RAID 0.

And if there are best practices, there are always some things you shouldn’t do…

Use of VMware snapshots, VMotion, High-Availability, Fault Tolerance, or Distributed Resource Scheduler (DRS) on the VSA for vSphere itself.

Use of any vSphere Server configuration that VMware does not support.

Co-location of a VSA for vSphere and other virtual machines on the same physical platform without reservations for the VSA for vSphere CPUs and memory in vSphere.

Co-location of a VSA for vSphere and other virtual machines on the same VMFS datastore.

Running VSAs for vSphere on top of existing HP StoreVirtual Storage is not recommended.

Because the OS is the same for the hardware appliances and the VSA, you can manage both with the same tool: a StoreVirtual solution is managed with the Centralized Management Console (CMC). You can run the CMC on Windows or Linux. The CMC is the only way to manage StoreVirtual Storage and VSA nodes. On the nodes themselves you can only assign an IP address and set the user name and password; everything else is configured with the CMC.

Meanwhile there are some really cool solutions that integrate with HP StoreVirtual. Take a look at Veeam Explorer for SAN Snapshots. StoreVirtual is also certified for vSphere Metro Storage Cluster. You can get a 60-day evaluation copy on the HP website. Give it a try! If you’re a vExpert, you can get a free NFR license from HP!

Blog posts about deploying the StoreVirtual VSA, features like snapshots and Adaptive Optimization, and solutions like Veeam Explorer for SAN Snapshots will follow. I will also blog about HP Data Protector Zero Downtime Backup with HP StoreVirtual.

vcloudnine.de is the personal blog of Patrick Terlisten. Patrick has over 15 years experience in IT, especially in the areas infrastructure, cloud, automation and industrialization. Patrick was selected as VMware vExpert (2014 - 2016), as well as PernixData PernixPro.

Feel free to follow him on Twitter and/or leave a comment.