
Checking the 3PAR Quorum Witness appliance

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Two 3PAR StoreServs running in a Peer Persistence setup lost the connection to the Quorum Witness appliance. The appliance is an important part of a 3PAR Peer Persistence setup, because it acts as a tie-breaker in a split-brain scenario.

While analyzing this issue, I saw this message in the 3PAR Management Console:

[Image: 3PAR Quorum Witness Status | Patrick Terlisten / www.vcloudnine.de / Creative Commons CC0]

In addition to that, the customer got e-mails that the 3PAR StoreServ arrays lost the connection to the Quorum Witness appliance. In my case, the CouchDB process died. A restart of the appliance brought it back online.

How to check the Quorum Witness appliance?

You can check the status of the appliance with a simple web request. The documentation shows a simple test based on curl, which you can run directly from the BASH shell of the appliance.

But you can also use the PowerShell cmdlet Invoke-WebRequest.

If you add /witness to the URL, you can test the access to the database, which is used for Peer Persistence.
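A minimal sketch of both checks with Invoke-WebRequest; the IP address is a placeholder, and the TCP port 8080 is an assumption, so check which port the witness service uses in your environment.

    # Placeholder IP address and assumed witness port - adjust both to your environment
    $uri = 'http://192.168.10.20:8080'

    # Basic availability check of the Quorum Witness web service
    Invoke-WebRequest -Uri $uri

    # Check access to the witness database used for Peer Persistence
    Invoke-WebRequest -Uri "$uri/witness"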

If you get a connection error, check if the beam process (the Erlang VM that runs CouchDB) is running.

If not, reboot the appliance. This can be done without downtime, because the appliance only comes into play if a failover occurs.

HPE ProLiant PowerShell SDK

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Some days ago, my colleague Claudia and I started to work on a new project: a greenfield deployment consisting of some well-known building blocks: HPE ProLiant, HPE MSA, HPE Networking (now Aruba) and VMware vSphere. Nothing new for us, because we have done this a couple of times together. But this led us to the idea of automating some tasks, especially the configuration of the HPE ProLiants: changing BIOS settings and configuring the iLO.

Do not automate what you have not fully understood

Some of the wisest words I have ever said to a customer. Modifying BIOS and iLO settings is a well understood task. But if you have to deploy a bunch of ProLiants, this is a monotonous, and therefore error-prone, process. Perfect for automation!

Scripting Tools for Windows PowerShell

To support the automation of HPE ProLiant deployments, HPE offers the Scripting Tools for Windows PowerShell. HPE offers these PowerShell modules free of charge. There are three different downloads:

  • iLO cmdlets
  • BIOS cmdlets
  • Onboard Administrator (OA) cmdlets

The iLO cmdlets include PowerShell cmdlets to configure and manage the iLO of HPE ProLiant G7, Gen8 or Gen9 servers. The BIOS cmdlets do not support G7 servers, so you can only configure and manage the legacy and UEFI BIOS of Gen8 (except DL580) and all Gen9 models. The OA cmdlets support the configuration and management of the HPE Onboard Administrator, which is used with HPE's well-known ProLiant BL blade servers. The OA cmdlets require at least OA v3.11; v4.60 is the latest version available. All you need to get started are

  • Microsoft .NET Framework 4.5, and
  • Windows Management Framework 3.0 or later

If you are using Windows 8 or 10, you already have PowerShell 4 or PowerShell 5, respectively.
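After installing the downloads, a quick check shows whether the modules are available. The module names used here (HPiLOCmdlets, HPBIOSCmdlets, HPOACmdlets) are assumptions based on the download packages; verify them on your system.

    # Module names as assumed from the Scripting Tools downloads - verify locally
    Get-Module -ListAvailable HPiLOCmdlets, HPBIOSCmdlets, HPOACmdlets

    # List all cmdlets of the iLO module
    Get-Command -Module HPiLOCmdlets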

Support for HPE ProLiant Gen9 iLO RESTful API

If you have ever seen an HPE ProLiant Gen9 booting up, you might have noticed the iLO RESTful API icon in the lower right corner. Depending on the server model, the BIOS cmdlets utilize the iLO 4 RESTful API. But the iLO RESTful API ecosystem is worth a blog post of its own. Stay tuned.

Documentation and examples

HPE offers concise documentation for the BIOS, iLO and OA cmdlets. You can find the documentation in HPE's Information Library. Documentation is important, but sometimes example code is necessary to quickly ramp up code. Check HPE's PowerShell SDK GitHub repository for examples.

Time to code

I'm keen and curious to automate some of my regular deployment tasks with these PowerShell modules. Some of these tasks are always the same (a short sketch follows the list below):

  • change the power management and other BIOS settings
  • change the network settings of the iLO
  • change the initial password of the iLO administrator account and create additional iLO user accounts
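A hedged sketch of how the iLO part of this could look. The cmdlet names, parameter names, property names and the IP range are assumptions based on the HPiLOCmdlets module; check Get-Help before using anything like this.

    # Placeholder credentials and IP range of the iLO management network
    $cred = @{ Username = 'Administrator'; Password = 'MyiLOPassword' }

    # Discover iLO interfaces in the given range (HPiLOCmdlets)
    $ilos = Find-HPiLO -Range '192.168.20.1-254'

    foreach ($ilo in $ilos) {
        # Query the health summary of each discovered iLO (property name IP is assumed)
        Get-HPiLOHealthSummary -Server $ilo.IP @cred
    }

    # For the tasks listed above, the module also ships cmdlets such as
    # Set-HPiLONetworkSetting and Add-HPiLOUser - check their parameters with Get-Help.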

Further automation tasks are not necessarily related to the HPE ProLiant PowerShell SDK, but to PowerShell in general, or to VMware PowerCLI. PowerShell is great for automating the different aspects and modules of an infrastructure deployment. You can use it to build your own toolbox.

Enable IPv6 SLAAC on HPE OfficeConnect 1920 switches

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

The HPE OfficeConnect 1920 switch series is designed for SMBs. The switch is perfect for small environments that require features like VLANs, routing or 802.1X. This switch is smart-managed, so it has “only” a web interface and a limited CLI.

I have two switches in my lab: a 1910-8G and its successor, a 1920-24G. Although the device supports IPv6, it doesn't support SLAAC (Stateless Address Autoconfiguration) by default: the switch does not send router advertisements (RA). I'm using IPv6 in my lab (stateless DHCPv6 + SLAAC), so the missing RAs were a problem for me, or at least annoying. Fortunately, you can change the default behaviour.

Enable router advertisements (RA)

To change the default behaviour of the HPE 1920, you have to use the CLI. The CLI is very limited, but there's a hidden CLI command which enables access to nearly all available features. If you are familiar with HPE's Comware-based switches, you will notice that the 1920 is a Comware-based device, too.

After switching to the system-view, we can change the default behaviour for each VLAN interface. I have multiple VLAN interfaces, and each VLAN interface has an IPv4 address and a unique local address (ULA) IPv6 address.

The first command enables router advertisements. The second command adds the prefix that should be announced. That's it. Don't forget to save the changed configuration with “save force”. If you have more than one VLAN interface, enter these commands in each VLAN interface context you wish to change.
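As a rough sketch, the whole procedure could look like this. The hidden command _cmdline-mode on unlocks the full CLI and prompts for a password (not reproduced here); the VLAN interface number, the ULA prefix fd00:1::/64 and the two lifetime values are examples.

    _cmdline-mode on
    system-view
    interface Vlan-interface 1
     undo ipv6 nd ra halt
     ipv6 nd ra prefix fd00:1::/64 2592000 604800
     quit
    save force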

HPE Data Protector 9.08 is available

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Three days ago, on 13th October 2016, HPE released patch bundle 9.08 for Data Protector 9. A patch bundle isn't a directly installable version; instead, it's a bundle of patches and enhancements for a specific version of Data Protector, in this case Data Protector 9.

Besides fixes for discovered problems, a patch bundle also includes enhancements. Some enhancements in this patch bundle have particularly caught my attention.

QCCR2A64053: Support for object copy of file system data to Microsoft Azure. Data Protector now supports the creation of a special backup device that can be used together with Data Protector object copies to copy Data Protector file system backups to Azure Backup Vaults. This is an easy way to create copies of important data on Microsoft Azure.

Contemporaneously with the announcement of Data Protector 9.08, I got an e-mail from HPE with the information that one of my change requests has made it into the latest patch bundle:

QCCR2A68100: VMWARE GRE stays in debug mode. I have observed this behaviour in different Data Protector installations: if debugging isn't explicitly disabled (OB2DBG=0 in the omnirc), the VMware GRE always writes debug logs, regardless of whether debugging is enabled or disabled in the GRE configuration.

Because of some security-related changes and fixes in Data Protector 9.08, HPE has marked this patch bundle as critical.

Download Data Protector patch bundle 9.08:

Data Protector 9.08 for Windows

Data Protector 9.08 for HP-UX/IA

Data Protector 9.08 for Linux/64

I’m routing on the edge…

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

In my last post (Routed Port vs. Switch Virtual Interface (SVI)), I mentioned a consequence of using routed ports to interconnect access and core switches:

You have to route the traffic on the access switches.

Routing on the network access, the edge of the network, is not a question of performance. It is more of a management issue. Depending on the size of your network and the number of subnets, you have to deal with lots of routes. And think about the effort if you add, change or remove subnets from your network. This is not something you want to do with static routes. You need a routing protocol.

The experiment of the week

We have a core switch C1, consisting of two independent switches (C1-1 and C1-2) forming a virtual chassis. S1 and S2 are two switches at the network access. This is a core-edge design. There is no distribution layer. Each switch at the network access has two uplinks: One uplink to C1-1 and one uplink to C1-2. The ports on each end of the links are configured as routed ports.

Please ignore the 40 GbE ports (FGE) between C1-1 and C1-2. These ports are used for the Intelligent Resilient Framework (IRF), which is used to create a virtual chassis.

[Image: routed_links_1 | Patrick Terlisten / www.vcloudnine.de / Creative Commons CC0]

On the core switch, the following interfaces are working in route mode: GE1/0/1 and GE2/0/1 are the uplinks to S1, and GE1/0/2 and GE2/0/2 are the uplinks to S2.

On the access switch S1, GE1/0/1 and GE1/0/2 are working in route mode; they are the uplinks to C1. GE1/0/1 on C1 and GE1/0/1 on S1 belong to the same /30 network. The same applies to GE2/0/1 on C1 and GE1/0/2 on S1. There are also two SVIs, one on VLAN 1 (192.168.1.0/24) and another on VLAN 2 (192.168.2.0/24). These VLANs are used for client connectivity.

On S2, the same interfaces, GE1/0/1 and GE1/0/2, are working in route mode; they are the uplinks to C1. GE1/0/2 on C1 and GE1/0/1 on S2 belong to the same /30 network. The same applies to GE2/0/2 on C1 and GE1/0/2 on S2. There are also two SVIs, one on VLAN 1 (192.168.10.0/24) and another on VLAN 2 (192.168.20.0/24).

You might wonder why the same VLAN IDs are used on both access switches. This doesn't matter, because there is no layer 2 connectivity between these two switches. The only way from S1 to S2 is over the routed links to the core switch.

Now let’s have a look at the Open Shortest Path First (OSPF) routing protocol.

Single Area OSPF

The Open Shortest Path First (OSPF) routing protocol is an interior gateway protocol (IGP), and also a link-state routing protocol. The calculation of the shortest path for each route is based on Dijkstra's algorithm. I don't want to annoy you with details; take a look at the Wikipedia article on OSPF.

The simplest OSPF setup is a “Single Area OSPF”: an OSPF configuration with only a single area, the area 0, also known as the backbone area.

The OSPF configuration on the core switch is only a few lines long.
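A minimal Comware 7 sketch; the OSPF process number 1 and the router ID 1.1.1.1 (assumed to be a loopback interface address on C1) are assumptions for this example.

    ospf 1 router-id 1.1.1.1
     area 0.0.0.0
      network 10.0.0.0 0.255.255.255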

The networks that should be associated with this area are specified with a wildcard mask. The wildcard mask is the opposite of the subnet mask: the wildcard mask 0.255.255.255 corresponds to the subnet mask 255.0.0.0. Because I have used multiple /30 subnets at the core switch, I can summarize them with a single entry for 10.0.0.0.

Almost the same configuration applies to the access switches S1 and S2; they additionally cover their client subnets.
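A sketch for S1; again, the process number is an assumption, the router ID 1.1.1.2 is the loopback address mentioned below, and the broad 192.168.0.0 entry is an assumed summary that covers all client SVIs.

    ospf 1 router-id 1.1.1.2
     area 0.0.0.0
      network 10.0.0.0 0.255.255.255
      network 192.168.0.0 0.0.255.255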

With this simple configuration, the switches will exchange their routing information. They will synchronize their link-state databases, and they will be fully adjacent. If a link-state change occurs, OSPF will handle this.

The core switch has two links to each access switch, so it sees each access switch as an OSPF neighbor over two interfaces (display ospf peer). The neighbors are represented by their router IDs: 1.1.1.2 is a loopback interface IP address on S1, and 1.1.1.3 is a loopback interface IP address on S2.

The same applies to the access switches, in this case S1: each access switch also has two active links, and therefore two neighbor relationships, to the core switch.

If one of the links fails, the access switch still has another working link to the core switch, and OSPF will recalculate the shortest paths, taking the link-state change (link down between the core and an access switch) into account.

The OSPF routing table of the core switch (display ospf routing) then contains the client subnets of both access switches, learned over the /30 transfer networks.

What if I add a new subnet on S1? Let's create a new VLAN and add an SVI to it (VLAN 3 and 192.168.3.1).
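A quick Comware sketch of these two steps on S1 (VLAN ID and IP address as in the example):

    system-view
    vlan 3
     quit
    interface Vlan-interface 3
     ip address 192.168.3.1 255.255.255.0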

Without touching the OSPF configuration (the broad network statement on S1 already covers the new subnet), the core switch C1 and the other access switch S2 added routes to this new subnet.

Pretty cool, isn’t it?

Any downsides?

This is only an example with a single core switch and two access switches. OSPF can get pretty complex as the size of the network increases. Dijkstra's algorithm can be really CPU intensive, and the size of the link-state database (LSDB) increases when more routers and networks are added. For this reason, larger networks have to be divided into separate areas. It depends on the network size and the CPU/memory performance of your switches/routers, but a common practice is a maximum of up to 50 switches/routers per area. If you have unstable links, the area should be smaller, because each link-state change is flooded to all neighbors and consumes CPU time.

You need a good subnet design, otherwise you have to touch your OSPF configuration too often. You should be able to summarize subnets.

Conclusion

Routing at the network access is not something for small networks; there are better designs for those. But if your network has a decent size, routing at the edge of the network can offer some benefits. Instead of working with SVIs and small transfer VLANs, a routed port is simpler to implement. Routed links can also have a shorter convergence delay, and you can reduce the usage of the Spanning Tree Protocol to a minimum.

Routed Port vs. Switch Virtual Interface (SVI)

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Many years ago, networks consisted of repeaters, bridges and routers. Switches are the successors of the bridges: a switch is nothing else than a multiport bridge, and a traditional switch doesn't know how to pass traffic between different broadcast domains (VLANs). Passing traffic between different broadcast domains is a job for a router. A router has an IP interface in each broadcast domain, and the IP interface is used by the clients in the broadcast domain as a gateway.

Switch Virtual Interface

A Switch Virtual Interface, or SVI, is exactly this: a virtual IP interface in a broadcast domain (or VLAN). It's used by the connected clients in the broadcast domain to send traffic to other broadcast domains.

This is how an SVI is created on HPE Comware 7; it's similar with other vendors.
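A minimal sketch; the VLAN ID and the IP address are arbitrary examples.

    system-view
    vlan 2
     quit
    interface Vlan-interface 2
     ip address 192.168.2.1 255.255.255.0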

At least one port is assigned to this VLAN, and as soon as at least one port of this VLAN is online, the SVI is also reachable.

What happens if you connect two switches with a cable? The broadcast domain spans both switches, and layer 2 traffic is transmitted between them. And what would happen if you connect a second cable between the same two switches? As long as you are running the Spanning Tree Protocol (STP), or another loop detection mechanism, nothing would happen. But one of the two connections would be blocked: no traffic would be able to pass over this connection. If you want to use multiple active connections between switches, you have to use Link Aggregation Groups (LAG), or things like Multiple Spanning Tree Protocol (MSTP) and Per VLAN Spanning Tree (PVST).

Routers don't have this problem: multiple connections between the same two routers can't form a loop. Loops and STP (and some other crappy layer 2 stuff) are legacies of the bridges, still alive in modern switches. Loops are a typical “bridge problem”.

Routed Ports

Some switches offer a way to change the operation mode of a switch port. After changing this operation mode, the switch port doesn't act like a bridge port anymore. It acts like the port of a router, which only handles layer 3 traffic.

This is again an HPE Comware 7 example. I know that Cisco and Alcatel-Lucent Enterprise also offer routed ports.

This is a normal switch port. Please note the “port link-mode bridge”.
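A sketch of the default state of a Comware 7 port; the interface name is an example.

    interface GigabitEthernet1/0/1
     port link-mode bridge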

To “convert” a switch port into a routed port, simply change the link-mode of the port.
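Again a sketch; the interface name and the IP address (one of the /30 transfer networks used below) are examples.

    interface GigabitEthernet1/0/1
     port link-mode route
     ip address 10.0.0.1 255.255.255.252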

As you can see, you can now assign an IP address directly to the port.

Example

Let's try to make this clear with an example. C1-1 and C1-2 are two HPE Comware-based switches, configured as an IRF stack (virtual chassis). These two switches form the core switch C1. S1 and S2 are two access switches, also HPE Comware-based. Each access switch has two uplinks: one uplink to C1-1 and another uplink to C1-2, the two chassis that form C1. The 40 GbE ports between C1-1 and C1-2 are used for IRF. Please ignore them.

The uplinks between the switches (all of them Gigabit Ethernet (GE) ports) are configured as routed ports.

Without routed ports, the uplinks would have to be configured as a LAG, or STP would block one of the two uplinks between the core switch and each access switch. But because routed ports are used, no loop is formed, and most layer 2 traffic (broadcasts, multicasts etc.) can't pass the routed ports.

[Image: routed_links_1 | Patrick Terlisten / www.vcloudnine.de / Creative Commons CC0]

Link Layer Discovery Protocol (LLDP) traffic can pass the routed ports, so the core switch (C1) still “sees” its access switches as LLDP neighbors.

Each routed port has an IP address assigned. The same applies to the routed ports on the access switches. Each uplink pair (core to access) uses a /30 subnet.

In the interface overview of the core switch, the interfaces working in bridge mode therefore start at GE1/0/3.

The same applies to STP: the ports that were configured as routed ports are not listed in the STP output, because STP is not active on these ports.

What are the implications?

The example shows redundant links between access and core switches. There are no loops, but there’s also no layer 2 connectivity. VLANs are only located on the access switches. There are no VLANs spanning multiple switches. What does this mean? How can a client on S1 reach a server on S2? The answer is simple: You have to route the traffic on the access switches. But that’s a topic for another blog post.

HPE 3PAR OS updates that fix VMware VAAI ATS Heartbeat issue

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Customers that use HPE 3PAR StoreServs with 3PAR OS 3.2.1 or 3.2.2 and VMware ESXi 5.5 U2 or later might notice one or more of the following symptoms:

  • hosts lose connectivity to a VMFS5 datastore
  • hosts disconnect from the vCenter
  • VMs hang during I/O operations
  • you see messages about lost access to a volume in the vobd.log or on the vCenter Events tab

  • you see ATS miscompare messages in the vmkernel.log

Interestingly, HPE is not the only vendor affected by this; multiple vendors have the same issue. VMware describes this issue in KB2113956, and HPE has published a customer advisory about it.

Workaround

If you are affected and can't update 3PAR OS right away, you can use a workaround: disable the ATS heartbeat for VMFS5 datastores. VMFS3 datastores are not affected by this issue. To disable the ATS heartbeat, you can use a PowerCLI one-liner.
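A sketch of such a one-liner; the cluster name is a placeholder, and the advanced setting name is the one documented in VMware KB2113956.

    # Disable the ATS-only heartbeat on all hosts of a cluster
    Get-AdvancedSetting -Entity (Get-Cluster 'Cluster01' | Get-VMHost) -Name 'VMFS3.UseATSForHBOnVMFS5' |
        Set-AdvancedSetting -Value 0 -Confirm:$false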

Solution

But there is also a solution. Most vendors have published firmware updates for their products. HPE has released

  • 3PAR OS 3.2.2 MU3
  • 3PAR OS 3.2.2 EMU2 P33, and
  • 3PAR OS 3.2.1 EMU3 P45

All three releases of 3PAR OS include enhancements that address the ATS heartbeat issue. Because 3PAR OS 3.2.2 also has some nice enhancements for Adaptive Optimization, I recommend updating to 3PAR OS 3.2.2.

Data Protector: Copy sessions to encrypted devices fail after update to 9.07

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Recently, a customer informed me that copy sessions to encrypted devices failed after he had updated to Data Protector 9.07. The copy sessions failed immediately with an error.

The customer uses tape encryption. The destination for the backups is an HPE StoreOnce, and a post-backup copy creates a copy of the data on tape. Backup to disk was running fine, but the copy to tape failed immediately.

The customer opened a ticket with HPE support and promptly got a hotfix to resolve this issue. HPE has documented this error in QCCR2A69192. If you run into the same issue, please request hotfix QCCR2A69802. This hotfix consolidates QCCR2A69192 and QCCR2A69318 (the BMA ends abnormally during backup/copy to tape).

Thanks to Stefan for the hint!

Redundancy on the first hop – VRRP

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

The Virtual Router Redundancy Protocol (VRRP) was developed in 1998 as an open standard protocol. VRRP is the result of work in the Internet Engineering Task Force (IETF), and it's described in RFC 5798 (VRRPv3). Although designed as an open standard, it uses some patents from Cisco. Its function is comparable to the Cisco Hot Standby Router Protocol (HSRP) or to the Common Address Redundancy Protocol (CARP). VRRP solves a very specific problem at the network edge: it offers highly available virtual router interfaces, or in simple words, a highly available default gateway. Its home is the network edge, and because of this, VRRP is a so-called first hop redundancy protocol. Towards the network core, VRRP loses importance: there, redundancy is primarily offered by dynamic routing protocols and redundant links.

Fun fact: its home is the network edge, but most edge switches don't support VRRP…

As already mentioned, VRRP is comparable to HSRP, CARP, Cisco Gateway Load Balancing Protocol (GLBP), or the Extreme Standby Router Protocol (ESRP).

VRRPv3 supports IPv6 and IPv4.

How does it work?

Pretty easy. All you need is:

  • at least two routers or switches that support VRRP
  • a virtual IP address
  • a virtual MAC address

Okay, maybe it’s not that easy.

The key point is the virtual router. A virtual router is defined on each physical router or switch that should offer high availability for a virtual IP address. A virtual router is defined on a per-VLAN basis, and it consists of a virtual router identifier (VRID), one or more virtual IP addresses, and a statement that declares a router or switch as a master or backup virtual router.

The virtual MAC address is built upon the VRID. The MAC address is always 00-00-5E-00-01-xx, in which xx is the VRID in hexadecimal format.

The interface IP address (or switch virtual interface, SVI) configured for a specific VLAN, and the virtual IP address of a virtual router configured for the same VLAN, must belong to the same subnet.

Master, Backup, Owner

A router or switch can have one of two roles:

  • master virtual router
  • backup virtual router

You can have one master, but multiple backup virtual routers. The master virtual router answers ARP requests and forwards packets for the virtual IP address. The backup virtual routers come into play in case of a failure of the master virtual router. If a backup virtual router doesn't receive packets from the master virtual router for a period longer than three times the advertisement interval, the backup virtual routers assume that the master virtual router is dead. An election process is then initiated to select a new master virtual router.

Master and backup virtual routers communicate via multicast using the multicast IP address 224.0.0.18.

The virtual IP address must also be a real interface IP address on a router or switch. This router or switch is called the IP address owner. The IP address owner always has the priority 255. Because of this, the IP address owner will always become the master virtual router, regardless of what the configuration says.

[Image: vrrp_owner_master_backup | Patrick Terlisten / www.vcloudnine.de / Creative Commons CC0]

As you can see, R1 has the IP 10.0.0.1/24 and the virtual IP address (VIP) is also 10.0.0.1. In this case, R1 is the master virtual router and the IP address owner.

Some vendors allow a no-owner design.

[Image: vrrp_no_owner_backup_backup | Patrick Terlisten / www.vcloudnine.de / Creative Commons CC0]

As you can see, R1 and R2 are both configured as backup virtual routers, but R1 has a higher priority. In this case, R1 will answer ARP requests and will forward packets for 10.0.0.254. Another interesting fact: the VIP is a true VIP, not a real interface IP address of any of the participating routers or switches.
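A minimal Comware sketch of such a no-owner setup on R1; the VLAN interface number, the interface address and the priority value are assumptions, while the VIP 10.0.0.254 follows the figure. R2 would get its own interface address and a lower priority.

    interface Vlan-interface 10
     ip address 10.0.0.1 255.255.255.0
     vrrp vrid 1 virtual-ip 10.0.0.254
     vrrp vrid 1 priority 120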

Not all vendors seem to support such a design, and RFC 5798 has no reference to it. According to some other vendor docs and RFC 5798, VRRP requires that the master virtual router has the virtual IP address configured as a physical IP address, which means that the master virtual router must also be the IP address owner (as mentioned above).

VRRP-E – extended VRRP

Brocade and HPE offer VRRP-E, an extended and proprietary version of VRRP. Extended means that it overcomes limitations of VRRP (according to Brocade and HPE).

VRRP-E doesn't know the concept of master and backup virtual routers. All routers act as backup virtual routers, and a priority value is used to determine which router will act as the master virtual router. Furthermore, VRRP-E doesn't know the concept of the IP address owner.

Brocade states in one of their docs:

The most important difference is that all VRRP-E routers are Backups. There is no Owner router. VRRP-E overcomes the limitations in standard VRRP by removing the Owner.

VRRP and dynamic routing protocols

If VRRP is used together with dynamic routing protocols like OSPF, there's a fact worth mentioning: no dynamic routing protocol likes it when the IP address that is used to build adjacencies moves to another router. It's not the IP address itself that is the problem, but possibly a routing protocol configuration that no longer matches, a changed router ID, or something similar. Because of this, the VRRP VIP must not be used in the configuration of dynamic routing protocols. A no-owner design can have some benefits if you have to use VRRP and dynamic routing protocols on the same router or switch. In this case, the real interface IP addresses, and not the floating VIP, can be used for the dynamic routing protocol configuration.

HPE StoreVirtual – Managers and Quorum

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

HPE StoreVirtual is a scale-out storage platform that is designed to meet the needs of virtualized environments. It's based on LeftHand OS, and because the magic is a piece of software, HPE StoreVirtual is available as HPE ProLiant/BladeSystem-based hardware, or as a Virtual Storage Appliance (VSA) for VMware ESXi, Microsoft Hyper-V and KVM. It comes with an all-inclusive enterprise feature set. This feature set provides

  • Storage clustering
  • Network RAID
  • Thin Provisioning (with support for space reclamation)
  • Snapshots
  • Asynchronous and synchronous replication across multiple sites
  • Automated software upgrades and self-healing storage
  • Adaptive Optimization (Tiering)

The license is always all-inclusive. There is no need to license individual features.

HPE StoreVirtual is not a new product. Hewlett-Packard acquired LeftHand Networks in 2008. The product has had several names since then (HP LeftHand, HP P4000, and for a couple of years now, StoreVirtual), but the core intelligence, LeftHand OS, was constantly developed by HPE. There are rumours that HPE StoreOnce Recovery Manager Central will be available for StoreVirtual soon.

Management Groups & Clusters

A management group is a collection of one or more StoreVirtual P4000 storage systems or StoreVirtual VSAs. A management group represents the highest administrative domain. Administrative users, NTP and e-mail notification settings are configured at the management group level. Clusters are created per management group, and a management group can consist of multiple clusters. A cluster represents a pool of storage from which volumes are created. A volume spans all nodes of a cluster. Depending on the Network RAID level, multiple copies of data are distributed over the storage systems in a cluster. Capacity and IO are expanded by adding more storage systems to a cluster.

As in any cluster, there are mechanisms to keep the cluster working in case a node fails. This is where managers and the quorum come into play.

Managers & Quorums

HPE StoreVirtual is a scale-out storage platform: multiple storage systems form a cluster. As in any cluster, availability must be maintained if one or more cluster nodes fail. To maintain availability, a majority of managers must be running and be able to communicate with each other. This majority is called “a quorum”. This is nothing new: Windows Failover Clusters can also use a majority of nodes to gain a quorum. The same applies to OpenVMS clusters.

A manager is a service running on a storage system. This service is running on multiple storage systems within a cluster, and therefore in a management group. A manager has several functions:

  • Monitor the data replication and the health of the storage systems
  • Resynchronize data after a storage system failure
  • Manage and monitor communication between storage systems in the cluster
  • Coordinate configuration changes (one storage system is the coordinating manager)

This manager is called a “regular manager”. Regular managers run on storage systems. The number of managers is counted per management group. You can have up to 5 managers per management group. Even if you have multiple storage systems and clusters per management group, you can't have more than 5 managers running on storage systems. Sounds like a problem, but it's not. If you have two 3-node clusters in a single management group, you can start managers on 5 of the 6 storage systems. Even if two storage systems fail, the remaining three managers gain a quorum. But if the quorum is lost, all clusters in a management group will be unavailable.

I have two StoreVirtual VSAs running in my lab. As you can see, the management group contains two regular managers, and vsa1 is the coordinating manager.

[Image: storevirtual_manager_1 | Patrick Terlisten / www.vcloudnine.de / Creative Commons CC0]

There are also specialized managers, of which there are three types:

  • Failover Manager (FOM)
  • Quorum Witness (NFS)
  • Virtual Manager

A FOM is a special version of LeftHand OS, and its primary function is to act as a tie-breaker in split-brain scenarios. It's added to a management group. It is mainly used if an even number of storage systems is used in a cluster, or in case of multi-site deployments.

The Quorum Witness was added with LeftHand OS 12.5. The Quorum Witness can only be used in 2-node cluster configurations. It's added to the management group, and it uses a file on an NFS share to provide high availability. Like the FOM, the Quorum Witness is used as a tie-breaker in the event of a failure.

The Virtual Manager is the third type of specialized manager. It can be added to a management group, but it's not active until it is needed to regain quorum. It can be used to regain quorum and maintain access to data in a disaster recovery situation. But you have to start it manually, and you can't add it once the quorum is already lost!

As you can see in this screenshot, I use the Quorum Witness in my tiny 2-node cluster.

[Image: storevirtual_manager_2 | Patrick Terlisten / www.vcloudnine.de / Creative Commons CC0]

Regardless of the number of storage systems in a management group, you should use an odd number of managers. An odd number of managers ensures that a majority is easily maintained. In case of an even number of managers, you should add a FOM. I don't recommend adding a Virtual Manager.

Number of storage systems and recommended number of managers:

  • 1 storage system: 1 regular manager
  • 2 storage systems: 2 regular managers + 1 specialized manager
  • 3 storage systems: 3 regular managers, or 2 regular managers + 1 FOM or Virtual Manager
  • 4 storage systems: 3 regular managers, or 4 regular managers + 1 FOM or Virtual Manager
  • > 5 storage systems: 5 regular managers, or 4 regular managers + 1 FOM or Virtual Manager

In case of a multi-site deployment, I really recommend placing a FOM at a third site. I know that this isn't always possible. If you can't deploy it to a third site, place it at the “primary site”. A multi-site deployment is characterized by the fact that the storage systems of a cluster are located at different sites, but it's still a single cluster! This might lead to a situation where a site failure causes the quorum to be lost. Think about a 4-node cluster with two nodes per site: in this case, the remaining two nodes wouldn't gain quorum (split-brain situation). A FOM at a third site would help to gain quorum in case of a site failure. If you have multiple clusters in a management group, balance the managers across the clusters. I recommend adding a FOM. If you have clusters at multiple sites (a primary and a DR site with remote copy), ensure that the majority of managers is at the primary site.

Final words

It is important to understand how managers, quorum, management groups and clusters are linked. Network RAID protects the data by storing multiple copies of data across the storage systems in a cluster. Depending on the chosen Network RAID level, you can lose disks or even multiple storage systems. But never forget to have a sufficient number of managers (regular and specialized). If the quorum can't be maintained, access to the data will be unavailable. It's not sufficient to focus on data protection; the availability of, or more specifically the access to, the data is at least as important. If you follow these guidelines, you will get a rock-solid, high-performance scale-out storage platform.

I recommend listening to Calvin Zito's podcast (7 Years of 100% uptime with StoreVirtual VSA) and reading Bart Heungens' blog post about his experience with HPE StoreVirtual VSA (100% uptime for 7 years with StoreVirtual VSA? Check!).