Tag Archives: 3par

Backup from a secondary HPE 3PAR StoreServ array with Veeam Backup & Replication

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

When taking a backup with Veeam Backup & Replication, a VM snapshot is created to get a consistent state of the VM. The snapshot is taken prior to the backup, and it is removed after the successful backup of the VM. The snapshot grows during its lifetime, and you should keep in mind that you need some free space on the datastore for snapshots. This can be a problem, especially if multiple VMs are backed up at the same time and the VMs share the same datastore.
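
To keep an eye on this, a quick PowerCLI check for open snapshots can help. This is just a minimal sketch, assuming an existing PowerCLI installation; the vCenter name is a placeholder:

# List all open VM snapshots with their size and creation date (largest first)
Connect-VIServer -Server 'vcenter.lab.local'   # placeholder vCenter name
Get-VM | Get-Snapshot |
    Select-Object VM, Name, Created, @{ N = 'SizeGB'; E = { [math]::Round($_.SizeGB, 2) } } |
    Sort-Object SizeGB -Descending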

Benefit of storage snapshots

If your underlying storage supports the creation of storage snapshots, Veeam offers an additional way to create a consistent state of the VMs. In this case, a storage snapshot is taken, which is presented to the backup proxy and is then used to back up the data. As you can see: no VM snapshot is taken.

Now one more thing: If you have replication or a synchronous mirror between two storage systems, Veeam can do this operation on the secondary array. This is pretty cool, because it takes load off your primary storage!

Backup from a secondary HPE 3PAR StoreServ array

Last week I was able to try something new: backup from a secondary HPE 3PAR StoreServ array. A customer has two HPE 3PAR StoreServ 8200 in a Peer Persistence setup, a HPE StoreOnce, and a physical Veeam backup server, which also acts as Veeam proxy. Everything is attached to a pretty nice 16 Gb dual-fabric SAN. The customer uses Veeam Backup & Replication 9.5 U3a. The data was taken from the secondary 3PAR StoreServ and pushed via FC into a Catalyst store on the StoreOnce. Using the Catalyst API allows my customer to use synthetic full backups, because their creation is offloaded to the StoreOnce. This setup is dramatically faster and better than the prior solution based on Micro Focus Data Protector. Okay, that backup solution was designed at another time, with other priorities and requirements; it was a perfect fit at the time it was designed.

This blog post from Veeam pointed me to this new feature: backup from a secondary HPE 3PAR StoreServ array. Until I found this post, the plan was to use “traditional” storage snapshots, taken from the primary 3PAR StoreServ.

With this feature enabled, Veeam takes the snapshot on the 3PAR StoreServ that hosts the synchronously mirrored virtual volume. This graphic was created by Veeam and shows the backup workflow.

Veeam/ Backup process/ Copyright by Veeam

My tests showed that it’s blazing fast, pretty easy to set up, and it takes unnecessary load off the primary storage.

In essence, there are only three steps to do:

  • add both 3PARs to Veeam
  • add the registry value and restart the Veeam Backup Server Service
  • enable the usage of storage snapshots in the backup job

How to enable this feature?

To enable this feature, you have to add a single registry value on the Veeam backup server, and afterwards restart the Veeam Backup Server service.

  • Location: HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\
  • Name: Hp3PARPeerPersistentUseSecondary
  • Type: REG_DWORD (0 False, 1 True)
  • Default value: 0 (disabled)
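
If you prefer to script the change, a minimal PowerShell sketch could look like this. It has to run on the Veeam backup server; the service name is an assumption and may differ in your installation:

# Create/overwrite the registry value that enables backup from the secondary 3PAR (1 = enabled)
New-ItemProperty -Path 'HKLM:\SOFTWARE\Veeam\Veeam Backup and Replication' `
    -Name 'Hp3PARPeerPersistentUseSecondary' -PropertyType DWord -Value 1 -Force

# Restart the Veeam backup service so the new value is picked up (service name assumed)
Restart-Service -Name 'VeeamBackupSvc'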

Thanks to Pierre-Francois from Veeam for sharing his knowledge with the community. Read his blog post Backup from a secondary HPE 3PAR StoreServ array for additional information.

Checking the 3PAR Quorum Witness appliance

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Two 3PAR StoreServs running in a Peer Persistence setup lost the connection to the Quorum Witness appliance. The appliance is an important part of a 3PAR Peer Persistence setup, because it acts as a tie-breaker in a split-brain scenario.

While analyzing this issue, I saw this message in the 3PAR Management Console:

3PAR Quorum Witness Status

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

In addition to that, the customer got e-mails that the 3PAR StoreServ arrays lost the connection to the Quorum Witness appliance. In my case, the CouchDB process died. A restart of the appliance brought it back online.

How to check the Quorum Witness appliance?

You can check the status of the appliance with a simple web request. The documentation shows a simple test based on curl. You can run this directly from the bash shell of the appliance.

# curl http://10.0.0.99:8080
{"couchdb":"Welcome","version":"1.0.4"}

But you can also use the PowerShell cmdlet Invoke-WebRequest.

PS C:\Users\patrick> Invoke-WebRequest -Uri http://10.0.0.99:8080


StatusCode        : 200
StatusDescription : OK
Content           : {"couchdb":"Welcome","version":"1.0.4"}

RawContent        : HTTP/1.1 200 OK
                    Content-Length: 40
                    Cache-Control: must-revalidate
                    Content-Type: text/plain;charset=utf-8
                    Date: Mon, 30 Jan 2017 08:31:37 GMT
                    Server: CouchDB/1.0.4 (Erlang OTP/R14B04)

                    {"couchdb...
Forms             : {}
Headers           : {[Content-Length, 40], [Cache-Control, must-revalidate], [Content-Type, text/plain;charset=utf-8],
                    [Date, Mon, 30 Jan 2017 08:31:37 GMT]...}
Images            : {}
InputFields       : {}
Links             : {}
ParsedHtml        : mshtml.HTMLDocumentClass
RawContentLength  : 40

If you add /witness to the URL, you can test the access to the database, which is used for Peer Persistence.

PS C:\Users\patrick> Invoke-WebRequest -Uri http://10.0.0.99:8080/witness


StatusCode        : 200
StatusDescription : OK
Content           : {"db_name":"witness","doc_count":5,"doc_del_count":4,"update_seq":149557915,"purge_seq":0,"compact_
                    running":false,"disk_size":48988254,"instance_start_time":"1485763322826940","disk_format_version":
                    5,...
RawContent        : HTTP/1.1 200 OK
                    Content-Length: 234
                    Cache-Control: must-revalidate
                    Content-Type: text/plain;charset=utf-8
                    Date: Mon, 30 Jan 2017 08:36:38 GMT
                    Server: CouchDB/1.0.4 (Erlang OTP/R14B04)

                    {"db_nam...
Forms             : {}
Headers           : {[Content-Length, 234], [Cache-Control, must-revalidate], [Content-Type,
                    text/plain;charset=utf-8], [Date, Mon, 30 Jan 2017 08:36:38 GMT]...}
Images            : {}
InputFields       : {}
Links             : {}
ParsedHtml        : mshtml.HTMLDocumentClass
RawContentLength  : 234
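
If you want to check both endpoints in one go, for example from a monitoring script, a small PowerShell sketch like this can be used (the IP address is taken from the example above):

# Query the CouchDB root and the witness database and report the HTTP status
foreach ($uri in 'http://10.0.0.99:8080', 'http://10.0.0.99:8080/witness') {
    try {
        $response = Invoke-WebRequest -Uri $uri -UseBasicParsing -TimeoutSec 5
        Write-Host "$uri -> $($response.StatusCode) $($response.StatusDescription)"
    }
    catch {
        Write-Warning "$uri is not reachable: $($_.Exception.Message)"
    }
}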

If you get a connection error, check if the beam process is running.

# netstat -tulpen | grep 8080
tcp        0      0 0.0.0.0:8080                0.0.0.0:*                   LISTEN      495        10726      1643/beam

If not, reboot the appliance. This can be done without downtime. The appliance only comes into play if a failover occurs.

HPE 3PAR OS updates that fix VMware VAAI ATS Heartbeat issue

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Customers that use HPE 3PAR StoreServs with 3PAR OS 3.2.1 or 3.2.2 and VMware ESXi 5.5 U2 or later might notice one or more of the following symptoms:

  • hosts lose connectivity to a VMFS5 datastore
  • hosts disconnect from the vCenter
  • VMs hang during I/O operations
  • you see messages like this one in the vobd.log or the vCenter Events tab
Lost access to volume <uuid><volume name> due to connectivity issues. Recovery attempt is in progress and the outcome will be reported shortly
  • you see the following messages in the vmkernel.log
ATS Miscompare detected between test and set HB images at offset XXX on vol YYY

2015-11-20T22:12:47.194Z cpu13:33467)ScsiDeviceIO: 2645: Cmd(0x439dd0d7c400) 0x89, CmdSN 0x2f3dd6 from world 3937473 to dev "naa.50002ac0049412fa" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0.

Interestingly, HPE is not the only vendor affected by this; multiple vendors have the same issue. VMware described this issue in KB2113956. HPE has published a customer advisory about this.

Workaround

If you have trouble and you can’t update yet, you can use this workaround: Disable the ATS heartbeat for VMFS5 datastores. VMFS3 datastores are not affected by this issue. To disable the ATS heartbeat, you can use this PowerCLI one-liner:

Get-AdvancedSetting -Entity hostname -Name VMFS3.UseATSForHBOnVMFS5 | Set-AdvancedSetting -Value 0 -Confirm:$false
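
If you want to check or change the setting for all hosts of a cluster at once, a hedged PowerCLI sketch could look like this (the cluster name is just an example):

# Show the current ATS heartbeat setting for every host in the cluster
Get-Cluster -Name 'Production' | Get-VMHost |
    Get-AdvancedSetting -Name 'VMFS3.UseATSForHBOnVMFS5' |
    Select-Object Entity, Name, Value

# Disable the ATS heartbeat on all hosts of the cluster
Get-Cluster -Name 'Production' | Get-VMHost |
    Get-AdvancedSetting -Name 'VMFS3.UseATSForHBOnVMFS5' |
    Set-AdvancedSetting -Value 0 -Confirm:$false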

Solution

But there is also a solution. Most vendors have published firmware updates for their products. HPE has released

  • 3PAR OS 3.2.2 MU3
  • 3PAR OS 3.2.2 EMU2 P33, and
  • 3PAR OS 3.2.1 EMU3 P45

All three releases of 3PAR OS include enhancements to improve ATS heartbeat handling. Because 3PAR OS 3.2.2 also has some nice enhancements for Adaptive Optimization, I recommend updating to 3PAR OS 3.2.2.

Chicken-and-egg problem: 3PAR VSP 4.3 MU1 & 3PAR OS 3.2.1 MU3

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Since Monday I have been helping a customer put two HP 3PAR StoreServ 7200c systems into operation. Both StoreServs came factory-installed with 3PAR OS 3.2.1 MU3, which has been available since July 2015. Usually, the first thing you do is to deploy the 3PAR Service Processor (SP). These days this is (in most cases) a Virtual Service Processor (VSP). The SP is used to initialize the storage system. Later, the SP reports to HP and is used for maintenance tasks like shutting down the StoreServ or installing updates and patches. There are only a few cases in which you start the Out-of-the-Box (OOTB) procedure of the StoreServ without having a VSP. I deployed two VSPs (one for each StoreServ), started the Service Processor Setup Wizard, entered the StoreServ serial number and got this message:

3par_vsp_error

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

“No uninitialized storage system with the specified serial number could be found”. I double-checked the network setup, VLANs, switch ports etc. The error occurred with BOTH VSPs and BOTH StoreServs. I started the OOTB on both StoreServs using the serial console. My plan was to import the StoreServs into the VSPs later. To realize this, I tried to set up the VSP using the console interface. I logged in as root (no password) and chose the third option: Setup SP with original SP ID.

3par_vsp_error_console

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Not the worst idea, but unsuccessful. I entered the SP ID, the SP networking details, a lot of other stuff, the serial number of the StoreServ, the IP address and credentials, and finally got this message:

StoreServ HP 3PAR OS version validation failed - unable to retrieve StoreServ's HP 3PAR OS version.

Hmm… I knew that P003 was mandatory for VSP 4.3 MU1 and 3PAR OS 3.2.1 MU3. But could the missing patch cause this behaviour? I called HP and explained my guess. After a short remote session this morning, the support case was escalated to the 2nd level. While waiting for the 2nd level support, I was thinking about a solution. I knew that earlier releases of the VSP don’t check the serial number of the StoreServ or the version of the 3PAR OS. So I grabbed a copy of the VSP 4.1 MU2 with P009 and deployed the VSP. This time, I was able to finish the “Moment of Birth” (MOB). This release also asked for the serial number, the IP address and login credentials, but it didn’t check the version of the 3PAR OS (or it didn’t care that it was unknown). At this point I had a functional SP running software release 4.1 MU2. I upgraded the SP to 4.3 MU1 with the physical SP ISO image and installed P003 afterwards. Now I was able to import the StoreServ 7200c with 3PAR OS 3.2.1 MU3.

I don’t know how HP covers this during the installation service. AFAIK there is no VSP 4.3 MU1 with P003 available, and I guess HP ships all new StoreServs with 3PAR OS 3.2.1 MU3. If you upgrade from an earlier 3PAR OS release, please make sure that you install P003 before you update the 3PAR OS. The StoreServ Refresh matrix clearly says that P003 is mandatory. The release notes for the HP 3PAR Service Processor (SP) Software SP-4.3.0 MU1 P003 also indicate this:

SP-4.3.0.GA-24 P003 is a mandatory patch for SP-4.3.0.GA-24 and 3.2.1.MU3.

I’m curious to hear back from HP 2nd level support. I will update this blog post when I have more information.

EDIT

Together with the StoreServ 8000 series, HP released a new version of the 3PAR Service Processor. The new version 4.4 is necessary for the new StoreServ models, but it also supports 3PAR OS < 3.2.2 (3.2.2 is the GA release for the new StoreServ models). So if you get a new StoreServ 7000 with 3PAR OS 3.2.1 MU3, simply deploy SP version 4.4.

Tiering? Caching? Why it’s important to differentiate between them.

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Some days ago I talked to a colleague from our sales team and we discussed different solutions for a customer. I will spare you the details, but we discussed different solutions and came across PernixData FVP, HP 3PAR Adaptive Optimization, HP 3PAR Adaptive Flash Cache and DataCore SANsymphony-V. And then the question of all questions came up: “What is the difference?”.

Simplify, then add Lightness

Let’s talk about tiering. To make it simple: Tiering moves a block from one tier to another, depending on how often the block is accessed in a specific time. A tier is a class of storage with specific characteristics, for example ultra-fast flash, enterprise-grade SAS drives or even nearline drives. Characteristics can be the drive type, the used RAID level or a combination of characteristics. A 3-tier storage design can consist of only one drive type, organized in different RAID levels. Tier 1 can be RAID 1 and tier 3 can be RAID 6, but all tiers use enterprise-grade 15k SAS drives. But you can also mix drive types and RAID levels, for example tier 1 with flash, tier 2 with 15k SAS in a RAID 5 and tier 3 with SAS-NL and RAID 6. Each time a block is accessed, the block “heats up”. If it’s hot enough, it is moved one tier up. If it’s accessed less often, the block “cools down” and at a specific point, the block is moved a tier down. If a tier is full, colder blocks have to be moved down and hotter blocks have to be moved up. It’s a bit simplified, but products like DataCore SANsymphony-V with Auto-Tiering or HP 3PAR Adaptive Optimization work this way.

Let’s talk about caching. With caching, a block is only copied to a faster region, which can be flash or even DRAM. The original block isn’t moved, only a copy of the accessed block is copied to a faster medium. If this block is accessed, the data is served from the faster medium. This also works for write I/O. If a block is written, the data is written to the faster medium and will be moved later to the underlying, slower medium. You can’t store block copies indefinitely, so less-accessed blocks have to be removed from the cache if they are not accessed, or if the cache fills up. Examples for caching solutions are PernixData FVP, HP 3PAR Adaptive Flash Cache or NetApp Flash Pool (and also Flash Cache). I deliberately left the storage controller cache out of this list. All of the listed caching technologies (except NetApp Flash Cache) can do write-back caching. I wouldn’t recommend read-cache only solutions like VMware vSphere Flash Read Cache, except in two situations: Your workload is focused on read I/O, and/ or you already own a vSphere Enterprise Plus license and you do not want to spend extra money.

Tiering or caching? What to choose?

Well… it depends. What is the main goal when using these techniques? Accelerating workloads and making the best use of scarce and expensive storage (commonly flash storage).

Regardless of the workload, tiering will need some time to let the often-accessed blocks heat up. Some vendors may anticipate this partially by always writing data to the fastest tier. But I don’t think that this is what I would call efficient. One benefit of tiering is that you can have more than two tiers. You can have a small flash tier, a bigger SAS tier and a really big SAS-NL tier. Usually you will see a 10% flash / 40% SAS / 50% SAS-NL distribution. But as I also mentioned: You don’t have to use flash in a tiered storage design. That’s a plus. On the downside, tiering can make mirrored storage designs complex. Heat maps aren’t mirrored between storage systems. If you fail over to your secondary storage, all blocks need to heat up again. I know that vendors are working on that. HP 3PAR and DataCore SANsymphony-V currently have a “performance problem” after a failover. It’s only fair to mention it. Here are two examples of products I know well that both offer tiering: In a HP 3PAR Adaptive Optimization configuration, data is always written to the tier from which the virtual volume was provisioned. This explains the best practice to provision new virtual volumes from the middle tier (Tier 1 CPG). DataCore SANsymphony-V uses the performance class in the storage profile of a virtual disk to determine where data should be written. Depending on the performance class, data is written to the highest available tier (tier affinity is taken into account). Don’t get confused by the tier numbering: Some vendors use tier 0 as the highest tier, others may start counting at tier 1.

Caching is more “spontaneous”. New blocks are written to the cache (usually flash storage, but it can also be DRAM). If a block is read from disk, it’s placed in the cache. Depending on the cache size, you can hold a lot of data. You can lose the cache, but you can’t lose the data in this case. The cache only holds block copies (okay, okay, written blocks shouldn’t be acknowledged until they are in a second cache/ host/ $WHATEVER). If the cache is gone, it’s relatively quickly filled up again. You usually can’t have more than two “tiers”. You can have flash and you can have rotating rust. Exception: PernixData FVP can also use host memory. I would call this an additional half tier. ;) Nutanix uses a tiered storage design in their hyper-converged platform: Flash storage is used as read/ write cache, cost-effective SATA drives are used to store the data. Caching is great if you have unpredictable workloads. Another interesting point: You can cache at different places in the stack. Take a look at PernixData FVP and HP 3PAR Adaptive Flash Cache. PernixData FVP sits next to the hypervisor kernel. HP 3PAR AFC works at the storage controller level. FVP is awesome to accelerate VM workloads, but what if I have physical database servers? At this point, HP 3PAR AFC can play to its advantages. Because you usually have only two “tiers”, you will need more flash storage compared to a tiered storage design, especially if you mix flash and SAS-NL/ SATA.

Final words

Is there a rule when to use caching and when to use tiering? I don’t think so. You may use the workload as an indicator. If it’s more predictable, you should take a closer look at a tiered storage design, in particular if the customer wants to separate data from different classes. If you mostly have to deal with unpredictable workloads, take a closer look at caching. There is no law that prevents combining caching and tiering. In the end, the customer’s requirements are the key. Do the math. Sometimes caching can outperform tiering from a cost perspective, especially if you mix flash and SAS-NL/ SATA in the right proportion.

What to consider when implementing HP 3PAR with iSCSI in VMware environments

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Some days ago a colleague and I implemented a small 3-node VMware vSphere Essentials Plus cluster with a HP 3PAR StoreServ 7200c. Costs are always a sore point in SMB environments, so it should come as no surprise that we used iSCSI in this design. I had some doubts about using iSCSI with a HP 3PAR StoreServ, mostly because of the performance and complexity. IMHO iSCSI is more complex to implement than Fibre Channel (FC). But in this case I had to deal with it.

iSCSI options for HP 3PAR StoreServ 7000

If you decide to use iSCSI with a HP 3PAR StoreServ, you have only one option: Adding a 2-port 10GbE iSCSI/ FCoE adapter to each node. There is no other iSCSI option. The available 2-port 10GbE ethernet adapter and 4-port 1GbE ethernet adapter can’t be used for iSCSI connectivity. These adapters can only be used with the HP 3PAR File Persona Software Suite.

The 2-port 10GbE iSCSI/ FCoE adapter is a converged network adapter (CNA) and supports iSCSI or Fibre Channel over Ethernet (FCoE). The adapter can only be used for host connectivity, and you have to select iSCSI or FCoE. You can’t use the CNA for Remote Copy. You have to add a CNA to each node in a node pair. You can have up to four 10 GbE ports in a 3PAR 7200 series, or up to eight 10 GbE ports in a 3PAR 7400 series.

Network connectivity

10 GbE means 10 GbE, there is no way to connect the CNA to 1 GbE transceivers. The 2-port 10GbE iSCSI/ FCoE adapter includes two 10 GbE SR SFP+ transceivers. With 3PAR OS 3.1.3 and later, you can use Direct Attach Copper (DAC) cables for network connectivity, but not for FCoE. Make sure that you use the correct cables for your switch! HP currently offers the following cables in different lengths:

  • X242 for HP ProVision switches
  • X240 for HP Comware switches
  • HP B-series SFP+ to SFP+ Active Copper for Brocade switches, or
  • HP C-series SFP+ to SFP+ Active Copper for Cisco switches

If you use any other switch vendor, I strongly recommend using the included 10 GbE SR SFP+ transceivers and 10 GbE SR SFP+ transceivers on the switch side. In this case you have to use fiber cables to connect the 3PAR to the network. In all other cases I recommend using DAC for network connectivity.

It’s a common practice to run iSCSI traffic in its own VLAN. Theoretically a single iSCSI VLAN is sufficient, but I recommend using two iSCSI VLANs with a 3PAR, one for each iSCSI subnet. Why two subnets? The answer is easy: Persistent Ports. Persistent Ports allows a host port to assume the identity (port WWN for Fibre Channel or IP address for iSCSI ports) of a failed port while retaining its own identity. This minimizes I/O disruption during failures or upgrades. Persistent Ports uses the NPIV feature for Fibre Channel/ FCoE and IP address failover for iSCSI. With the release of 3PAR OS 3.1.3, Persistent Ports became available for iSCSI as well. A hard requirement of Persistent Ports is that the same host ports on the nodes of a node pair must be connected to the same IP network on the fabric. An example clarifies this:

Host port (N:S:P)   VLAN ID   IP subnet
0:2:1               11        192.168.173.0/27
0:2:2               12        192.168.173.32/27
1:2:1               11        192.168.173.0/27
1:2:2               12        192.168.173.32/27

The use of jumbo frames with iSCSI is an often-discussed topic. It’s often argued that complexity and performance gain would be disproportionate. I’m a bit biased: I think that the use of jumbo frames is a must when using iSCSI. I always configure jumbo frames for vMotion, so the cost of configuring jumbo frames is low for me in an iSCSI environment. Don’t forget to configure jumbo frames on all devices in the path: VMkernel ports, vSwitches, physical switches and the 3PAR CNAs.
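
On the vSphere side, this can be done with PowerCLI. A minimal sketch, assuming a standard vSwitch named vSwitch1 and an iSCSI VMkernel port vmk2 (host and port names are only examples):

# Set the MTU of the vSwitch that carries the iSCSI traffic
Get-VMHost -Name 'esx01.lab.local' | Get-VirtualSwitch -Name 'vSwitch1' |
    Set-VirtualSwitch -Mtu 9000 -Confirm:$false

# Set the MTU of the iSCSI VMkernel port
Get-VMHost -Name 'esx01.lab.local' | Get-VMHostNetworkAdapter -VMKernel -Name 'vmk2' |
    Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false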

Always use at least two physical switches for iSCSI connectivity. This concept is comparable to a Fibre Channel dual-fabric SAN. I like the concept of switch aggregation (the wording may vary between vendors). I often work with HP Networking and I like the HP 2920 or 5820 switch series. These switches can form stacks in which multiple physical switches act as a single virtual device. These stacks provide redundancy and operational simplicity. In combination with two VLANs you can build a powerful, redundant and resilient iSCSI SAN.

Host port configuration

The CNA ports can only be used for host connectivity; there is no way to use them for disk or Remote Copy connectivity. Before you can use the ports for host connectivity, you have to select iSCSI or FCoE as storage protocol.

3par_iscsi_cna_config_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Host and Virtual Volume sets

You can organize hosts and volumes in host sets and volume sets. I recommend creating a host set for all ESXi hosts in a vSphere cluster. I also recommend creating a volume set to group all volumes that should be presented to a host or host set. When exporting Virtual Volumes (VV), you can export a volume set to a host set. If you add a host to the host set, the host will see all volumes in the volume set. If you add a volume to a volume set, all hosts in the host set will see the newly added volume. This simplifies host and volume management and reduces the possibility of human error.

3par_iscsi_host_set_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

3par_iscsi_vv_set_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Custom SATP rule for ESXi 5.x and ESXi 6.0

3PAR OS 3.1.2 introduced the new Host Persona 11 for VMware, which enables asymmetric logical unit access (ALUA). Besides Host Persona 11, Host Persona 6 for VMware is also available, but it doesn’t support ALUA. 3PAR OS 3.1.3 is the last release that includes support for Host Persona 6 AND 11 for VMware. All later releases only include Host Persona 11. I strongly recommend using Host Persona 11 for VMware. You should also add a custom SATP rule. This rule can be added by using ESXCLI.

# esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O iops=1 -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule"

This custom rule sets VMW_PSP_RR as the default PSP and evenly distributes the I/Os over all active paths by switching to the next active path after each I/O.
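
To verify that the rule is applied to the 3PAR devices (existing LUNs have to be reclaimed or the host rebooted), a small PowerCLI sketch can list the path selection policy of all 3PARdata LUNs; the hostname is a placeholder:

# List all 3PARdata LUNs of a host together with the configured path selection policy
Get-VMHost -Name 'esx01.lab.local' | Get-ScsiLun -LunType disk |
    Where-Object { $_.Vendor -eq '3PARdata' } |
    Select-Object CanonicalName, CapacityGB, MultipathPolicy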

iSCSI discovery

Before you can use an exported volume, the host needs to discover the volume from the target. You have to configure the iSCSI discovery in the settings of the software iSCSI initiator. Typically you will use the dynamic discovery process. In this case, the initiator uses a SendTargets request to get a list of available targets. After adding the IP addresses of the 3PAR CNAs to the dynamic discovery list, the static discovery list is filled automatically. In case of multiple subnets, the dynamic discovery process can carry some caveats. Chris Wahl has highlighted this problem in his blog post “Exercise Caution Using Dynamic Discovery for Multi-Homed iSCSI Targets”. My colleague Claudia and I stumbled over this behaviour in our last 3PAR project. Removing the IP addresses from the dynamic discovery will result in the loss of the static discovery entries: after a reboot, the entries in the static discovery list will be gone and therefore no volumes will be discovered. I added a comment to Chris’ blog post and he was able to confirm this behaviour. The solution is to use the dynamic discovery to get a list of targets, and then add the targets manually to the static discovery list.
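
This manual step can also be scripted. The following PowerCLI sketch is only an illustration: the host name, the target IP address and the target IQN are placeholders and have to be replaced with the values learned from the dynamic discovery:

# Get the software iSCSI adapter of the host
$esx = Get-VMHost -Name 'esx01.lab.local'
$hba = Get-VMHostHba -VMHost $esx -Type iScsi | Where-Object { $_.Model -match 'Software' }

# Show the targets that are currently in the static discovery list
Get-IScsiHbaTarget -IScsiHba $hba -Type Static | Select-Object Address, Port, Type

# Add a target manually to the static discovery list (address and IQN are placeholders)
New-IScsiHbaTarget -IScsiHba $hba -Type Static -Address '192.168.173.1' -Port 3260 `
    -IScsiName 'iqn.2000-05.com.3pardata:20210002ac001234'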

Final words

HP 3PAR with iSCSI is an equivalent solution to HP 3PAR with Fibre Channel/ FCoE. Especially in SMB environments, iSCSI is a good choice to bring 3PAR goodness to the customer at a reasonable price.

HP Discover: New 3PAR StoreServ models

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

HP has brushed up the StoreServ 7000 series and updated the StoreServ 7200 and 7400 models. HP also added a new model to the 7000 series: The StoreServ 7440c.

New 3PAR StoreServ models:

Model            3PAR StoreServ 7200c     3PAR StoreServ 7400c       3PAR StoreServ 7440c
Nodes            2                        2 or 4                     2 or 4
CPUs             2x 6-Core 1.8 GHz        2x or 4x 6-Core 1.8 GHz    2x or 4x 8-Core 2.3 GHz
Gen4 ASICs       2                        2 or 4                     2 or 4
On-Node Cache    40 GB                    48 – 96 GB                 96 – 192 GB
Max Drives       8 – 240 (max 120 SSDs)   8 – 576 (max 240 SSDs)     8 – 960 (max 240 SSDs)
Max Enclosures   0 – 9                    0 – 22                     0 – 38

Old 3PAR StoreServ models

Model            3PAR StoreServ 7200      3PAR StoreServ 7400
Nodes            2                        2 or 4
CPUs             2x 4-Core 1.8 GHz        2x or 4x 6-Core 1.8 GHz
Gen4 ASICs       2                        2 or 4
On-Node Cache    24 GB                    32 – 64 GB
Max Drives       8 – 240 (max 120 SSDs)   8 – 480 (max 240 SSDs)
Max Enclosures   0 – 9                    0 – 22

Especially the 7440c is a monster: It scales up to 38 enclosures and 960 drives (just to compare: a 3PAR StoreServ 10400 also scales up to 960 drives!). Check the QuickSpecs for more details.

As you can see, the new models got new CPUs, more on-node cache and they support more disks. In addition, they got support for a new dual-port 16 Gb FC HBA, a dual-port 10 GbE and a quad-port 1 GbE NIC. You may ask yourself: Why 10 GbE and 1 GbE NICs (not iSCSI/ FCoE)? The answer is: HP 3PAR File Persona Software Suite for HP 3PAR StoreServ. This software license adds support for SMB, NFS, NDMP and Object Storage to the nodes of the 7200c, 7400c and 7440c. I assume that this license will not be available for the “older” 7200 and 7400. But this is only a guess. With this license you will be able to use 3PAR StoreServ natively with block and file storage protocols. I think this is a great chance to win more deals against EMC and NetApp.

Enrico Signoretti has written a very good article about the new announcements: HP 3PAR, 360° storage. He has the same view as me about the new HP 3PAR File Persona. Philip Sellers has written about another new announcement: Flat Backup direct from 3PAR to StoreOnce. Also check Craig Kilborn’s blog post about the new HP 3PAR StoreServ SSMC. Last, but not least: the 3pardude on the new 3PAR announcements.

HP publishes HP 3PAR OS 3.2.1 MU1 with Thin Deduplication

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

On October 28, 2014, HP published HP 3PAR OS 3.2.1 MU1, the first maintenance update for HP 3PAR OS 3.2.1. Besides some fixes, HP enabled in-line deduplication (Thin Deduplication) on all systems with the 3PAR Gen4 ASIC (StoreServ 7000 and 10000). Thin Deduplication does not require any license! It’s included in the base license and every customer can use it without spending money on it.

In-line deduplication is awesome, congrats to HP for making this possible. Deduplication on primary storage is nothing new, but the way HP 3PAR does it is really cool. It’s not a post-process, like NetApp’s deduplication technology. With HP 3PAR, deduplication happens when data enters the array. I took this figure from a HP whitepaper. It shows in a simple way what enables HP 3PAR to do in-line deduplication: the 3PAR Gen4 ASIC (who has criticised 3PAR for using custom ASICs…?). Thin Deduplication is in line with the other 3PAR thin technologies.

thin_dedup

HPE/ hpe.com

Ivan Iannaccone wrote a really good blog post on how Thin Deduplication works. I really recommend reading it! Welcome to Flash 2.0: HP 3PAR Thin Deduplication with Express Indexing

As already mentioned, Thin Deduplication is available on all HP 3PAR systems with the Gen4 ASIC. This is currently the StoreServ 7000 and 10000 series. Even a customer with a “small” 7200 can use Thin Deduplication without additional cost. And who knows what HP Discover will bring us… There are currently some small limitations when using Thin Deduplication. But I’m quite sure that these are only temporary.

  1. Thin Deduplication is currently only available for Virtual Volumes (VV) provisioned from an SSD tier.
  2. You can’t use TDVV with an Adaptive Optimization configuration. This is presumably because Thin Deduplication is only available for VVs provisioned from an SSD tier. If a region of a TDVV has to be moved to a lower tier, the data has to be rehydrated.
  3. Converting from any VV to Thin Deduplication Virtual Volume (TDVV) can be accomplished with Dynamic Optimization, which is a licensable feature.

You can have up to 256 TDVVs per SSD CPG. Deduplication is fully supported with 3PAR replication (sync, async), but the replicated data is not deduplicated. You can use an estimation functionality to estimate the amount of deduplicated data for TPVVs. This estimation can be run online against any volume, regardless of which tier the data resides on.

Bugfixes

Besides the new Thin Deduplication feature, HP fixed some bugs in this MU. Here is an excerpt from the release notes:

  • 116690 An issue in QoS and ODX from Windows hosts causes an uncontrolled shutdown.
  • 117123 The version of Bash is updated to resolve the vulnerabilities CVE-2014-6271 and CVE-2014-7169
    commonly known as “shellshock”
  • 114947 The total capacity of the volumes in a Peer Persistence Remote Copy group is limited to 32 TB.
  • 114244 Loss of host persona capabilities after upgrading to HP 3PAR OS 3.2.1 GA from HP 3PAR OS 3.1.2
    MU5 or HP 3PAR OS 3.1.2 MU3 + P41.

For more details take a look into the Release Notes for HP 3PAR OS 3.2.1 GA/ MU1. If you’re interested in the basic concepts of HP 3PAR, take a look into the HP 3PAR StoreServ Storage Concepts Guide for HP 3PAR OS 3.2.1.

HP 3PAR Peer Persistence for Microsoft Windows Servers and Hyper-V

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Some days ago I wrote two blog posts (part I and part II) about VMware vSphere Metro Storage Cluster (vMSC) with HP 3PAR Peer Persistence. Because I wrote about it in the first of the two blog posts, allow me to borrow a short description of what Peer Persistence is and what it does from that blog post:

HP 3PAR Peer Persistence adds functionalities to the HP 3PAR Remote Copy software and HP 3PAR OS, so that two 3PAR storage systems form a nearly continuous storage system. HP 3PAR Peer Persistence allows you to create a VMware vMSC configuration and to achieve a new quality of availability and reliability.

You can transfer the concept of a Metro Storage Cluster to Microsoft Hyper-V. There is nothing VMware-specific in that concept.

With the GA of 3PAR OS 3.2.1 in September 2014, HP announced a lot of new features. The most frequently mentioned feature is Adaptive Flash Cache. If you’re interested in more details about Adaptive Flash Cache, you will like the AFC deep dive on 3pardude.com. A bit lost in all of this is the newly added support for Peer Persistence with Hyper-V. This section is taken from the release notes of 3PAR OS 3.2.1:

3PAR Peer Persistence Software supports Microsoft Windows 2008 R2 and Microsoft Windows 2012 R2 Server and Hyper-V, in addition to the existing support for VMware. HP 3PAR Peer Persistence software enables HP 3PAR StoreServ systems located at metropolitan distances to act as peers to each other, presenting a nearly continuous storage system to hosts and servers connected to them. This capability allows to configure a high availability solution between two sites or data centers where failover and failback remains completely transparent to the hosts and applications running on those hosts.

3PAR Peer Persistence with Microsoft Windows Server and Hyper-V

Currently supported are Windows Server 2008 R2 and Server 2012 R2 and the corresponding versions of Hyper-V. This table summarizes the currently supported environments.

HP 3PAR OS   Host OS                  Host connectivity   Remote Copy connectivity
3.2.1        Windows Server 2008 R2   FC, FCoE, iSCSI     RCIP, RCFC
3.2.1        Windows Server 2012 R2   FC, FCoE, iSCSI     RCIP, RCFC

At first glance, it seems that Microsoft Windows Server and Hyper-V support more options in terms of host and Remote Copy connectivity. This is not true! With 3PAR OS 3.2.1, HP added support for FCoE and iSCSI host connectivity, as well as support for RCIP, for VMware too. At this point, there is no winner. Check HP SPOCK for the latest support statements.

With 3PAR OS 3.2.1 a new host persona (Host Persona 15) was added for Microsoft Windows Server 2008, 2008 R2, 2012 and 2012 R2. This host persona must be used in Peer Persistence configurations. This is comparable to Host Persona 11 for ESXi. The setup and requirements for VMware and Hyper-V are similar. For a transparent failover a Quorum Witness is needed, and it has to be deployed onto a Windows Server 2012 R2 Hyper-V host (not 2008, 2008 R2 or 2012!). Peer Persistence operates in the same manner as with VMware: The Virtual Volumes (VV) are grouped into Remote Copy Groups (RCG) and mirrored synchronously between a source and a destination storage system. Source and destination volumes share the same WWN. They are presented using the same LUN ID, and the paths to the destination storage are marked as standby. Check part I of my Peer Persistence blog series for more detailed information about how Peer Persistence works.

Final words

It was only a question of time until HP released support for Hyper-V with Peer Persistence. I would have assumed that HP would make more fuss about it, but AFC seems to be the killer feature in 3PAR OS 3.2.1. I’m quite sure that there are some companies out there that have been waiting eagerly for the support of Hyper-V with Peer Persistence. If you have any further questions about Peer Persistence with Hyper-V, don’t hesitate to contact me.

VMware vSphere Metro Storage Cluster with HP 3PAR Peer Persistence – Part II

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

The first part of this (short) blog series covered the basics of VMware vSphere Metro Storage Cluster (vMSC) with HP 3PAR Peer Persistence. This second part will cover the basic tasks to configure Peer Persistence. Please note that this blog post relies on the features and supported configurations of 3PAR OS 3.1.3! This is essential to know, because 3.1.3 got some important enhancements with respect to 3PAR Remote Copy.

Fibre-Channel zoning

One of the very first tasks is to create zones between the Remote Copy Fibre Channel (RCFC) ports. I used two ports from a quad-port FC adapter for Remote Copy. This table shows the Fibre Channel fabric each RCFC port is zoned in. 3PAR OS 3.1.3 supports up to four RCFC ports per node. Earlier versions of 3PAR OS only support one RCFC port per node.

RCFC port (N:S:P)   Fabric
0:2:1               Fabric 1
0:2:2               Fabric 2
1:2:1               Fabric 1
1:2:2               Fabric 2

RCFC port setup

After the zoning, it’s time to set up the RCFC ports. In this case the RCFC ports will detect the partnering port by themselves. I assume that the ports are unconfigured. Otherwise it’s necessary to take the ports offline. The command controlport is used to configure a port with a specific port role.

controlport config rcfc -ct point -f 0:2:1
controlport config rcfc -ct point -f 0:2:2
controlport config rcfc -ct point -f 1:2:1
controlport config rcfc -ct point -f 1:2:2

You can do the same with the 3PAR Management Console. After the RCFC port configuration, you can check your success on both StoreServs with showrctransport

showrctransport -rcfc

or with the 3PAR Management Console.

3par_remotecopy_port_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Remote Copy setup

Now it’s time to create the Remote Copy configuration. The screenshots below show the configuration of a bidirectional 1-to-1 Remote Copy setup. Start the wizard and select the configuration.

3par_remotecopy_setup_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

In the next step, the RCFC ports have to be configured and paired together. Simply connect the ports by selecting a port and pulling a connection to the other port. Both ports have to be in the same zone.

3par_remotecopy_setup_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

A Remote Copy Group groups Virtual Volumes (VV) together to ensure I/O consistency. To create a bidirectional Remote Copy configuration, we need two Remote Copy Groups: one from A > B and a second from B > A. I recommend enabling the “Auto Recover” option. This option is only visible if the “Show advanced options” tickbox is enabled.

3par_remotecopy_setup_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This screenshot shows the bidirectional Remote Copy configuration. Each StoreServ acts as primary array for one Remote Copy Group and as secondary array for the primary Remote Copy Group on the other StoreServ.

3par_remotecopy_setup_4

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If you already created volumes, you can add the volumes in this step. I will show this step later.

3par_remotecopy_setup_5

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The last page shows a summary of the configured options. Simply click “Finish” and proceed with the next step.

3par_remotecopy_setup_6

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

After creating the volumes it’s necessary to add them to the Remote Copy groups. Right click the Remote Copy Group and select “Edit Remote Copy Group…”.

3par_rcg_setup_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “Next”.

3par_rcg_setup_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Select the volumes to add and check the box “Create new volume”. I recommend using CPGs with the same characteristics as on the source system. I also recommend using the same CPG as User and Copy CPG. Click “Add” and repeat this step for each volume that should belong to the Remote Copy Group. At the end click “Next”…

3par_rcg_setup_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

… and “Finish”.

3par_rcg_setup_4

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Repeat the steps for the second Remote Copy Group and the volumes on the secondary StoreServ.

3par_rcg_setup_5

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This screenshot shows the result of the configuration process.

3par_rcg_setup_6

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

A very handy feature of 3PAR OS 3.1.3 is that it creates a Virtual Volume Set for each Remote Copy Group. When a VV is added to the Remote Copy Group, it automatically belongs to the Virtual Volume Set and will be exported to the hosts. These screenshots show the Virtual Volume Sets on both StoreServs.

3par_rcg_setup_7

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

3par_rcg_setup_8

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Please ensure that both Virtual Volume Sets on both StoreServs are exported to all hosts (I recommend using host sets). If everything has been presented correctly, 8 paths should be visible for each VMFS datastore: 4 active paths to the primary and 4 standby paths to the secondary StoreServ.

3par_rcg_presentation

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0
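
The path count and path states can also be checked with PowerCLI. A small sketch (the hostname is an example) that counts active and standby paths per 3PARdata LUN:

# Count active and standby paths for every 3PARdata LUN on a host
Get-VMHost -Name 'esx01.lab.local' | Get-ScsiLun -LunType disk |
    Where-Object { $_.Vendor -eq '3PARdata' } |
    ForEach-Object {
        $paths = Get-ScsiLunPath -ScsiLun $_
        [PSCustomObject]@{
            Device  = $_.CanonicalName
            Active  = @($paths | Where-Object { $_.State -eq 'Active' }).Count
            Standby = @($paths | Where-Object { $_.State -eq 'Standby' }).Count
        }
    }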

Automate the failover

There are two requirements to automate the failover:

  • Quorum Witness
  • Enabled “Auto Failover” for Remote Copy Groups

The Quorum Witness is a VMware appliance that needs to be deployed at a third site. The setup is really easy: Simply deploy the OVA and power it on. A short menu guides you through some setup tasks, like setting a password, assigning an IP address etc. When the Quorum Witness is available on the network, create a Peer Persistence configuration. Enter the IP address and select the targets for which the Quorum Witness should act as a witness.
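
The deployment itself can also be done with PowerCLI. A minimal sketch, assuming hypothetical file, host and datastore names (the OVA/ OVF file name depends on the version you downloaded):

# Deploy the Quorum Witness appliance at the third site and power it on
Import-VApp -Source 'C:\Temp\HP3PAR-QuorumWitness.ova' `
    -VMHost (Get-VMHost -Name 'esx-site3.lab.local') `
    -Datastore (Get-Datastore -Name 'DS_SITE3') `
    -Name 'QuorumWitness' |
    Start-VM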

3par_pp_setup_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If everything went fine, the “Quorum Status” should be “Started”.

3par_pp_setup_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now the automatic failover for the Remote Copy Groups can be enabled.

3par_rcg_auto_failover_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Select the groups and click the right arrow to enable automatic failover for the selected Remote Copy Groups.

3par_rcg_auto_failover_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

That’s it! To test the failover you can use the 3PAR Management Console or this CLI command:

setrcopygroup switchover -t 7200-EDV1

With this command all secondary Remote Copy Groups on StoreServ 7200-EDV2 will become primary Remote Copy Groups. If everything is configured accordingly, you will notice no or only a short I/O interruption during the failover. An automatic failover will only occur if a StoreServ loses all RCFC links AND the connection to the Quorum Witness. Otherwise there will be no automatic failover! The parameter “switchover” is only used for transparent and controlled failovers. It’s issued on the primary storage array. The parameter “failover” is automatically issued by the secondary storage system in case of a failover situation.

Final words

The basic tasks are:

  • create zones for the RCFC ports
  • configure the RCFC ports on each node
  • create a bidirectional 1-to-1 Remote Copy setup with Remote Copy Groups on each StoreServ
  • add volumes to the Remote Copy Groups
  • present Virtual Volume Sets (that were automatically created based on the Remote Copy Groups) to the hosts
  • deploy the Quorum Witness
  • create a Peer Persistence configuration and configure Quorum Witness for the StoreServs that belong to the Peer Persistence Configuration
  • enable “Automatic Failover” for the presented Remote Copy Groups

This is only a very rough overview of the configuration of a 3PAR Peer Persistence setup. I strongly recommend putting some brains into the design and planning of the Peer Persistence setup.