
The beginning of a deep friendship: Me & PernixData FVP 2.0

This posting is ~6 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

I’m a bit late, but better late than never. Some days ago I installed PernixData FVP 2.0 in my lab and I’m impressed! Until this installation, solutions such as PernixData FVP or VMware vSphere Flash Read Cache (vFRC) weren’t interesting for me or most of my customers. Some of my customers played around with vFRC, but most of them decided to add flash devices to their primary storage system and use techniques like tiering or flash cache. Especially SMB customers had no chance to use flash or RAM to accelerate their workloads because of tight budgets. With decreasing costs for flash storage, solutions like PernixData FVP and vFRC are getting more interesting for my customers. Another reason was my lab. I simply didn’t have the equipment to play around with that fancy stuff. But things have changed and now I’m ready to give it a try.

The environment

For the moment I don’t have any SSDs in my lab servers, so I have to use RAM for acceleration. I will add some small SSDs later. Fortunately PernixData FVP 2.0 supports NFS and I can use host memory to accelerate my lab workloads.

The installation

I have installed PernixData FVP 2.0 in my lab and deployed the host extension with the vSphere Update Manager to three of my lab hosts.

PernixData FVP consists of three components:

  • Host Extension
  • Management Server running on a Windows Server
  • UI Plugin for the vSphere C# and vSphere Web Client

The management server needs a MS SQL database and it installs the 64-bit version of Oracle Java SE 7. For a PoC or a small deployment, you can use the Express edition of Microsoft SQL Server 2012. I installed the management server on one of my Windows Server 2008 R2 servers. This server also hosts my vSphere Update Manager, so I already had a MS SQL database in place. I had some trouble right after the installation, because I had missed enabling the SQL Browser service. This is clearly stated in the installation guide. So RTFM. ;)

NOTE: The Microsoft® SQL Server® instance requires an enabled TCP/IP protocol even if the database is installed locally. Additional details on enabling TCP/IP using the SQL Server Configuration Manager can be found here. If using a SQL Named Instance, as in the example above, ensure that the SQL Browser Service is enabled and running. Additional details on enabling the SQL Browser Service can be found here.
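
If the SQL Browser service is the culprit (as it was in my case), you can enable and start it from an elevated command prompt instead of clicking through the services console. A minimal sketch, assuming the default service name SQLBrowser of a SQL Server 2012 Express installation:

rem set the SQL Browser service to start automatically and start it
sc config SQLBrowser start= auto
net start SQLBrowser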

After I had fixed this, the management server service started without problems and I was able to install the vSphere C# client plugin. You need the plugin to manage FVP, but the plugin installation is only necessary if you want to use the vSphere C# client. You don’t have to install a dedicated plugin for the vSphere Web Client.

To install the host extension, you can simply import the host extension into the vSphere Update Manager, build a host extension baseline, attach it to the hosts (or the cluster, datacenter object etc.) and remediate them. The hosts will go into the maintenance mode, install the host extension and then exit maintenance mode. A reboot of the hosts is not necessary!
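
If you want to double-check that the host extension really made it onto a host, you can list the installed VIBs from the ESXi shell. A quick sketch – the exact VIB name depends on the FVP release, so the grep pattern below is just an assumption:

~ # esxcli software vib list | grep -i pernix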

Right after the installation, I created my first FVP cluster. The trial period starts with the installation of the management server. There is no special trial license to install. Simply install the management server and deploy the host extension. Then you have 30 days to evaluate PernixData FVP 2.0.

Both steps, the installation of the host extension using the vSphere Update Manager as well as the installation of the management server, are really easy. You can’t configure much, and you don’t need to configure much. You can customize the network configuration (which vMotion network and which ports should be used), you can blacklist VMs and select VADP VMs. Oh, and you can re-enable the “Getting started” screen. Good for the customer, bad for the guy who’s paid to install FVP. ;) Nothing much to do. But I like it. It’s simple and you can quickly get started.

First impressions

My FVP cluster consists of three hosts. Because I don’t have any SSDs for the moment, I use host memory to accelerate the workload. During my tests, 15 VMs were covered by FVP and they ran workloads like Microsoft SQL Server, Microsoft Exchange, some Linux VMs, Windows 7 clients, file services and Microsoft SCOM. I also played with Microsoft Exchange Jetstress 2013 in my lab. A mixed bag of different applications and workloads. A picture says more than a thousand words. This is a screenshot of the usage tab after about one week. Quite impressive, and I can confirm that FVP accelerates my lab in a noticeable way.

pernixdata_results

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I enabled FVP on Monday evening. Check the latency diagram that I took from vCenter. See the latencies dropping on Monday evening? The peaks during the week were caused by my tests.

pernixdata_results_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Final words

Now it’s time to convince my sales colleagues to sell PernixData FVP. Or maybe some customers will read this blog post and ask my sales colleagues for PernixData. ;) I am totally convinced of this solution. You can buy PernixData FVP in different editions:

  • FVP Enterprise: No limit on the number of hosts or VMs
  • FVP Subscription: FVP Enterprise purchased using a subscription model
  • FVP Standard: No limit on the number of hosts or VMs. Perpetual license only. No support for Fault Domains, Adaptive Resource Management and Disaster Recovery integration (only in FVP Enterprise).
  • FVP VDI: Exclusively for VDI (priced on a per VM basis)
  • FVP Essentials Plus: FVP Standard that supports 3 hosts and accelerates up to 100 VMs. This product can only be used with VMware vSphere Essentials (Plus).

If you’re interested in a PoC or demo, don’t hesitate to contact me.

I’d like to thank Patrick Schulz, Systems Engineer DACH at PernixData, for his support! I recommend following him on Twitter and don’t forget to take a look at his blog.

Shady upgrade path for NetApp ONTAP 7-Mode to cDOT

This posting is ~6 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

NetApp has offered Data ONTAP for some time in two flavours:

  • 7-Mode
  • Clustered Data ONTAP (cDOT)

With cDOT, NetApp has rewritten ONTAP nearly from scratch. The aim was to create a storage OS that leverages scale-out architecture and storage virtualization techniques, as well as providing non-disruptive operations. NetApp needed some release cycles to get cDOT to the point where it provides all the features that customers know from 7-Mode. With Data ONTAP 8.3, NetApp has reached this point. Even MetroCluster is now supported. That’s a huge improvement and I’m glad that NetApp has made it. But NetApp wasted no time in cutting off old habits: With ONTAP 8.3, 7-Mode is no longer offered. Okay, no big deal. Customers can migrate from 7-Mode to cDOT. Yes, indeed. But it’s not as easy as you may think.

First of all: You can’t update to cDOT in-place. You have to wipe the nodes and re-install Data ONTAP. That makes it nearly impossible to migrate a running filer without downtime and/or buying or loaning additional hardware. Most customers migrate to cDOT at the same time as they refresh the hardware. The data can be migrated in different ways. NetApp offers the 7-Mode Transition Tool (7MTT). 7MTT leverages SnapMirror to get the data from the 7-Mode to the cDOT filer. But you can also use plain SnapMirror without 7MTT to migrate the data. The switchover from the old to the new volume is an offline process: The accessing servers have to be disconnected, and they must be connected to the new cDOT filer and volume. 7MTT can only migrate NAS data! If you wish to migrate SAN data (LUNs), you have to use NetApp’s DTA2800 appliance or something like VMware Storage vMotion. Other migration techniques, like robocopy etc., can also be used.
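
To give you an idea of the plain SnapMirror way: on the cDOT side, a transition relationship of type TDP (7-Mode to cDOT) is created, initialized and finally broken at cutover, after the accessing servers have been disconnected. The following is only a rough sketch, assuming a 7-Mode filer named filer1 and an SVM named svm1 with an existing destination volume vol_data – check the transition documentation for the exact procedure:

cluster1::> vserver peer transition create -local-vserver svm1 -src-filer-name filer1
cluster1::> snapmirror create -source-path filer1:vol_data -destination-path svm1:vol_data -type TDP
cluster1::> snapmirror initialize -destination-path svm1:vol_data
cluster1::> snapmirror break -destination-path svm1:vol_data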

I know that cDOT is nearly completely rewritten, but such migration paths are a PITA. Especially if customers have just bought new equipment with ONTAP 8.1 or 8.2 and now wish to migrate to 8.3.

Another pain point is NetApp’s MetroCluster. With NetApp MetroCluster customers can deploy active/active clusters between two sites up to 200 km apart. NetApp MetroCluster leverages SyncMirror to duplicate RAID groups to different disks. NetApp MetroCluster is certified for vSphere Metro Storage Cluster (vMSC). One can say that MetroCluster is a bestseller. I know many customers that use MetroCluster with only two nodes. That’s where a 2-node HA pair is cut in the middle and spread across two locations. Let’s assume that a customer is running a stretched MetroCluster with two nodes and Data ONTAP 8.2. The customer wants to migrate to ONTAP 8.3. This means that he has to migrate to cDOT. No problem, because with ONTAP 8.3, cDOT offers support for NetApp MetroCluster. But there are two catches:

  1. You can’t update to cDOT in-place. So either wipe the nodes or get (temporary) additional hardware.
  2. NetApp MetroCluster with cDOT requires a 2-node cluster at each of the two sites (four nodes in total)

Especially when you look at the second point, you will quickly realize that all customers running a 2-node MetroCluster have to purchase additional nodes and disks. Otherwise they can’t use MetroCluster with cDOT. This leaves only one migration path: Use ONTAP 8.2 with 7-Mode and wait until the hardware needs to be refreshed.

This is really bad… This is a shady upgrade path.

EDIT

NetApp is working hard to make the migration path better.

  • In the newest version, 7MTT is capable of migrating LUNs from 7-Mode to cDOT
  • At NetApp Insight 2014 there was an announcement of a 2-node cDOT MetroCluster, which will be released soon.

Thank you Sascha for this update.

vSphere Lab Storage: Synology DS414slim Part 4 – VAAI-NAS Plugin

This posting is ~6 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

Chris Wahl wrote a good blog post about the VAAI-NAS plugin some days ago. I really recommend reading it. Because of his article, I will only describe the installation of the plugin. You can download the plugin from the Synology homepage for free.

There are two ways to install the plugin: With the vSphere Update Manager (VUM) and a host extension baseline, or with ESXCLI.

Plugin installation using the vSphere Update Manager

First of all, we need to import the plugin (host extension) to the patch repository. Open the vSphere C# client, switch to the “Home” screen and click “Update Manager” under “Solutions and Applications”. Switch to the “Patch Repository” tab and click “Import Patches”.

vaai-nas_plugin_installation_vum_01

Import the SYN-ESX-5.5.0-NasVAAIPlugin-1.0-offline_bundle-2092790.zip file. The next step is to create a new baseline, in this case a “Host Extension” baseline.

vaai-nas_plugin_installation_vum_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Scroll down and add the plugin to the baseline (click the down arrow button). Click “Next”.

vaai-nas_plugin_installation_vum_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Check the settings and finish the creation of the baseline.

vaai-nas_plugin_installation_vum_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now attach the baseline to your hosts or cluster.

vaai-nas_plugin_installation_vum_05

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

As you can see, the VUM detected that my hosts are non-compliant, because the host extension is missing.

vaai-nas_plugin_installation_vum_06

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

During the installation process, the plugin is installed and a host reboot is triggered. After a reboot and a scan, all hosts should be compliant.

vaai-nas_plugin_installation_vum_07

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

In addition to the now compliant host status, the NFS datastores should now support hardware acceleration. You can check this in the vSphere C# or vSphere Web Client.

vaai-nas_plugin_installation_vum_08

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Another way to install the plugin is using the ESXCLI.

Install via ESXCLI

Upload the esx-nfsplugin.vib to a local or shared datastore. I placed the file in one of my NFS datastores. Then use ESXCLI to install the VIB.

~ # esxcli software vib install -v /vmfs/volumes/VMDS-NFS-SATA/esx-nfsplugin.vib
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: Synology_bootbank_esx-nfsplugin_1.0-1
VIBs Removed:
VIBs Skipped:
~ #

To enable the plugin, a host reboot is necessary. This way is suitable for standalone hosts. I recommend using VUM whenever possible.
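
After the reboot you can verify the installation and its effect directly from the ESXi shell. A short sketch – the Hardware Acceleration column for the NFS datastores should switch to “Supported”:

~ # esxcli software vib list | grep -i nfsplugin
~ # esxcli storage nfs list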

Final words

I strongly recommend installing the plugin. Using the vSphere Update Manager, the installation is really easy. If you have a single host, try the installation using ESXCLI.

vSphere Lab Storage: Synology DS414slim Part 3 – Storage

This posting is ~6 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

This blog post covers the setup of the volumes and shares. Depending on your disk config, various volume configurations are possible. The DS414slim supports all important RAID levels (Synology Hybrid RAID, Basic, JBOD, RAID 0, 1, 5, 6 and 10). I recommend using RAID 5 if you use more than two disks. I decided to create a RAID 5 with my three Crucial M550 SSDs and use the Seagate Momentus XT as a single disk.

Volume1: RAID 5

nas_volume_setup_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Volume2: Single disk

nas_volume_setup_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Create a NFS share

This disk setup gave me about 880 GB of SSD and 450 GB of SATA storage. To use this storage, we need to create at least one NFS share. Volume1 contains only a single NFS share. Volume2 contains an NFS share and an additional CIFS share that I use for my Veeam backups. Since I use Volume2 only for VM templates, I put both shares, the CIFS and the NFS share, on a single volume and a single disk.

To create a new NFS share, open the Control Panel > Shared Folders and click “Create”. Enter a name, a description and select a volume. Then click “OK”.

nas_setup_share_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Grant the local admin account “Read/ Write” permissions on the new share and click “NFS Permissions”.

nas_setup_share_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Enter the subnet or the IP address of your ESXi host to grant the host(s) access to the NFS share. Select “Map root to admin” and ensure that asynchronous transfer mode is enabled. Click “OK”.

nas_setup_share_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

That’s it. Now you can mount the NFS share to your ESXi hosts. You can mount the NFS share using ESXCLI, the vSphere C# client or the vSphere Web Client. The latter provides the very handy NFS multimount feature. This allows you to mount an NFS share on multiple hosts at the same time. With ESXCLI, you can mount a datastore with this command:

esxcli storage nfs add -H 192.168.200.205 -s /volume1/vmds-ssd -v VMDS-NFS-SSD

To mount an NFS datastore with the vSphere Web Client, simply right-click a cluster and select “New Datastore”. Provide the needed information, and in step 4 you can select one or multiple hosts to which the NFS share should be mounted. Very handy!

Final words

Depending on your disk configuration, you have multiple options to configure volumes. I decided to go for a RAID 5. I strongly recommend using SSDs, because rotating rust would be too slow. I also recommend using NFS instead of iSCSI in a lab environment. It’s easier to set up and faster.

Part 4 of this series covers the installation of the Synology VAAI-NFS plugin: vSphere Lab Storage: Synology DS414slim Part 4 – VAAI-NAS Plugin

vSphere Lab Storage: Synology DS414slim Part 2 – Networking

This posting is ~6 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

The next step is to connect the Synology DS414slim to my lab network. I use two HP 1910 switches in my lab, an 8-port and a 24-port model. The Synology DS414slim has two 1 GbE ports, which can be configured in different ways. I wanted to use both ports actively, so I decided to create a bond.

Create a bond

Browse to the admin website and go to Control Panel > Network > Network Interfaces and select “Create”. Then select “Create Bond”.

nas_networking_settings_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To utilize both NICs, select the first option: “IEEE 802.3ad Dynamic Link Aggregation”. This option requires switches that are capable of creating a LACP LAG! I will show the configuration of a LACP LAG on one of my HP 1910 switches later.

nas_networking_settings_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “IPv4”. I have a dedicated VLAN and subnet for NFS. This subnet is routed in my lab, so that I can reach the DS414slim for management. Make sure that you enable Jumbo Frames and that every component in the network path can handle Jumbo Frames! Switch to the “IPv6” tab.

nas_networking_settings_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I don’t want to use IPv6, so I decided to disable it.

nas_networking_settings_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “OK” and wait until the configuration is finished.

Create a LAG

Now it’s time to create the LAG on the switch. As I already mentioned, I use two HP 1910 switches in my lab. Both are great home lab switches! They are cheap and they can do L3 routing. Browse to the web management, log in, select Network > Link Aggregation and click “Create”.

1910-24g_create_lag_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Enter an interface ID for the LAG. In my case there were no LAGs before, so the ID is 1. Select “Dynamic (LACP Enabled)” and select two ports on the figure of the switch. Check the settings in the “Summary” section and click “Apply”.

1910-24g_create_lag_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now we need to place the LAG in the correct VLAN. Select Network > VLAN and select “Modify Ports”. Select “BAGG1” from “Aggregation ports” and place the LAG as an untagged member in the NFS VLAN (in my case this is VLAN 100). Finish this task by clicking “Apply”.

1910-24g_create_lag_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You can check the success of this operation by switching to the “Details” page and then selecting the NFS VLAN.

1910-24g_create_lag_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Connect the DS414slim with the two patch cables to the ports that are now configured as a LAG. If everything is configured correctly, the DS414slim should be reachable with its new IP in the NFS VLAN.

VMkernel configuration

Make sure that you have at least one VMkernel port configured that is in the same subnet and VLAN as your DS414slim. You can see that the VMkernel port is placed in VLAN 100 and that it has an IP from my NFS subnet.

nas_esxi_vmk_setup_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You should also make sure that the VMkernel port and the vSwitch can handle Jumbo Frames. The HP 1910 switch series has Jumbo Frames enabled by default.
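
If you prefer the ESXi shell over the GUI, the MTU settings can be adjusted and verified with a few commands. A small sketch, assuming the NFS VMkernel port is vmk1 on vSwitch1 (adjust this to your environment); the vmkping at the end tests end-to-end Jumbo Frames against the DS414slim:

~ # esxcli network vswitch standard set -v vSwitch1 -m 9000
~ # esxcli network ip interface set -i vmk1 -m 9000
~ # esxcli network ip interface list
~ # vmkping -d -s 8972 192.168.200.205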

Final words

The network setup depends on your needs. I strongly recommend using a dedicated VLAN and IP subnet for NFS. I also recommend the use of Jumbo Frames. Make sure that all components in the network path can handle Jumbo Frames and that the VLAN membership is set correctly. If possible, use a bond on the Synology and a LAG on the switch.

Part 3 of this series covers the creation of NFS shares: vSphere Lab Storage: Synology DS414slim Part 3 – Storage

vSphere Lab Storage: Synology DS414slim Part 1 – Unboxing and initial setup

This posting is ~6 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

A VMware vSphere cluster is nothing without shared storage. Most of the functions, like VMware HA or VMware vMotion (okay, vMotion is possible without shared storage), can only be used with shared storage. The servers in my lab have Fibre Channel host bus adapters (HBA), but buying an old and cheap Fibre Channel storage system wasn’t an option in my case. This left two options when choosing the right storage protocol: iSCSI or NFS. I tried to virtualize the local storage in my ProLiants with the HP StoreVirtual VSA and DataCore SANsymphony-V, but both were too complex for my needs and a lab environment. Because of this I decided to move the local storage into a small storage system and use iSCSI or NFS. I searched for a while for a suitable system until Chris Wahl started blogging about the Synology DS414slim.

Like Chris, I’m a fan of NFS. His blog posts convinced me that the DS414slim would be a good choice. In addition, the DS414slim is relatively cheap (~250 € incl. taxes in Germany) and Chris showed that the system can achieve good performance when used with SSDs. Fortunately I already had three Crucial M550 SSDs (each with a capacity of 480 GB) and a single Seagate Momentus XT with a capacity of 500 GB, so I bought the DS414slim without disks.

I got the DS414slim for ~250 € at the end of 2014. The price varies between 230 € and 260 € in Germany for the model without disks.

synology_unboxing_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The box contains the DS414slim itself, a stand, two patch cables, screws for the disk trays and a power supply. So it contains everything you need to bring the DS414slim to life.

synology_unboxing_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The system is really small, as you can see in this picture (take the 2.5″ disks as a reference). It goes without saying that you can only use 2.5″ hard disks.

synology_unboxing_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The disks were quickly mounted into the disk trays; the needed screws are included. The initial setup is really easy. Simply power it on, open a browser and go to http://find.synology.com. My DS414slim was running DSM 4.1, but you can update the DSM during the installation process. Simply download DSM 5.1 from the Synology Download Center and provide the update file to the installer. The rest of the setup process is not very spectacular. I will not explain the installation process here in more detail – it’s too simple. :)

The next part of this series covers the network connectivity: vSphere Lab Storage: Synology DS414slim Part 2 – Networking.

HP publishes HP 3PAR OS 3.2.1 MU1 with Thin Deduplication

This posting is ~6 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

On October 28, 2014, HP published HP 3PAR OS 3.2.1 MU1, the first maintenance update for HP 3PAR OS 3.2.1. Beside some fixes, HP enabled in-line deduplication (Thin Deduplication) on all systems with the 3PAR Gen4 ASIC (StoreServ 7000 and 10000). Thin Deduplication does not require any license! It’s included in the base license and every customer can use it without spending money on it.

In-line deduplication is awesome, congrats to HP for making this possible. Deduplication on primary storage is nothing new, but the way HP 3PAR does it is really cool. It’s not a post-process, like NetApp’s deduplication technology. With HP 3PAR, deduplication happens when data enters the array. I took this figure from an HP whitepaper. It shows in a simple way what enables HP 3PAR to do in-line deduplication: the 3PAR Gen4 ASIC (who has criticised 3PAR for using custom ASICs…?). Thin Deduplication is in line with the other 3PAR thin technologies.

thin_dedup

HPE/ hpe.com

Ivan Iannaccone wrote a really good blog post on how Thin Deduplication works. I really recommend reading it! Welcome to Flash 2.0: HP 3PAR Thin Deduplication with Express Indexing

As already mentioned, Thin Deduplication is available on all HP 3PAR systems with the Gen4 ASIC. This is currently the StoreServ 7000 and 10000 series. Even a customer with a “small” 7200 can use Thin Deduplication without additional cost. And who knows what HP Discover will bring us… There are currently some small limitations when using Thin Deduplication. But I’m quite sure that these are only temporary.

  1. Thin Deduplication is currently only available for Virtual Volumes (VV) provisioned from an SSD tier.
  2. You can’t use TDVV with Adaptive Optimization configurations. This is presumably because Thin Deduplication is only available for VV provisioned from an SSD tier. If a region of a TDVV has to be moved to a lower tier, the data has to be rehydrated.
  3. Converting any VV to a Thin Deduplication Virtual Volume (TDVV) can be accomplished with Dynamic Optimization, which is a licensable feature.

You can have up to 256 TDVV per SSD CPG. Deduplication is fully supported with 3PAR replication (sync, async), but the replicated data is not deduplicated. You can use an estimation functionality to estimate the amount of deduplicated data for a TPVV. This estimation can be run online against any volume, regardless of which tier the data resides on.
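
For the CLI-minded, here is a hedged sketch of how deduplicated volumes are handled; the CPG and volume names below are hypothetical. createvv -tdvv creates a new deduplicated volume, tunevv converts an existing volume (this is the Dynamic Optimization path mentioned above) and checkvv -dedup_dryrun should start the deduplication estimate (results appear in showtask). These options were introduced with 3PAR OS 3.2.1, so please verify them against the CLI reference of your release:

createvv -tdvv SSD_CPG vol_dedup 500G
tunevv usr_cpg SSD_CPG -tdvv vol_existing
checkvv -dedup_dryrun vol_existing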

Bugfixes

Beside the new Thin Deduplication feature, HP fixed some bugs in this MU. Here is an excerpt from the release notes:

  • 116690 An issue in QoS and ODX from Windows hosts causes an uncontrolled shutdown.
  • 117123 The version of Bash is updated to resolve the vulnerabilities CVE-2014-6271 and CVE-2014-7169, commonly known as “shellshock”.
  • 114947 The total capacity of the volumes in a Peer Persistence Remote Copy group is limited to 32 TB.
  • 114244 Loss of host persona capabilities after upgrading to HP 3PAR OS 3.2.1 GA from HP 3PAR OS 3.1.2 MU5 or HP 3PAR OS 3.1.2 MU3 + P41.

For more details take a look into the Release Notes for HP 3PAR OS 3.2.1 GA/ MU1. If you’re interested in the basics concepts of HP 3PAR, take a look into the HP 3PAR StoreServ Storage Concepts Guide for HP 3PAR OS 3.2.1.

HP 3PAR Peer Persistence for Microsoft Windows Servers and Hyper-V

This posting is ~6 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

Some days ago I wrote two blog posts (part I and part II) about VMware vSphere Metro Storage Cluster (vMSC) with HP 3PAR Peer Persistence. Because I wrote about it in the first of the two blog posts, allow me to reuse a short description of what Peer Persistence is and what it does from that blog post:

HP 3PAR Peer Persistence adds functionality to HP 3PAR Remote Copy software and HP 3PAR OS, so that two 3PAR storage systems form a nearly continuous storage system. HP 3PAR Peer Persistence allows you to create a VMware vMSC configuration and to achieve a new quality of availability and reliability.

You can transfer the concept of a Metro Storage Cluster to Microsoft Hyper-V. There is nothing VMware-specific in that concept.

With the GA of 3PAR OS 3.2.1 in September 2014, HP announced a lot of new features. The most frequently mentioned feature is Adaptive Flash Cache. If you’re interested in more details about Adaptive Flash Cache, you will like the AFC deep dive on 3pardude.com. A little lost is the newly added support for Peer Persistence with Hyper-V. This section is taken from the release notes of 3PAR OS 3.2.1:

3PAR Peer Persistence Software supports Microsoft Windows 2008 R2 and Microsoft Windows 2012 R2 Server and Hyper-V, in addition to the existing support for VMware. HP 3PAR Peer Persistence software enables HP 3PAR StoreServ systems located at metropolitan distances to act as peers to each other, presenting a nearly continuous storage system to hosts and servers connected to them. This capability allows to configure a high availability solution between two sites or data centers where failover and failback remains completely transparent to the hosts and applications running on those hosts.

3PAR Peer Persistence with Microsoft Windows Server and Hyper-V

Currently supported are Windows Server 2008 R2 and Server 2012 R2 and the corresponding versions of Hyper-V. This table summarizes the currently supported environments.

HP 3PAR OS   Host OS                   Host connectivity    Remote Copy connectivity
3.2.1        Windows Server 2008 R2    FC, FCoE, iSCSI      RCIP, RCFC
3.2.1        Windows Server 2012 R2    FC, FCoE, iSCSI      RCIP, RCFC

At first glance, it seems that Microsoft Windows Server and Hyper-V support more options in terms of host and Remote Copy connectivity. This is not true! With 3PAR OS 3.2.1, HP added support for FCoE and iSCSI host connectivity, as well as support for RCIP, for VMware too. At this point, there is no winner. Check HP SPOCK for the latest support statements.

With 3PAR OS 3.2.1 a new host persona (Host Persona 15) was added for Microsoft Windows Server 2008, 2008 R2, 2012 and 2012 R2. This host persona must be used in Peer Persistence configurations. This is comparable to Host Persona 11 for ESXi. The setup and requirements for VMware and Hyper-V are similar. For a transparent failover a Quorum Witness is needed, and it has to be deployed onto a Windows Server 2012 R2 Hyper-V host (not 2008, 2008 R2 or 2012!). Peer Persistence operates in the same manner as with VMware: The Virtual Volumes (VV) are grouped into Remote Copy Groups (RCG) and mirrored synchronously between a source and a destination storage system. Source and destination volumes share the same WWN and are presented using the same LUN ID, and the paths to the destination storage are marked as standby. Check part I of my Peer Persistence blog series for more detailed information about how Peer Persistence works.
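
Creating the host entries with the correct persona can be done in the Management Console or on the CLI. A minimal sketch with a hypothetical host name and WWNs:

createhost -persona 15 hyperv01 10000000C9876543 10000000C9876544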

Final words

It was only a question of time until HP released support for Hyper-V with Peer Persistence. I would have assumed that HP would make more fuss about it, but AFC seems to be the killer feature in 3PAR OS 3.2.1. I’m quite sure that there are some companies out there that have waited eagerly for the support of Hyper-V with Peer Persistence. If you have any further questions about Peer Persistence with Hyper-V, don’t hesitate to contact me.

VMware vSphere Metro Storage Cluster with HP 3PAR Peer Persistence – Part II

This posting is ~6 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

The first part of this (short) blog series covered the basics of VMware vSphere Metro Storage Cluster (vMSC) with HP 3PAR Peer Persistence. This second part will cover the basic tasks to configure Peer Persistence. Please note that this blog post relies on the features and supported configurations of 3PAR OS 3.1.3! This is essential to know, because 3.1.3 got some important enhancements with respect to 3PAR Remote Copy.

Fibre-Channel zoning

One of the very first tasks is to create zones between the Remote Copy Fibre Channel (RCFC) ports. I used two ports from a quad-port FC adapter for Remote Copy. This matrix shows the zone members in each Fibre Channel fabric. 3PAR OS 3.1.3 supports up to four RCFC ports per node. Earlier versions of 3PAR OS only support one RCFC port per node.

N:S:P   0:2:1      0:2:2      1:2:1      1:2:2
0:2:1   Fabric 1
0:2:2              Fabric 2
1:2:1                         Fabric 1
1:2:2                                    Fabric 2

RCFC port setup

After the zoning it’s time to set up the RCFC ports. In this case the RCFC ports will detect the partnering port by themselves. I assume that the ports are unconfigured. Otherwise it’s necessary to take the ports offline. The command controlport is used to configure a port with a specific port role.

controlport config rcfc -ct point -f 0:2:1
controlport config rcfc -ct point -f 0:2:2
controlport config rcfc -ct point -f 1:2:1
controlport config rcfc -ct point -f 1:2:2

You can do the same with the 3PAR Management Console. After doing this on both StoreServs, you can check your success with

showrctransport -rcfc

or with the 3PAR Management Console.

3par_remotecopy_port_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Remote Copy setup

Now it’s time to create the Remote Copy configuration. The screenshots below show the configuration of a bidirectional 1-to-1 Remote Copy setup. Start the wizard and select the configuration.

3par_remotecopy_setup_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

In the next step, the RCFC ports have to be configured and paired together. Simply connect the ports by selecting a port and pulling a connection to the other port. Both ports have to be in the same zone.

3par_remotecopy_setup_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

A Remote Copy Group groups Virtual Volumes (VV) together to ensure I/O consistency. To create a bidirectional Remote Copy configuration we need two Remote Copy Groups: one from A > B and a second from B > A. I recommend enabling the “Auto Recover” option. This option is only visible if the “Show advanced options” tickbox is enabled.

3par_remotecopy_setup_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This screenshot shows the bidirectional Remote Copy configuration. Each StoreServ acts as the primary array for one Remote Copy Group and as the secondary array for the primary Remote Copy Group of the other StoreServ.

3par_remotecopy_setup_4

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If you already created volumes, you can add the volumes in this step. I will show this step later.

3par_remotecopy_setup_5

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The last page shows a summary of the configured options. Simply click “Finish” and proceed with the next step.

3par_remotecopy_setup_6

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

After creating the volumes it’s necessary to add them to the Remote Copy groups. Right click the Remote Copy Group and select “Edit Remote Copy Group…”.

3par_rcg_setup_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “Next”.

3par_rcg_setup_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Select the volumes to add and check the box “Create new volume”. I recommend using CPGs with the same characteristics as on the source system. I also recommend using the same CPG as User and Copy CPG. Click “Add” and repeat this step for each volume that should belong to the Remote Copy Group. At the end click “Next”…

3par_rcg_setup_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

… and “Finish”.

3par_rcg_setup_4

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Repeat the steps for the second Remote Copy Group and the volumes on the secondary StoreServ.

3par_rcg_setup_5

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This screenshot shows the result of the configuration process.

3par_rcg_setup_6

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

A very handy feature of 3PAR OS 3.1.3 is that it creates a Virtual Volume Set for each Remote Copy Group. When a VV is added to the Remote Copy Group, it automatically belongs to the Virtual Volume Set and will be exported to the hosts. These screenshots show the Virtual Volume Sets on both StoreServs.

3par_rcg_setup_7

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

3par_rcg_setup_8

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Please ensure that both Virtual Volume Sets on both StoreServs are exported to all hosts (I recommend using host sets). If everything has been presented correctly, 8 paths should be visible for each VMFS datastore: 4 active paths to the primary and 4 standby paths to the secondary StoreServ.
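
You can verify this from the ESXi shell: esxcli lists the state of every path for a given device. A short sketch with a placeholder NAA ID – replace it with the ID of one of your Peer Persistence volumes:

~ # esxcli storage core path list -d naa.60002ac0000000000000000000000001 | grep "Group State"
~ # esxcli storage nmp device list -d naa.60002ac0000000000000000000000001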

3par_rcg_presentation

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Automate the failover

There are two requirements to automate the failover:

  • Quorum Witness
  • Enabled “Auto Failover” for Remote Copy Groups

The Quorum Witness is a VMware appliance that needs to be deployed at a third site. The setup is really easy. Simply deploy the OVA and power it on. A short menu guides you through some setup tasks, like setting a password, assigning an IP address etc. When the Quorum Witness is available on the network, create a Peer Persistence configuration. Enter the IP address and select the targets for which the Quorum Witness should act as a witness.

3par_pp_setup_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If everything went fine, the “Quorum Status” should be “Started”.

3par_pp_setup_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now the automatic failover for the Remote Copy Groups can be enabled.

3par_rcg_auto_failover_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Select the groups and click the right arrow to enable automatic failover for the selected Remote Copy Groups.

3par_rcg_auto_failover_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0
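
The same policies can presumably also be set from the CLI with the group policy options of setrcopygroup; a hedged sketch for a hypothetical group named RCG-A (verify the exact policy names in the CLI reference of your 3PAR OS release):

setrcopygroup pol auto_failover RCG-A
setrcopygroup pol auto_recover RCG-A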

That’s it! To test the failover you can use the 3PAR Management Console or this CLI command:

setrcopygroup switchover -t 7200-EDV1

With this command all secondary Remote Copy Groups on StoreServ 7200-EDV2 will become primary Remote Copy Groups. If everything is configured accordingly, you will notice no or only a short I/O interruption during the failover. An automatic failover will only occur if a StoreServ loses all RCFC links AND the connection to the Quorum Witness. Otherwise there will be no automatic failover! The parameter “switchover” is only used for transparent and controlled failovers. It’s issued on the primary storage array. The parameter “failover” is automatically issued from the secondary storage system in case of a failover situation.
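
To check the role and the sync state of the Remote Copy Groups before and after the switchover, showrcopy is your friend:

showrcopy groups
showrcopy links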

Final words

The basic tasks are:

  • create zones for the RCFC ports
  • configure the RCFC ports on each node
  • create a bidirectional 1-to-1 Remote Copy setup with Remote Copy Groups on each StoreServ
  • add volumes to the Remote Copy Groups
  • present Virtual Volume Sets (that were automatically created based on the Remote Copy Groups) to the hosts
  • deploy the Quorum Witness
  • create a Peer Persistence configuration and configure Quorum Witness for the StoreServs that belong to the Peer Persistence Configuration
  • Enable “Automatic Failover” for the presented Remote Copy Groups

This is only a very rough overview of the configuration of a 3PAR Peer Persistence setup. I strongly recommend putting some thought into the design and planning of the Peer Persistence setup.

VMware vSphere Metro Storage Cluster with HP 3PAR Peer Persistence – Part I

This posting is ~6 years old. You should keep this in mind. IT is a short-lived business. This information might be outdated.

The title of this blog post mentions two terms that have to be explained. First, a VMware vSphere Metro Storage Cluster (or VMware vMSC) is a configuration of a VMware vSphere cluster that is based on a stretched storage cluster. Secondly, HP 3PAR Peer Persistence adds functionality to HP 3PAR Remote Copy software and HP 3PAR OS, so that two 3PAR storage systems form a nearly continuous storage system. HP 3PAR Peer Persistence allows you to create a VMware vMSC configuration and to achieve a new quality of availability and reliability.

VMware vSphere Metro Storage Cluster

In a vMSC, servers and storage are geographically distributed over short or medium-long distances. vMSC goes far beyond the well-known synchronous mirror between two storage systems. Virtualization hosts and storage belong to the same cluster, but they are geographically dispersed: They are stretched between two sites. This setup allows you to move virtual machines from one site to another (vMotion and Storage vMotion) without downtime (downtime avoidance). With a stretched cluster, technologies such as VMware HA can help to minimize the time of a service outage in case of a disaster (disaster avoidance).

The requirements for a vMSC are:

  • Storage connectivity using Fibre Channel/ Fibre Channel over Ethernet (FCoE), NFS or iSCSI
  • max. 10 ms round-trip time (RTT) for the ESXi management network (> 10 ms is supported with vSphere Enterprise Plus – Metro vMotion)
  • max. 5 ms round-trip time (RTT) for the synchronous storage replication links
  • at least 250 Mbps per concurrent vMotion on the vMotion network

The complexity of the storage requirements is not the maximum round-trip time – it’s the requirement that a datastore must be accessible from both sites. This means that a host in Site A must be able to access (read & write) a datastore on a storage system in Site B and vice versa. vMSC knows two different methods of host access configuration:

  • Uniform host access configuration
  • Non-Uniform host access configuration

With a uniform host access configuration, the storage on both sites can be accessed by all hosts. LUNs from both storage systems are zoned to all hosts and the Fibre Channel fabric is stretched across the site links. The following figure was taken from the “Implementing vSphere Metro Storage Cluster using HP 3PAR Peer Persistence” technical whitepaper and shows a typical uniform host access configuration.

uniform-host-access

HPE/ hpe.com

The second possible configuration is the non-uniform host access configuration, in which the hosts only access the site-local storage system. The Fibre Channel fabrics are not stretched across the inter-site links. The following figure was taken from the “Implementing vSphere Metro Storage Cluster using HP 3PAR Peer Persistence” technical whitepaper and shows a typical non-uniform host access configuration. If a storage system fails, the ESXi hosts in that datacenter will lose connectivity and the virtual machines will fail. VMware HA will take care that the VMs are restarted in the other datacenter.

non-uniform-host-access

HPE/ hpe.com

Another possible non-uniform setup uses stretched Fibre Channel fabrics and some kind of virtual LUN. A LUN is mirrored between two storage systems and can be accessed from both sites. The storage systems take care of the consistency of the data. This figure was taken from the “VMware vSphere Metro Storage Cluster Case Study” technical whitepaper.

non-uniform-host-access-stretched-fabric

VMware/ vmware.com

The uniform host access configuration is currently used most frequently.

Regardless of the implementation, it’s useful to think about data locality. Let’s assume that a host in datacenter A is running a VM that is housed in a datastore on a storage system in datacenter B. As long as you’re using a stretched fabric between the sites, this is a potential scenario. What happens to the storage I/O of this VM? Right, it will travel across the inter-site links from datacenter A to datacenter B. To avoid this, you can use DRS groups and rules.

Examples for uniform and non-uniform host access configurations are:

Uniform host access configuration:

  • vSphere 5.x support with NetApp MetroCluster (2031038)
  • Implementing vSphere Metro Storage Cluster (vMSC) using EMC VPLEX (2007545)
  • Implementing vSphere Metro Storage Cluster using HP 3PAR StoreServ Peer Persistence (2055904)
  • Implementing vSphere Metro Storage Cluster using HP LeftHand Multi-Site (2020097)
  • Implementing vSphere Metro Storage Cluster using Hitachi Storage Cluster for VMware vSphere (2073278)
  • Implementing vSphere Metro Storage Cluster using IBM System Storage SAN Volume Controller (2032346)

Non-uniform host access configuration:

  • Implementing vSphere Metro Storage Cluster (vMSC) using EMC VPLEX (2007545)

HP 3PAR Remote Copy, Peer Persistence & the Quorum Witness

HP 3PAR Peer Persistence uses synchronous Remote Copy and Asymmetric Logical Unit Access (ALUA) to realize a metro cluster configuration that allows host access from both sites. 3PAR Virtual Volumes (VV) are synchronously mirrored between two 3PAR StoreServs in a Remote Copy 1-to-1 relationship. The relationship may be uni- or bidirectional, which allows the StoreServs to act mutually as failover systems. To create a vMSC configuration with HP 3PAR StoreServ storage systems, some requirements have to be fulfilled.

  • Firmware on both StoreServ storage systems must be 3.1.2 MU2 or newer (I recommend 3.1.3)
  • a remote copy 1-to-1 synchronous relationship
  • 2.6 ms or less round-trip time (RTT)
  • Quorum Witness VM must run at a 3rd site and must be reachable from each 3PAR StoreServ
  • same WWN and LUN ID for each source and target virtual volume
  • VMware ESXi 5.0, 5.1 or 5.5
  • Hosts must be created with Hostpersona 11
  • Hosts must be zoned to both 3PAR StoreServ storage systems (this requires a stretched Fibre Channel fabric between the sites)
  • iSCSI or FCoE for host connectivity is supported with 3PAR OS 3.2.1. Versions below 3PAR OS 3.2.1 only support FC for host connectivity with Peer Persistence
  • Both 3PAR StoreServ storage systems must be licensed for Remote Copy and Peer Persistence (I recommend to license the Replication Suite)

A VV can be a source or a target volume. Source VVs belong to a primary remote copy group, target virtual volumes belong to a secondary remote copy group. VVs are grouped into remote copy groups to ensure I/O consistency, so all VVs that require write order consistency should belong to the same remote copy group. Even VVs that don’t need write order consistency should belong to a remote copy group, just to simplify administration tasks. A typical uniform vMSC configuration with 3PAR StoreServs will have remote copy groups replicating in both directions, so both StoreServs act as source and target in a bidirectional synchronous remote copy relationship. It’s important to understand that the source and target volumes share the same WWN and are presented using the same LUN ID. The ESXi hosts must use Hostpersona 11. During the process of creating the remote copy groups, the target volumes can be created automatically. This ensures that the source and target volumes use the same WWN. When the volumes from the source and target StoreServ are presented, the paths to the target StoreServ are marked as “standby”. In case of a failover the paths become active and the I/O continues. The Quorum Witness is a RHEL-based appliance that communicates with the StoreServs and triggers the failover in some specific scenarios. This table was taken from the “Implementing vSphere Metro Storage Cluster using HP 3PAR Peer Persistence” technical whitepaper. As you can see, the automatic failover is only triggered in one specific scenario.

Scenario                                                                            Replication stopped   Automatic failover   Host I/O impacted
Array to Array remote copy links failure                                            Y                     N                    N
Single site to Quorum Witness network failure                                       N                     N                    N
Single site to Quorum Witness network and Array to Array remote copy link failure   Y                     Y                    N
Both sites to Quorum Witness network failure                                        N                     N                    N
Both sites to Quorum Witness network and Array to Array remote copy link failure    Y                     N                    Y

Summary

VMware vSphere Metro Storage Cluster (vMSC) is a special configuration of a stretched compute and storage cluster. A vMSC is usually implemented to avoid downtime. A vMSC configuration makes it possible to move virtual machines, and thus workloads, between sites. Beyond this, vMSC can avoid downtime caused by a failed storage system. Using HP 3PAR Remote Copy, 3PAR Peer Persistence and the Quorum Witness, two HP 3PAR StoreServ storage systems can form a uniform vMSC configuration. This allows movement of VMs/workloads between sites and also a transparent failover between storage systems in case of a failure of one of the StoreServs.

Part II of this small series will cover the configuration of Remote Copy and Peer Persistence.