Category Archives: Storage

vSphere Lab Storage: Synology DS414slim Part 4 – VAAI-NAS Plugin

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

Chris Wahl wrote a good blog post about the VAAI-NAS plugin a few days ago, and I really recommend reading it. Because of his article, I will only describe the installation of the plugin here. You can download the plugin from the Synology homepage for free.

There are two ways to install the plugin: With the vSphere Update Manager (VUM) and a host extension baseline, or with ESXCLI.

Plugin installation using the vSphere Update Manager

First of all, we need to import the plugin (host extension) to the patch repository. Open the vSphere C# client, switch to the “Home” screen and click “Update Manager” under “Solutions and Applications”. Switch to the “Patch Repository” tab and click “Import Patches”.

vaai-nas_plugin_installation_vum_01

Import the SYN-ESX-5.5.0-NasVAAIPlugin-1.0-offline_bundle-2092790.zip file. The next step is to create a new baseline, in this case a “Host Extension” baseline.

vaai-nas_plugin_installation_vum_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Scroll down and add the plugin to the baseline (click the down arrow button). Click “Next”.

vaai-nas_plugin_installation_vum_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Check the settings and finish the creation of the baseline.

vaai-nas_plugin_installation_vum_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now attach the baseline to your hosts or cluster.

vaai-nas_plugin_installation_vum_05

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

As you can see, the VUM detected that my hosts are non-compliant, because the host extension is missing.

vaai-nas_plugin_installation_vum_06

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

During the remediation, the plugin is installed and a host reboot is triggered. After the reboot and a scan, all hosts should be compliant.

vaai-nas_plugin_installation_vum_07

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

In addition to the now compliant host status, the NFS datastores should now support hardware acceleration. You can check this in the vSphere C# or vSphere Web Client.

vaai-nas_plugin_installation_vum_08

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Another way to install the plugin is to use ESXCLI.

Install via ESXCLI

Upload the esx-nfsplugin.vib to a local or shared datastore. I placed the file in one of my NFS datastores. Then use ESXCLI to install the VIB.
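The datastore name in the path below is a placeholder, so point it at the datastore where you actually uploaded the VIB:

  # install the VIB (an absolute path to the file is required)
  esxcli software vib install -v /vmfs/volumes/nfs-vm/esx-nfsplugin.vib
  # afterwards, check that the VIB is listed
  esxcli software vib list | grep -i nfs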

To enable the plugin, a host reboot is necessary. This way is suitable for standalone hosts, but I recommend using VUM whenever possible.

Final words

I strongly recommend installing the plugin. With the vSphere Update Manager, the installation is really easy. If you only have a single host, try the installation using ESXCLI.

vSphere Lab Storage: Synology DS414slim Part 3 – Storage

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

This blog post covers the setup of the volumes and shares. Depending on your disk configuration, various volume configurations are possible. The DS414slim supports all important RAID levels (Synology Hybrid RAID, Basic, JBOD, RAID 0, 1, 5, 6 and 10). I recommend using RAID 5 if you use more than two disks. I decided to create a RAID 5 with my three Crucial M550 SSDs and to use the Seagate Momentus XT as a single disk.

Volume1: RAID 5

nas_volume_setup_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Volume2: Single disk

nas_volume_setup_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Create a NFS share

This disk setup gave me about 880 GB of SSD and 450 GB of SATA storage. To use this storage, we need to create at least one NFS share. Volume1 contains only a single NFS share. Volume2 contains an NFS share and an additional CIFS share that I use for my Veeam backups. Since I use Volume2 only for VM templates, I put both shares, the CIFS and the NFS share, on a single volume and a single disk.

To create a new NFS share, open the Control Panel > Shared Folders and click “Create”. Enter a name, a description and select a volume. Then click “OK”.

nas_setup_share_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Grant the local admin account “Read/ Write” permissions on the new share and click “NFS Permissions”.

nas_setup_share_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Enter the subnet or the IP address of your ESXi host to grant the host(s) access to the NFS share. Select “Map root to admin” and ensure that asynchronous transfer mode is enabled. Click “OK”.

nas_setup_share_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

That’s it. Now you can mount the NFS share on your ESXi hosts, using ESXCLI, the vSphere C# client or the vSphere Web Client. The latter provides the very handy NFS multimount feature, which allows you to mount an NFS share on multiple hosts at the same time. With ESXCLI, you can mount a datastore with a command like this:
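(The IP address, export path and datastore name below are placeholders, not the values from my lab.)

  # mount an NFS export as a datastore
  esxcli storage nfs add -H 192.168.100.20 -s /volume1/nfs-ssd -v nfs-ssd
  # list all mounted NFS datastores
  esxcli storage nfs list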

To mount an NFS datastore with the vSphere Web Client, simply right-click a cluster and select “New Datastore”. Provide the needed information; in step 4 you can select one or multiple hosts to which the NFS share should be mounted. Very handy!

Final words

Depending on your disk configuration, you have multiple options to configure volumes. I decided to go for a RAID 5. I strongly recommend using SSDs, because rotating rust would be too slow. I also recommend using NFS instead of iSCSI in a lab environment: it’s easier to set up and faster.

Part 4 of this series covers the installation of the Synology VAAI-NFS plugin: vSphere Lab Storage: Synology DS414slim Part 4 – VAAI-NAS Plugin

vSphere Lab Storage: Synology DS414slim Part 2 – Networking

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

The next step is to connect the Synology DS414slim to my lab network. I use two HP 1910 switches in my lab, an 8-port and a 24-port model. The Synology DS414slim has two 1 GbE ports, which can be configured in different ways. I wanted to use both ports actively, so I decided to create a bond.

Create a bond

Browse to the admin website and go to Control Panel > Network > Network Interfaces and select “Create”. Then select “Create Bond”.

nas_networking_settings_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To utilize both NICs, select the first option: “IEEE 802.3ad Dynamic Link Aggregation”. This option requires switches that are capable of creating a LACP LAG! I will show the configuration of a LACP LAG on one of my HP 1910 switches later.

nas_networking_settings_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “IPv4”. I have a dedicated VLAN and subnet for NFS. This subnet is routed in my lab, so that I can reach the DS414slim for management. Make sure that you enable Jumbo Frames and that every component in the network path can handle Jumbo Frames! Then switch to the “IPv6” tab.

nas_networking_settings_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I don’t want to use IPv6, so I decided to disable it.

nas_networking_settings_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “OK” and wait until the configuration is finished.

Create a LAG

Now it’s time to create the LAG on the switch. As I already mentioned, I use two HP 1910 switches in my lab. Both are great home lab switches: they are cheap and they can do L3 routing. Browse to the web management interface, log in, select Network > Link Aggregation and click “Create”.

1910-24g_create_lag_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Enter an interface ID for the LAG. In my case there were no LAGs before, so the ID is 1. Select “Dynamic (LACP Enabled)” and select two ports on the figure of the switch. Check the settings in the “Summary” section and click “Apply”.

1910-24g_create_lag_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now we need to place the LAG in the correct VLAN. Select Network > VLAN and select “Modify Ports”. Select “BAGG1” from “Aggregation ports” and place the LAG as an untagged member in the NFS VLAN (in my case this is VLAN 100). Finish this task by clicking “Apply”.

1910-24g_create_lag_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You can check the success of this operation by switching to the “Details” page and then selecting the NFS VLAN.

1910-24g_create_lag_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Connect the DS414slim with the two patch cables to the ports that are now configured as a LAG. If everything is configured correctly, the DS414slim should be reachable with its new IP address in the NFS VLAN.

VMkernel configuration

Make sure that you have at least one VMkernel port configured that is in the same subnet and VLAN as your DS414slim. You can see that the VMkernel port is placed in VLAN 100 and that it has an IP address from my NFS subnet.

nas_esxi_vmk_setup_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You should also make sure that the VMkernel port and the vSwitch can handle Jumbo Frames. The HP 1910 switch series has Jumbo Frames enabled by default.
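If you prefer the command line, the MTU can also be set and checked with ESXCLI. The vSwitch and VMkernel names below are examples, yours may differ:

  # set the MTU of the standard vSwitch to 9000
  esxcli network vswitch standard set -v vSwitch1 -m 9000
  # set the MTU of the VMkernel port to 9000
  esxcli network ip interface set -i vmk1 -m 9000
  # verify the settings
  esxcli network ip interface list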

Final words

The network setup depends on your needs. I strongly recommend using a dedicated VLAN and IP subnet for NFS. I also recommend the use of Jumbo Frames. Make sure that all components in the network path can handle Jumbo Frames and that the VLAN membership is set correctly. If possible, use a bond on the Synology and a LAG on the switch.

Part 3 of this series covers the creation of NFS shares: vSphere Lab Storage: Synology DS414slim Part 3 – Storage

vSphere Lab Storage: Synology DS414slim Part 1 – Unboxing and initial setup

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

A VMware vSphere cluster is nothing without shared storage. Most of the functions, like VMware HA or VMware vMotion (okay, vMotion is possible without shared storage), can only be used with shared storage. The servers in my lab have Fibre Channel host bus adapters (HBA), but buying an old and cheap Fibre Channel storage system wasn’t an option in my case. This left two options when choosing the right storage protocol: iSCSI or NFS. I tried to virtualize the local storage in my ProLiants with the HP StoreVirtual VSA and DataCore SANsymphony-V, but both were too complex for my needs and for a lab environment. Because of this I decided to move the local storage into a small storage system and use iSCSI or NFS. I searched for a while for a suitable system until Chris Wahl started blogging about the Synology DS414slim.

Like Chris, I’m a fan of NFS. His blog posts convinced me that the DS414slim would be a good choice. In addition, the DS414slim is relatively cheap (~ 250 € incl. taxes in Germany) and Chris showed that the system can achieve good performance when used with SSDs. Fortunately I already had three Crucial M550 SSDs (each with a capacity of 480 GB) and a single Seagate Momentus XT with a capacity of 500 GB, so I bought the DS414slim without disks.

I bought the DS414slim for ~ 250 € at the end of 2014. The price for the model without disks varies between 230 € and 260 € in Germany.

synology_unboxing_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The box contains the DS414slim itself, a stand, two patch cables, screws for the disk trays and a power supply. So it contains everything you need to bring the DS414slim to life.

synology_unboxing_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The system is really small, as you can see in this picture (take the 2,5″ disks as a reference). It goes without saying that you can only use 2,5″ hard disks.

synology_unboxing_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The disks were quickly mounted in the disk trays; the needed screws are included. The initial setup is really easy: simply power it on, open a browser and go to http://find.synology.com. My DS414slim was running DSM 4.1, but you can update DSM during the installation process. Simply download DSM 5.1 from the Synology Download Center and provide the update file to the installer. The rest of the setup process is not very spectacular. I will not explain the installation process here in more detail – it’s too simple. :)

The next part of this series covers the network connectivity: vSphere Lab Storage: Synology DS414slim Part 2 – Networking.

HP Discover: New 3PAR StoreServ models

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

HP has brushed up the StoreServ 7000 series and updated the StoreServ 7200 and 7400 models. HP also added a new model to the 7000 series: The StoreServ 7440c.

New 3PAR StoreServ models:

Model           | 3PAR StoreServ 7200c    | 3PAR StoreServ 7400c     | 3PAR StoreServ 7440c
Nodes           | 2                       | 2 or 4                   | 2 or 4
CPUs            | 2x 6-Core 1,8 GHz       | 2x or 4x 6-Core 1,8 GHz  | 2x or 4x 8-Core 2,3 GHz
Gen4 ASICs      | 2                       | 2 or 4                   | 2 or 4
On-Node Cache   | 40 GB                   | 48 – 96 GB               | 96 – 192 GB
Max Drives      | 8 – 240 (max 120 SSDs)  | 8 – 576 (max 240 SSDs)   | 8 – 960 (max 240 SSDs)
Max Enclosures  | 0 – 9                   | 0 – 22                   | 0 – 38

Old 3PAR StoreServ models

Model           | 3PAR StoreServ 7200     | 3PAR StoreServ 7400
Nodes           | 2                       | 2 or 4
CPUs            | 2x 4-Core 1,8 GHz       | 2x or 4x 6-Core 1,8 GHz
Gen4 ASICs      | 2                       | 2 or 4
On-Node Cache   | 24 GB                   | 32 – 64 GB
Max Drives      | 8 – 240 (max 120 SSDs)  | 8 – 480 (max 240 SSDs)
Max Enclosures  | 0 – 9                   | 0 – 22

The 7440c especially is a monster: it scales up to 38 enclosures and 960 drives (just to compare: a 3PAR StoreServ 10400 also scales up to 960 drives!). Check the QuickSpecs for more details.

As you can see, the new models got new CPUs, more on-node cache and they support more disks. In addition, they got support for a new dual-port 16 Gb FC HBA, a dual-port 10 GbE NIC and a quad-port 1 GbE NIC. You may ask yourself: Why 10 GbE and 1 GbE NICs (not iSCSI/ FCoE)? The answer is: HP 3PAR File Persona Software Suite for HP 3PAR StoreServ. This software license adds support for SMB, NFS, NDMP and Object Storage to the nodes of the 7200c, 7400c and 7440c. I assume that this license will not be available for the “older” 7200 and 7400, but this is only a guess. With this license you will be able to use 3PAR StoreServ natively with block and file storage protocols. I think this is a great chance to win more deals against EMC and NetApp.

Enrico Signoretti has written a very good article about the new announcements: HP 3PAR, 360° storage. He has the same view as me on the new HP 3PAR File Persona. Philip Sellers has written about another new announcement: Flat Backup direct from 3PAR to StoreOnce. Also check Craig Kilborn’s blog post about the new HP 3PAR StoreServ SSMC. Last, but not least: the 3pardude on the new 3PAR announcements.

HP publishes HP 3PAR OS 3.2.1 MU1 with Thin Deduplication

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

On October 28, 2014, HP published HP 3PAR OS 3.2.1 MU1, the first maintenance update for HP 3PAR OS 3.2.1. Besides some fixes, HP enabled in-line deduplication (Thin Deduplication) on all systems with the 3PAR Gen4 ASIC (StoreServ 7000 and 10000). Thin Deduplication does not require any license! It’s included in the base license and every customer can use it without spending money on it.

In-line deduplication is awesome, congrats to HP for making this possible. Deduplication on primary storage is nothing new, but the way HP 3PAR does it is really cool. It’s not a post-process like NetApp’s deduplication technology. With HP 3PAR, deduplication happens when data enters the array. I took this figure from an HP whitepaper. It shows in a simple way what enables HP 3PAR to do in-line deduplication: the 3PAR Gen4 ASIC (who has criticised 3PAR for using custom ASICs…?). Thin Deduplication is in line with the other 3PAR thin technologies.

thin_dedup

HPE/ hpe.com

Ivan Iannaccone wrote a really good blog post on how Thin Deduplication works. I really recommend reading it! Welcome to Flash 2.0: HP 3PAR Thin Deduplication with Express Indexing

As already mentioned, Thin Deduplication is available on all HP 3PAR systems with the Gen4 ASIC. This is currently the StoreServ 7000 and 10000 series. Even a customer with a “small” 7200 can use Thin Deduplication without additional cost. And who knows what HP Discover will bring us… There are currently some small limitations when using Thin Deduplication, but I’m quite sure that these are only temporary.

  1. Thin Deduplication is currently only available for Virtual Volumes (VV) provisioned from a SSD tier.
  2. You can’t use TDVV with Adaptive Optimization Configuration. This is presumably because Thin Deduplication is only available for VV provisioned from a SSD tier. If a region from a TDVV has to be moved to a lower tier, the data has to be rehydrated.
  3. Converting from any VV to Thin Deduplication Virtual Volume (TDVV) can be accomplished with Dynamic Optimization, which is a licensable feature.

You can have up to 256 TDVV per SSD CPG. Deduplication is fully supported with 3PAR replication (sync, async), but the replicated data is not deduplicated. You can use an estimation functionality to estimate the amount of deduplicated data for a TPVV. This estimation can be run online against any volume, regardless of the tier on which the data resides.
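For reference, this is roughly how it looks on the CLI. This is a sketch from memory, the CPG and volume names are made up, and especially the estimation option should be verified against the 3PAR OS 3.2.1 CLI reference:

  createvv -tdvv CPG-SSD-R5 vv-dedup-01 500g     (create a new deduplicated volume (TDVV) in an SSD CPG)
  checkvv -dedup_dryrun vv-thin-01               (estimate the dedup savings for an existing volume)
  showtask                                       (the estimation runs as a task; its result shows up here)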

Bugfixes

Besides the new Thin Deduplication feature, HP fixed some bugs in this MU. Here is an excerpt from the release notes:

  • 116690 An issue in QoS and ODX from Windows hosts causes an uncontrolled shutdown.
  • 117123 The version of Bash is updated to resolve the vulnerabilities CVE-2014-6271 and CVE-2014-7169
    commonly known as “shellshock”
  • 114947 The total capacity of the volumes in a Peer Persistence Remote Copy group is limited to 32 TB.
  • 114244 Loss of host persona capabilities after upgrading to HP 3PAR OS 3.2.1 GA from HP 3PAR OS 3.1.2
    MU5 or HP 3PAR OS 3.1.2 MU3 + P41.

For more details take a look into the Release Notes for HP 3PAR OS 3.2.1 GA/ MU1. If you’re interested in the basics concepts of HP 3PAR, take a look into the HP 3PAR StoreServ Storage Concepts Guide for HP 3PAR OS 3.2.1.

VMware vSphere Metro Storage Cluster with HP 3PAR Peer Persistence – Part II

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

The first part of this (short) blog series covered the basics of VMware vSphere Metro Storage Cluster (vMSC) with HP 3PAR Peer Persistence. This second part will cover the basic tasks to configure Peer Persistence. Please note that this blog post relies on the features and supported configurations of 3PAR OS 3.1.3! This is essential to know, because 3.1.3 got some important enhancements with respect to 3PAR Remote Copy.

Fibre-Channel zoning

One of the very first tasks is to create zones between the Remote Copy Fibre Channel (RCFC) ports. I used two ports from a quad-port FC adapter for Remote Copy. This matrix shows the zone members in each Fibre Channel fabric. 3PAR OS 3.1.3 supports up to four RCFC ports per node; earlier versions of 3PAR OS only support one RCFC port per node.

N:S:P | 0:2:1    | 0:2:2    | 1:2:1    | 1:2:2
0:2:1 | Fabric 1 |          |          |
0:2:2 |          | Fabric 2 |          |
1:2:1 |          |          | Fabric 1 |
1:2:2 |          |          |          | Fabric 2

RCFC port setup

After the zoning it’s time to set up the RCFC ports. In this case the RCFC ports will detect the partnering port by themselves. I assume that the ports are unconfigured; otherwise it’s necessary to take the ports offline first. The command controlport is used to configure a port with a specific port role.
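On the CLI this looks roughly like the following. It is a sketch from memory, using port 1:2:1 from the zoning matrix above; repeat it for every RCFC port on both StoreServs and verify the syntax against the Remote Copy user guide:

  controlport offline 1:2:1                      (only needed if the port was configured before)
  controlport config rcfc -ct point -f 1:2:1     (configure the port for Remote Copy over FC)
  controlport rst -f 1:2:1                       (reset the port to activate the new configuration)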

You can do the same with the 3PAR Management Console. After configuring the RCFC ports on both StoreServs, you can check your success with showrctransport
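On the CLI this is simply:

  showrctransport

The output should list the RCFC ports and the detected peer ports on the partner StoreServ.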

or with the 3PAR Management Console.

3par_remotecopy_port_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Remote Copy setup

Now it’s time to create the Remote Copy configuration. The screenshots below show the configuration of a bidirectional 1-to-1 Remote Copy setup. Start the wizard and select the configuration.

3par_remotecopy_setup_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

In the next step, the RCFC ports have to be configured and paired together. Simply connect the ports by selecting a port and pull a connection to the other port. Both ports have to be in the same zone.

3par_remotecopy_setup_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

A Remote Copy Group groups Virtual Volumes (VV) together to ensure I/O consistency. To create a bidirectional Remote Copy configuration we need two Remote Copy Groups: one from A > B and a second from B > A. I recommend enabling the “Auto Recover” option. This option is only visible if the “Show advanced options” tickbox is enabled.

3par_remotecopy_setup_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0
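As a side note, the two Remote Copy Groups can also be created on the CLI. A rough sketch with made-up group names; 7200-EDV1 and 7200-EDV2 stand for the two StoreServs in this example, and sync is the mode Peer Persistence requires:

  creatercopygroup RCG-A-to-B 7200-EDV2:sync     (issued on StoreServ A)
  creatercopygroup RCG-B-to-A 7200-EDV1:sync     (issued on StoreServ B)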

This screenshot shows the bidirectional Remote Copy configuration. Each StoreServ acts as primary array for a Remote Copy Group and as secondary array for a primary Remote Copy Group on the other StoreServ.

3par_remotecopy_setup_4

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If you already created volumes, you can add the volumes in this step. I will show this step later.

3par_remotecopy_setup_5

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The last page shows a summary of the configured options. Simply click “Finish” and proceed with the next step.

3par_remotecopy_setup_6

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

After creating the volumes it’s necessary to add them to the Remote Copy groups. Right click the Remote Copy Group and select “Edit Remote Copy Group…”.

3par_rcg_setup_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “Next”.

3par_rcg_setup_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Select the volumes to add and check the box “Create new volume”. I recommend using CPGs with the same characteristics as on the source system. I also recommend using the same CPG as User and Copy CPG. Click “Add” and repeat this step for each volume that should belong to the Remote Copy Group. At the end click “Next”…

3par_rcg_setup_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

… and “Finish”.

3par_rcg_setup_4

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0
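The CLI counterpart for admitting a volume to a Remote Copy Group looks roughly like this (volume, group and target names are placeholders; the part after the colon is the name of the secondary volume on the target StoreServ):

  admitrcopyvv vv-vmfs-01 RCG-A-to-B 7200-EDV2:vv-vmfs-01

Whether the wizard’s “Create new volume” option has a direct CLI equivalent I can’t say from memory, so check the Remote Copy user guide before relying on this.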

Repeat the steps for the second Remote Copy Group and the volumes on the secondary StoreServ.

3par_rcg_setup_5

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This screenshot shows the result of the configuration process.

3par_rcg_setup_6

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

A very handy feature of 3PAR OS 3.1.3 is that it creates a Virtual Volume Set for each Remote Copy Group. When a VV is added to the Remote Copy Group, it automatically belongs to the Virtual Volume Set and will be exported to the hosts. These screenshots show the Virtual Volume Sets on both StoreServs.

3par_rcg_setup_7

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

3par_rcg_setup_8

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Please ensure that the Virtual Volume Sets on both StoreServs are exported to all hosts (I recommend using Host Sets). If everything has been presented correctly, 8 paths should be visible for each VMFS datastore: 4 active paths to the primary and 4 standby paths to the secondary StoreServ.

3par_rcg_presentation

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0
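You can also verify this from the ESXi shell. The naa identifier below is a placeholder for the WWN of one of the replicated volumes:

  # list all paths of a device and their state (active/ standby)
  esxcli storage core path list -d naa.50002ac000123456
  # or show the multipathing summary for all devices
  esxcli storage nmp device list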

Automate the failover

There are two requirements to automate the failover:

  • Quorum Witness
  • Enabled “Auto Failover” for Remote Copy Groups

The Quorum Witness is a VMware appliance that needs to be deployed at a third site. The setup is really easy: simply deploy the OVA and power it on. A short menu guides you through some setup tasks, like setting a password, assigning an IP address etc. When the Quorum Witness is available on the network, create a Peer Persistence configuration. Enter the IP address and select the targets for which the Quorum Witness should act as a witness.

3par_pp_setup_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If everything went fine, the “Quorum Status” should be “Started”.

3par_pp_setup_2

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now the automatic failover for the Remote Copy Groups can be enabled.

3par_rcg_auto_failover_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Select the groups and click the right arrow to enable automatic failover for the selected Remote Copy Groups.

3par_rcg_auto_failover_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

That’s it! To test the failover you can use the 3PAR Management Console or this CLI command:
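From memory the command looks like this; 7200-EDV2 is the StoreServ name used in this example, and the -t form switches over all Remote Copy Groups replicating to that target, so verify the exact syntax against the 3PAR CLI reference before using it:

  setrcopygroup switchover -t 7200-EDV2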

With this command all secondary Remote Copy Groups on StoreServ 7200-EDV2 will become primary Remote Copy Groups. If everything is configured correctly, you will notice no or only a short I/O interruption during the failover. An automatic failover will only occur if a StoreServ loses all RCFC links AND the connection to the Quorum Witness; otherwise there will be no automatic failover! The parameter “switchover” is only used for transparent and controlled failovers and is issued on the primary storage array. The parameter “failover” is automatically issued by the secondary storage system in case of a failover situation.

Final words

The basic tasks are:

  • create zones for the RCFC ports
  • configure the RCFC ports on each node
  • create a bidirectional 1-to-1 Remote Copy setup with Remote Copy Groups on each StoreServ
  • add volumes to the Remote Copy Groups
  • present Virtual Volume Sets (that were automatically created based on the Remote Copy Groups) to the hosts
  • deploy the Quorum Witness
  • create a Peer Persistence configuration and configure Quorum Witness for the StoreServs that belong to the Peer Persistence Configuration
  • Enable “Automatic Failover” for the presented Remote Copy Groups

This is only a very rough overview of the configuration of a 3PAR Peer Persistence setup. I strongly recommend putting some thought into the design and planning of the Peer Persistence setup.

VMware vSphere Metro Storage Cluster with HP 3PAR Peer Persistence – Part I

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

The title of this blog post mentions two terms that have to be explained. First, a VMware vSphere Metro Storage Cluster (VMware vMSC) is a configuration of a VMware vSphere cluster that is based on a stretched storage cluster. Second, HP 3PAR Peer Persistence adds functionality to HP 3PAR Remote Copy and HP 3PAR OS so that two 3PAR storage systems form a nearly continuously available storage system. HP 3PAR Peer Persistence allows you to create a VMware vMSC configuration and to achieve a new quality of availability and reliability.

VMware vSphere Metro Storage Cluster

In a vMSC, servers and storage are geographically distributed over short or medium distances. vMSC goes far beyond the well-known synchronous mirror between two storage systems. Virtualization hosts and storage belong to the same cluster, but they are geographically dispersed: they are stretched between two sites. This setup allows you to move virtual machines from one site to another (vMotion and Storage vMotion) without downtime (downtime avoidance). With a stretched cluster, technologies such as VMware HA can help to minimize the time of a service outage in case of a disaster (disaster avoidance).

The requirements for a vMSC are:

  • Storage connectivity using Fibre Channel/ Fibre Channel over Ethernet (FCoE), NFS or iSCSI
  • max. 10 ms round-trip time (RTT) for the ESXi management network (> 10 ms is supported with vSphere Enterprise Plus – Metro vMotion)
  • max. 5 ms round-trip time (RTT) for the synchronous storage replication links
  • at least 250 Mbps per concurrent vMotion on the vMotion network

The complexity of the storage requirements is not the maximum round-trip time – it’s the requirement that a datastore must be accessible from both sites. This means that a host in Site A must be able to access (read & write) a datastore on a storage system in Site B and vice versa. vMSC knows two different methods of host access configuration:

  • Uniform host access configuration
  • Non-Uniform host access configuration

With a uniform host access configuration, the storage on both sites can be accessed by all hosts. LUNs from both storage systems are zoned to all hosts and the Fibre Channel fabric is stretched across the site links. The following figure was taken from the “Implementing vSphere Metro Storage Cluster using HP 3PAR Peer Persistence” technical whitepaper and shows a typical uniform host access configuration.

uniform-host-access

HPE/ hpe.com

The second possible configuration is the non-uniform host access configuration, in which the hosts only access the site-local storage system. The Fibre Channel fabrics are not stretched across the inter-site links. The following figure was taken from the “Implementing vSphere Metro Storage Cluster using HP 3PAR Peer Persistence” technical whitepaper and shows a typical non-uniform host access configuration. If a storage system fails, the ESXi hosts in that datacenter will lose connectivity and the virtual machines will fail. VMware HA will take care that the VMs are restarted in the other datacenter.

non-uniform-host-access

HPE/ hpe.com

Another possible non-uniform setup uses stretched Fibre Channel fabrics and some kind of virtual LUN. A LUN is mirrored between two storage systems and can be accessed from both sites. The storage systems take care of the consistency of the data. This figure was taken from the “VMware vSphere Metro Storage Cluster Case Study” technical whitepaper.

non-uniform-host-access-stretched-fabric

VMware/ vmware.com

The uniform host access configuration is currently used most frequently.

Regardless of the implementation, it’s useful to think about data locality. Let’s assume that a host in datacenter A is running a VM that is housed in a datastore on a storage system in datacenter B. As long as you’re using a stretched fabric between the sites, this is a potential scenario. What happens to the storage I/O of this VM? Right, it will travel across the inter-site links from datacenter A to datacenter B. To avoid this, you can use DRS groups and rules.

Examples of uniform and non-uniform host access configurations are:

Uniform host access configuration:

  • vSphere 5.x support with NetApp MetroCluster (2031038)
  • Implementing vSphere Metro Storage Cluster (vMSC) using EMC VPLEX (2007545)
  • Implementing vSphere Metro Storage Cluster using HP 3PAR StoreServ Peer Persistence (2055904)
  • Implementing vSphere Metro Storage Cluster using HP LeftHand Multi-Site (2020097)
  • Implementing vSphere Metro Storage Cluster using Hitachi Storage Cluster for VMware vSphere (2073278)
  • Implementing vSphere Metro Storage Cluster using IBM System Storage SAN Volume Controller (2032346)

Non-Uniform host access configuration:

  • Implementing vSphere Metro Storage Cluster (vMSC) using EMC VPLEX (2007545)

HP 3PAR Remote Copy, Peer Persistence & the Quorum Witness

HP 3PAR Peer Persistence uses synchronous Remote Copy and Asymmetric Logical Unit Access (ALUA) to realize a metro cluster configuration that allows host access from both sites. 3PAR Virtual Volumes (VV) are synchronously mirrored between two 3PAR StoreServs in a Remote Copy 1-to-1 relationship. The relationship may be uni- or bidirectional, which allows the StoreServs to act mutually as failover systems. To create a vMSC configuration with HP 3PAR StoreServ storage systems, some requirements have to be fulfilled.

  • Firmware on both StoreServ storage systems must be 3.1.2 MU2 or newer (I recommend 3.1.3)
  • a remote copy 1-to-1 synchronous relationship
  • 2.6 ms or less round-trip time (RTT)
  • Quorum Witness VM must run at a 3rd site and must be reachable from each 3PAR StoreServ
  • same WWN and LUN ID for each source and target virtual volume
  • VMware ESXi 5.0, 5.1 or 5.5
  • Hosts must be created with Hostpersona 11
  • Hosts must be zoned to both 3PAR StoreServ storage systems (this requires a stretched Fibre Channel fabric between the sites)
  • iSCSI or FCoE for host connectivity is supported with 3PAR OS 3.2.1. Versions below 3PAR OS 3.2.1 only support FC for host connectivity with Peer Persistence
  • Both 3PAR StoreServ storage systems must be licensed for Remote Copy and Peer Persistence (I recommend to license the Replication Suite)

A VV can be a source or a target volume. Source VVs belong to a primary remote copy group, target virtual volumes belong to a secondary remote copy group. VVs are grouped into remote copy groups to ensure I/O consistency. So all VVs that require write order consistency should belong to a remote copy group. Even VVs that don’t need write order consistency should belong to a remote copy group, just to simplify administration tasks. A typical uniform vMSC configuration with 3PAR StoreServs will have remote copy groups replicating in both directions, so both StoreServs act as source and target in a bidirectional synchronous remote copy relationship.

It’s important to understand that the source and target volumes share the same WWN and are presented using the same LUN ID. The ESXi hosts must use Hostpersona 11. During the process of creating the remote copy groups, the target volumes can be created automatically. This ensures that the source and target volumes use the same WWN. When the volumes from the source and target StoreServ are presented, the paths to the target StoreServ are marked as “Stand by”. In case of a failover the paths become active and the I/O continues.

The Quorum Witness is a RHEL appliance that communicates with the StoreServs and triggers the failover in some specific scenarios. This table was taken from the “Implementing vSphere Metro Storage Cluster using HP 3PAR Peer Persistence” technical whitepaper. As you can see, the automatic failover is only triggered in one specific scenario.

Scenario                                                                          | Replication stopped | Automatic failover | Host I/O impacted
Array to Array remote copy links failure                                          | Y | N | N
Single site to Quorum Witness network failure                                     | N | N | N
Single site to Quorum Witness network and Array to Array remote copy link failure | Y | Y | N
Both sites to Quorum Witness network failure                                      | N | N | N
Both sites to Quorum Witness network and Array to Array remote copy link failure  | Y | N | Y

Summary

VMware vSphere Metro Storage Cluster (vMSC) is a special configuration of a stretched compute and storage cluster. A vMSC is usually implemented to avoid downtime. A vMSC configuration makes it possible to move virtual machines, and thus workloads, between sites. Beyond this, a vMSC can avoid downtime caused by a failed storage system. Using HP 3PAR Remote Copy, 3PAR Peer Persistence and the Quorum Witness, two HP 3PAR StoreServ storage systems can form a uniform vMSC configuration. This allows movement of VMs/ workloads between sites and also a transparent failover between the storage systems in case one of the StoreServs fails.

Part II of this small series will cover the configuration of Remote Copy and Peer Persistence.

New HP 3PAR StoreServ AFA, VMware VVols and some thoughts

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

At HP Discover in June 2013 (I wrote 2014, sorry for that typo), HP announced the HP 3PAR StoreServ 7450 All-Flash Array. To optimize the StoreServ platform for all-flash workloads, HP made some changes to the hardware of the nodes. The 7450 uses 8-core Intel Xeon CPUs instead of 6-core 1.8 GHz CPUs, the cache was doubled from 64 GB to 128 GB, and HP made some changes to 3PAR OS: additional cache flush queues were added to separate the flushing of cache for rotating rust and SSD devices. HP also made some write I/O optimizations and added the ability to perform fragmented writes. Instead of writing 16 KB blocks, 3PAR OS is now able to write only 4 KB of a 16 KB block. These software-based changes may also be used on the 7200 and 7400. This leads to the new…

HP 3PAR StoreServ 7200 All-Flash Array

HP has now announced the next StoreServ All-Flash Array: the HP 3PAR StoreServ 7200 All-Flash Array, which is nothing more than a 7200 with 8x 480 GB cMLC drives. The 8 drives result in a raw capacity of ~ 3,5 TB (at least 8 drives are necessary to create a CPG). The HP 3PAR StoreServ 7200 All-Flash Array is available for 35.000 US-$ (currently ~ 26.400 €). An interesting price if you consider that a StoreServ 7200 with 8x 480 GB cMLC drives and no additional support or software has a list price of ~ 60.000 € or ~ 80.000 US-$. On the other hand, the 7200 hardware wasn’t optimized for all-flash workloads, so the cache and CPUs are the same.

Some thoughts

HP states that you can achieve 7 TB usable space with only 3,5 TB raw space. First thought: WTF?! Second thought: Oh, there’s an asterisk behind the statement.

Usable capacity calculations based on 25% overhead and 4:1 compaction ratio.

My thoughts about that: First, it doesn’t match the “3,5 TB raw == 7 TB usable” quote. Later in the text HP writes

…you can scale the solution to 690 TB usable and 230 TB raw with our Thin Deduplication software.

Short calculation: (230 x 0,75) x 4 = 690. That fits! It seems that HP is more conservative with respect to the usable capacity of the 7200 AFA if you take the “3,5 TB raw == 7 TB usable” quote into account (~ 2,5:1). Second, Thin Deduplication on the 7200? Currently HP speaks of it only in connection with the 7450 (Source 1, Source 2). You may know that the Gen4 ASICs are used for Thin Deduplication. The 7200 and 7400 also use the Gen4 ASICs, so there is no reason why Thin Deduplication shouldn’t work on the 7200 and 7400. I assume that HP will announce Thin Deduplication for the 7200 and 7400 later. However, it has been mentioned only in connection with the StoreServ AFA. I also think that the HP 3PAR StoreServ 7200 All-Flash Array is an attack on EMC XtremIO and Pure Storage. I will not comment on the statement that the new 7200 AFA is 50% cheaper than EMC XtremIO or Pure Storage:

Based on comparison of US list prices for the HP 3PAR StoreServ 7200 All-Flash Starter Kit and EMC XtremIO with 5TB of raw capacity and Pure Storage FA-405 entry-level configuration with 2.75TB raw capacity.

Finally, I’m glad that HP has announced the 7200 AFA, especially at that price. HP 3PAR StoreServ is an awesome storage platform and I’m sure it doesn’t have to hide from the competition.

VMware VVols

HP has also announced that HP 3PAR StoreServ is ready for VMware’s new storage architecture, Virtual Volumes (VVols), which is currently being tested in the VMware vSphere beta. VMware VVols will revolutionize the way storage in VMware vSphere is treated by offering VM-level storage control, snapshots and quality of service. Support for VMware VVols will be available with the next release of HP 3PAR OS.

This video was released in 2012 by Calvin Zito and shows you a demo of VMware VVols with 3PAR StoreServ storage.

It is good to see how VMware and HP work together to get this great new technology ready for production.

DataCore In SANsymphony-V 10: Potential for data corruption

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

This is only a short blog post. I just got an e-mail from DataCore Support: they found a critical bug in SANsymphony-V 10.0.0.0 which should be fixed with Update 1. Only VMware customers are affected, because the bug is related to VMware Thin Provisioning Thresholds. Update 1 is planned for early September 2014. If you’re running SANsymphony-V 10.0.0.0, open an incident with DataCore Support to get an available hotfix. If you have planned to update to SANsymphony-V 10, delay this update until the release of SANsymphony-V 10 Update 1.