Tag Archives: hp

Safe (or safer) than backup to tape: HP StoreOnce

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

When talking to SMB customers, most of them don’t want to talk about their backup strategy. It’s a paradox: They know that data loss can ruin their business, but they don’t want to invest money in a fully tested recovery concept (I try to avoid the term “backup concept” – recovery is the key). Because of tight budgets and lacking knowledge, many customers use traditional concepts in a virtualized world. This often ends in traditional backup applications with agents deployed into the guest OS, and backups that are written to tape (or worse: to USB disks). If you ask a customer “Why do you store your data on tape?”, only a few argue with cost per GB or performance. Most customers argue with something like

  • “We’ve been doing this for years, so why should we change it?”
  • “We have to store our tapes offsite”
  • “There is a corporate policy that forces us to store our backups on tape”

In most cases, the attempt to sell a backup-to-disk appliance (like an HP StoreOnce backup system) dies with the last two arguments. Customers tend not to trust designs in which they don’t have a backup on tape. Some customers have a strong desire to hold a tape that is labeled “MONDAY” or “FRIDAY FULL”. To be honest: Usually I see this behaviour only at SMB customers. Backup-to-disk appliances are often described as

  • expensive,
  • complex, and
  • vulnerable

None of this applies to an HP StoreOnce backup system. Not even “expensive”, as long as you don’t focus solely on CAPEX.

HP StoreOnce

Please allow me to write some sentences about HP StoreOnce.

An HP StoreOnce backup system is available as a physical or virtual appliance. HP offers a broad range of physical appliances that can store between 5.5 TB and 1,728 TB BEFORE deduplication. The virtual StoreOnce VSA is available with a capacity of 4 TB, 10 TB or 50 TB before deduplication. And don’t forget the free 1 TB StoreOnce VSA! All HP StoreOnce backup systems, regardless of whether physical appliance or VSA, share the same StoreOnce deduplication technology, as well as the same replication and security features. In fact, the StoreOnce VSA runs the same (Linux-based) software as the physical appliances and vice versa. You can add features by adding software options:

  • HP StoreOnce Catalyst
  • HP StoreOnce Replication
  • HP StoreOnce Security Pack
  • HP StoreOnce Enterprise Manager

HP StoreOnce Catalyst allows the seamless movement of deduplicated data across StoreOnce-capable devices. This means that an HP Data Protector media agent can deduplicate data during a backup, write the data to an HP StoreOnce backup system, and the data can then be replicated to another HP StoreOnce backup system. All without the need to rehydrate it on the source and deduplicate it on the destination again. The StoreOnce VSA includes an HP StoreOnce Catalyst license!

HP StoreOnce Replication enables an appliance or a VSA to act as a target in a replication relationship. Only the target needs to be licensed. Fan-in describes the number of possible source appliances.

Model            Fan-in
StoreOnce VSA    8
StoreOnce 2700   8
StoreOnce 2900   24
StoreOnce 4500   24
StoreOnce 4700   50
StoreOnce 4900   50
StoreOnce 6200   384

As you can see, even the StoreOnce VSA can be used as a target for up to 8 source appliances. Replication is a licensable feature, except for the StoreOnce VSA: The StoreOnce VSA includes the replication license!

HP StoreOnce Enterprise Manager can be obtained for free and allows you to monitor up to 400 physical appliances or StoreOnce VSAs. It provides monitoring, reporting, trend analysis and forecasting. It integrates with the StoreOnce GUI for single-pane-of-glass management of physical appliances and VSAs.

HP StoreOnce Security Pack enables data-at-rest and data-in-flight encryption (the latter using IPsec and only for StoreOnce Catalyst), as well as secure data deletion. The same applies as for the HP StoreOnce Catalyst and Replication licenses: The StoreOnce VSA already includes this license.

HP StoreOnce Deduplication

Deduplication is nothing really new. In simple terms, it’s a technique to reduce the amount of stored data by removing redundancies. Data that is detected as redundant isn’t stored on the disks again; only a pointer to the already stored data is set. This carries a risk of potential data loss: What if the original block gets corrupted? Grist to the mill of the tape lovers (tapes never fail… for sure…).

Integrity Plus

Don’t worry, I won’t bore you with stuff about a dead (or nearly dead) CPU architecture. Integrity Plus is HP’s approach to an end-to-end verification process. Let’s take a look at how data comes into a StoreOnce backup system. From a client perspective, you can choose between Virtual Tape Library (VTL), NAS emulation (CIFS or NFS) and StoreOnce Catalyst.

When data is written to a VTL, a CRC is computed for each block and stored together with the data block on disk. During a restore, a CRC is computed for every block that is read from disk and compared to the initially stored CRC. If it differs, a SCSI check condition is reported. Because NAS emulation and StoreOnce Catalyst don’t use the SCSI protocol, no CRC is computed and stored to disk there. The integrity of the written data is guaranteed in other ways.

At the beginning of the deduplication process, the incoming data is divided into chunks. HP uses a variable length for each data chunk, but on average a data chunk is 4 KB. A smaller chunk size leads to better deduplication results. A SHA-1 (AFAIK 160 bit) hash is computed for each data chunk. This chunk hash is used to identify duplicate data by comparing it to other chunk hashes. At this point, a sparse index is used to find possible candidates for redundant data chunks. Instead of holding all chunk hashes in memory, only a few hashes are stored in RAM. The remaining chunk hashes are stored as metadata on disk. The container index contains a list of chunk hashes and a pointer to the data container where the data chunk is stored. Before data chunks are stored on disk, multiple chunks are compressed (using LZO) and a SHA-1 checksum is computed for the compressed chunks. This checksum is stored on disk. When the compressed data is decompressed, a new checksum is computed and compared to the stored SHA-1 checksum. Metadata and container index files are protected with MD5 checksums. In addition, a transaction log file is maintained for the whole process and the sparse index is frequently flushed to disk.
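The chunk-hash idea can be illustrated with a tiny shell experiment. This is a toy sketch, not HP’s implementation: it uses fixed 4 KB chunks instead of variable-length chunking and plain files instead of containers, but it shows how identical chunks collapse to a single stored copy:

```shell
# build a sample "backup stream" with plenty of redundancy:
# 4 KB zeros + 4 KB random + 8 KB zeros = 4 logical chunks
( head -c 4096 /dev/zero; head -c 4096 /dev/urandom; head -c 8192 /dev/zero ) > sample.bin

rm -rf chunks && mkdir chunks
split -b 4096 sample.bin chunks/chunk.     # fixed 4 KB chunks for simplicity

# one SHA-1 hash per chunk; duplicate chunks produce the same hash
sha1sum chunks/chunk.* | awk '{print $1}' | sort > hashes.txt

echo "logical chunks: $(wc -l < hashes.txt)"
echo "unique chunks : $(sort -u hashes.txt | wc -l)"
```

Four logical chunks go in, but only two unique chunks would have to be stored. Everything else becomes a pointer, which is exactly where the risk of a corrupted original block comes from.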

When data comes into the StoreOnce backup system, a match with a chunk hash in memory can lead the system (using the sparse index, metadata and container index files) to containers with associated data chunks (e.g. data chunks that represent a backed-up VM). And if a data chunk of the incoming data is a duplicate, it is very likely that many of the following data chunks are also duplicates.

All physical appliances use RAID 6 to protect data in case of disk failures. Only the HP StoreOnce 2700 uses RAID 5, because the appliance can only hold 4 SAS-NL disks. When using the StoreOnce VSA, you can use any RAID level for the underlying storage. But you should use something above RAID 0…

Conclusion

Let’s summarize:

  • RAID
  • Supercapacitors on RAID controllers to protect write cache in case of power loss
  • ECC memory
  • Integrity Plus to protect the data within the StoreOnce backup system
  • StoreOnce Replication to replicate data to another HP StoreOnce backup system
  • data-at-rest, data-in-flight encryption and secure deletion with StoreOnce Security Pack

Sounds very safe to me. Tape isn’t dead. Tape has its right to exist. But backup to tape isn’t safer than backup to a StoreOnce backup system. The latter can offer you faster backups AND restores, plus new backup and recovery options (e.g. backups in ROBO offices that are replicated to the central datacenter). Think about the requirements for storing tapes (temperature, humidity, physical access), regular recovery tests, copying tapes to newer tapes etc. Consider not only CAPEX. Also remember OPEX.

An HP StoreOnce backup system is perfect for SMBs. It simplifies backup and recovery, and it can offer new opportunities. Test-drive it using the free 1 TB StoreOnce VSA! Remember: The StoreOnce VSA includes StoreOnce Replication, Catalyst and the Security Pack! Even the free 1 TB StoreOnce VSA.

HP offers 1TB StoreOnce VSA for free

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

A free StoreOnce VSA, like the well-known 1 TB StoreVirtual VSA? That would be too cool to be real. But it is real! Since February, HP has offered a free 1 TB version of their StoreOnce VSA. I totally missed this announcement, but thanks to Calvin Zito I noticed it today:

The link leads to another blog post from Ashwin Shetty (Can you protect your data for free? Introducing the new free 1TB StoreOnce VSA), in which he provides more information about the free 1 TB StoreOnce VSA.

HP StoreOnce VSA

HP StoreOnce VSA runs the same software as the hardware-based StoreOnce appliances, but it’s delivered as a VM. You can run the VM on top of VMware ESXi, Microsoft Hyper-V or KVM. Beside the free 1 TB license, the StoreOnce VSA can be purchased with 4 TB, 10 TB or 50 TB capacity (usable, non-deduplicated). In contrast to the hardware-based appliances, the StoreOnce VSA comes with licenses for replication and StoreOnce Catalyst. This makes the StoreOnce VSA a perfect fit for remote and branch offices. You can quickly deploy the StoreOnce VSA and replicate the backed-up data to the central datacenter. But you can also deploy the VSA with the 4 TB, 10 TB or 50 TB license in your central datacenter and use it as a replication target for StoreOnce VSAs in the remote and branch offices (the replication target needs the replication license). A single VSA can act as a replication target for up to 8 StoreOnce VSAs and/or StoreOnce appliances. You can scale the free 1 TB license with license upgrades to 4 TB, 10 TB and 50 TB. The StoreOnce VSA supports StoreOnce Catalyst, VTL (iSCSI) and NAS (CIFS or NFS) backup targets. Take a look into the QuickSpecs for more information. I also recommend reading the two blog posts from Ashwin Shetty on Around the Storage Block:

Last year I published several posts about the StoreOnce VSA. I recommend downloading the free 1 TB StoreOnce VSA and playing with it. Some of my blog posts should help you get started.

What to consider when implementing HP 3PAR with iSCSI in VMware environments

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

Some days ago a colleague and I implemented a small 3-node VMware vSphere Essentials Plus cluster with an HP 3PAR StoreServ 7200c. Costs are always a sore point in SMB environments, so it should not surprise that we used iSCSI in this design. I had some doubts about using iSCSI with an HP 3PAR StoreServ, mostly because of performance and complexity. IMHO iSCSI is more complex to implement than Fibre Channel (FC). But in this case I had to deal with it.

iSCSI options for HP 3PAR StoreServ 7000

If you decide to use iSCSI with an HP 3PAR StoreServ, you have only one option: adding a 2-port 10GbE iSCSI/FCoE adapter to each node. There is no other iSCSI option. The available 2-port 10GbE Ethernet adapter and 4-port 1GbE Ethernet adapter can’t be used for iSCSI connectivity. These adapters can only be used with the HP 3PAR File Persona Software Suite.

The 2-port 10GbE iSCSI/FCoE adapter is a converged network adapter (CNA) and supports iSCSI or Fibre Channel over Ethernet (FCoE). The adapter can only be used for host connectivity, and you have to select iSCSI or FCoE. You can’t use the CNA for Remote Copy. You have to add a CNA to each node in a node pair. You can have up to four 10 GbE ports in a 3PAR 7200 series, or up to eight 10 GbE ports in a 3PAR 7400 series.

Network connectivity

10 GbE means 10 GbE; there is no way to connect the CNA to 1 GbE transceivers. The 2-port 10GbE iSCSI/FCoE adapter includes two 10 GbE SR SFP+ transceivers. With 3PAR OS 3.1.3 and later, you can use Direct Attach Copper (DAC) cables for network connectivity, but not for FCoE. Make sure that you use the correct cables for your switch! HP currently offers the following cables in different lengths:

  • X242 for HP ProVision switches
  • X240 for HP Comware switches
  • HP B-series SFP+ to SFP+ Active Copper for Brocade switches, or
  • HP C-series SFP+ to SFP+ Active Copper for Cisco switches

If you use any other switch vendor, I strongly recommend using the included 10 GbE SR SFP+ transceivers together with 10 GbE SR SFP+ transceivers on the switch side. In this case you have to use fiber cables to connect the 3PAR to the network. Otherwise, I recommend using DAC for network connectivity.

It’s a common practice to run iSCSI traffic in its own VLAN. Theoretically a single iSCSI VLAN is sufficient. I recommend using two iSCSI VLANs in case of a 3PAR, one for each iSCSI subnet. Why two subnets? The answer is easy: Persistent Ports. Persistent Ports allows a host port to assume the identity (port WWN for Fibre Channel, or IP address for iSCSI ports) of a failed port while retaining its own identity. This minimizes I/O disruption during failures or upgrades. Persistent Ports uses the NPIV feature for Fibre Channel/FCoE and IP address failover for iSCSI. With the release of 3PAR OS 3.1.3, Persistent Ports became available for iSCSI, too. A hard requirement of Persistent Ports is that the same host ports of the nodes of a node pair must be connected to the same IP network on the fabric. An example clarifies this:

Host port (N:S:P)   VLAN ID   IP subnet
0:2:1               11        192.168.173.0/27
0:2:2               12        192.168.173.32/27
1:2:1               11        192.168.173.0/27
1:2:2               12        192.168.173.32/27

The use of jumbo frames with iSCSI is an often-discussed topic. It’s often argued that complexity and performance gain would be disproportionate. I’m a bit biased: I think that the use of jumbo frames is a must when using iSCSI. I always configure jumbo frames for vMotion, so the cost of configuring jumbo frames in an iSCSI environment is low for me. Don’t forget to configure jumbo frames on all devices in the path: VMkernel ports, vSwitches, physical switches and the 3PAR CNAs.
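On the ESXi side, the jumbo frame configuration and a quick verification can look like this (vSwitch name, VMkernel port and target IP are examples, not from an actual config):

```shell
# set MTU 9000 on the vSwitch and on the iSCSI VMkernel port
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk2 -m 9000

# verify end to end: 8972 bytes payload + 28 bytes headers = 9000,
# -d sets the don't-fragment bit
vmkping -d -s 8972 192.168.173.1
```

If the vmkping fails while a standard ping works, at least one component in the path is not passing jumbo frames.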

Always use at least two physical switches for iSCSI connectivity. This concept is comparable to a Fibre Channel dual-fabric SAN. I like the concept of switch aggregation (the wording may vary between vendors). I often work with HP Networking and I like the HP 2920 or 5820 switch series. These switches can form stacks in which multiple physical switches act as a single virtual device. These stacks provide redundancy and operational simplicity. In combination with two VLANs you can build a powerful, redundant and resilient iSCSI SAN.

Host port configuration

The CNA ports can only be used for host connectivity; there is no way to use them for disk or Remote Copy connectivity. Before you can use a port for host connectivity, you have to select iSCSI or FCoE as the storage protocol.

3par_iscsi_cna_config_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Host and Virtual Volume sets

You can organize hosts and volumes in host sets and volume sets. I recommend creating a host set for all ESXi hosts in a vSphere cluster. I also recommend creating a volume set to group all volumes that should be presented to a host or host set. When exporting Virtual Volumes (VV), you can export a volume set to a host set. If you add a host to the host set, the host will see all volumes in the volume set. If you add a volume to a volume set, all hosts in the host set will see the newly added volume. This simplifies host and volume management and reduces the possibility of human error.
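Assuming the 3PAR CLI, the grouping and the export might be sketched like this (set, host and volume names are made-up examples; check the CLI reference for your 3PAR OS version):

```shell
# group the cluster nodes into a host set
createhostset ESX-Cluster esx01 esx02 esx03

# group the datastore volumes into a volume set
createvvset ESX-Cluster-VVs datastore01 datastore02

# export the whole volume set to the whole host set
createvlun set:ESX-Cluster-VVs 1 set:ESX-Cluster
```

From then on, adding a host to the host set or a volume to the volume set automatically updates the exports.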

3par_iscsi_host_set_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

3par_iscsi_vv_set_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Custom SATP rule for ESXi 5.x and ESXi 6.0

3PAR OS 3.1.2 introduced the new Host Persona 11 for VMware, which enables asymmetric logical unit access (ALUA). Beside Host Persona 11, Host Persona 6 for VMware is also available, but it doesn’t support ALUA. 3PAR OS 3.1.3 is the last release that included support for Host Persona 6 AND 11 for VMware. All later releases only include Host Persona 11. I strongly recommend using Host Persona 11 for VMware. You should also add a custom SATP rule. This rule can be added by using ESXCLI.

This custom rule sets VMW_PSP_RR as the default PSP, and it evenly distributes the I/Os over all active paths by switching to the next active path after each I/O.
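A rule along these lines is documented in HP’s 3PAR/VMware implementation guide; verify the exact syntax against the guide for your 3PAR OS and ESXi version before using it:

```shell
# claim 3PAR volumes (vendor 3PARdata, model VV) with ALUA (tpgs_on),
# use Round Robin and switch paths after every single I/O
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=1" \
  -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom Rule"
```

The rule only applies to devices discovered after it was added, so add it before presenting the volumes to the host.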

iSCSI discovery

Before you can use an exported volume, the host needs to discover the volume from the target. You have to configure the iSCSI discovery in the settings of the software iSCSI initiator. Typically you will use the dynamic discovery process. In this case, the initiator uses a SendTargets request to get a list of available targets. After adding the IP addresses of the 3PAR CNAs to the dynamic discovery list, the static discovery list is filled automatically. In case of multiple subnets, the dynamic discovery process can carry some caveats. Chris Wahl has highlighted this problem in his blog post “Exercise Caution Using Dynamic Discovery for Multi-Homed iSCSI Targets“. My colleague Claudia and I stumbled over this behaviour in our last 3PAR project: Removing the IP addresses from the dynamic discovery list will result in the loss of the static discovery entries. After a reboot, the entries in the static discovery list will be gone and therefore no volumes will be discovered. I added a comment to Chris’ blog post and he was able to confirm this behaviour. The solution is to use dynamic discovery to get a list of targets, and then add the targets manually to the static discovery list.
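On the ESXi shell, this workaround can be sketched like this (adapter name, IP addresses and the target IQN are illustrative examples):

```shell
# review what dynamic discovery (SendTargets) has found
esxcli iscsi adapter discovery sendtarget list
esxcli iscsi adapter discovery statictarget list

# add each target manually to the static discovery list
esxcli iscsi adapter discovery statictarget add -A vmhba33 \
  -a 192.168.173.1:3260 -n iqn.2000-05.com.3pardata:20210002ac000001

# then remove the dynamic discovery entry
esxcli iscsi adapter discovery sendtarget remove -A vmhba33 -a 192.168.173.1:3260
```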

Final words

HP 3PAR with iSCSI is an equivalent solution to HP 3PAR with Fibre Channel/FCoE. Especially in SMB environments, iSCSI is a good choice to bring 3PAR goodness to the customer at a reasonable price.

vSphere Lab Storage: Synology DS414slim Part 2 – Networking

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

The next step is to connect the Synology DS414slim to my lab network. I use two HP 1910 switches in my lab, an 8-port and a 24-port model. The Synology DS414slim has two 1 GbE ports, which can be configured in different ways. I wanted to use both ports actively, so I decided to create a bond.

Create a bond

Browse to the admin website and go to Control Panel > Network > Network Interfaces and select “Create”. Then select “Create Bond”.

nas_networking_settings_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To utilize both NICs, select the first option: “IEEE 802.3ad Dynamic Link Aggregation”. This option requires switches that are capable of creating a LACP LAG! I will show the configuration of a LACP LAG on one of my HP 1910 switches later.

nas_networking_settings_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “IPv4”. I have a dedicated VLAN and subnet for NFS. This subnet is routed in my lab, so that I can reach the DS414slim for management. Make sure that you enable jumbo frames and that every component in the network path can handle jumbo frames! Then switch to the “IPv6” tab.

nas_networking_settings_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I don’t want to use IPv6, so I decided to disable it.

nas_networking_settings_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “OK” and wait until the configuration is finished.

Create a LAG

Now it’s time to create the LAG on the switch. As I already mentioned, I use two HP 1910 switches in my lab. Both are great home lab switches! They are cheap and they can do L3 routing. Browse to the web management interface, log in, select Network > Link Aggregation and click “Create”.

1910-24g_create_lag_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Enter an interface ID for the LAG. In my case there were no LAGs before, so the ID is 1. Select “Dynamic (LACP Enabled)” and select two ports on the figure of the switch. Check the settings in the “Summary” section and click “Apply”.

1910-24g_create_lag_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now we need to place the LAG in the correct VLAN. Select Network > VLAN and select “Modify Ports”. Select “BAGG1” from “Aggregation ports” and place the LAG as an untagged member in the NFS VLAN (in my case this is VLAN 100). Finish this task by clicking “Apply”.

1910-24g_create_lag_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You can check the success of this operation by switching to the “Details” page and then selecting the NFS VLAN.

1910-24g_create_lag_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Connect the DS414slim with two patch cables to the ports that are now configured as a LAG. If everything is configured correctly, the DS414slim should be reachable with its new IP in the NFS VLAN.

VMkernel configuration

Make sure that you have at least one VMkernel port configured that is in the same subnet and VLAN as your DS414slim. You can see that the VMkernel port is placed in VLAN 100 and that it has an IP from my NFS subnet.

nas_esxi_vmk_setup_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You should also make sure that the VMkernel port and the vSwitch can handle jumbo frames. The HP 1910 switch series has jumbo frames enabled by default.
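From the ESXi shell, you can double-check VLAN, IP and MTU of the VMkernel port (the DS414slim IP below is an example from my NFS subnet):

```shell
# list all VMkernel ports with their MTU
esxcli network ip interface list

# show their IPv4 configuration
esxcli network ip interface ipv4 get

# non-fragmenting ping with a jumbo-sized payload (8972 + 28 bytes headers = 9000)
vmkping -d -s 8972 192.168.100.10
```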

Final words

The network setup depends on your needs. I strongly recommend using a dedicated VLAN and IP subnet for NFS. I also recommend the use of jumbo frames. Make sure that all components in the network path can handle jumbo frames and that the VLAN membership is set correctly. If possible, use a bond on the Synology and a LAG on the switch.

Part 3 of this series covers the creation of NFS shares: vSphere Lab Storage: Synology DS414slim Part 3 – Storage

HP Data Protector: JSONizer error when restoring from StoreOnce

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

After installing the Data Protector patch bundle 8.13, you may run into this error when trying to restore data from an HP StoreOnce appliance.

This problem is known and it is described in QCCR2A56465. A fix is available (new BMA, CMA, MMA and RMA binaries). Simply open a service request and ask for the fix. Make sure that you add a copy of the session messages or a screenshot to the service request.

HP Data Protector: Can’t delete old DCBF directories

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.
This applies to upgrades from Data Protector 6.x and 7.x to 8.x and 9.x.

It seems that today is my debugging day… Yesterday I performed a Data Protector update from 7.03 to 8.13. During this update, the Data Protector IDB is migrated to a new database format. Last night the backups went smoothly, but today I noticed that two old Detail Catalog Binary File (DCBF) directories were still referenced in the HP Data Protector IDB.

The two directories with “db40” inside the path are old DCBF directories. Because the directories contained actively used DCBF files, I relocated the files and ran “omnidbutil -remap_dcdir”.
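The relocation boils down to a few steps; the paths below are examples from a UNIX cell server, so adjust them to your installation:

```shell
# show the DCBF directories currently referenced in the IDB
omnidbutil -list_dcdirs

# move the still-used DCBF files out of the old db40 location
mv /var/opt/omni/server/db40/dcbf/* /var/opt/omni/server/db80/dcbf/

# remap the DCBF directories in the IDB to the new location
omnidbutil -remap_dcdir
```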

A quick check after the relocation showed no errors.

Looking good. Time to remove the old DCBF directories.

Did I mention that today was my debugging day? To make a long story short: HP switched the path separator character for the Data Protector IDB. They now use a / instead of a \ on both platforms (Windows & UNIX). During the update, this change is not performed correctly. Sebastian Koehler wrote a small SQL script that fixes this problem. Check his blog post (he had the same problem as me).

The script and its output can be found on Sebastian Koehlers blog. The output clearly shows that the wrong path separator was used for the old DB40 directories, while the entries corrected by the script use the new one. Compare it to the output of omnidbutil -list_dcdirs! After running the script I was able to delete the old DCBF directories.

Thanks to Sebastian, who described this bug.

HP Discover: New 3PAR StoreServ models

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

HP has brushed up the StoreServ 7000 series and updated the StoreServ 7200 and 7400 models. HP also added a new model to the 7000 series: The StoreServ 7440c.

New 3PAR StoreServ models:

Model            3PAR StoreServ 7200c     3PAR StoreServ 7400c      3PAR StoreServ 7440c
Nodes            2                        2 or 4                    2 or 4
CPUs             2x 6-Core 1.8 GHz        2x or 4x 6-Core 1.8 GHz   2x or 4x 8-Core 2.3 GHz
Gen4 ASICs       2                        2 or 4                    2 or 4
On-Node Cache    40 GB                    48 – 96 GB                96 – 192 GB
Max Drives       8 – 240 (max 120 SSDs)   8 – 576 (max 240 SSDs)    8 – 960 (max 240 SSDs)
Max Enclosures   0 – 9                    0 – 22                    0 – 38

Old 3PAR StoreServ models

Model            3PAR StoreServ 7200      3PAR StoreServ 7400
Nodes            2                        2 or 4
CPUs             2x 4-Core 1.8 GHz        2x or 4x 6-Core 1.8 GHz
Gen4 ASICs       2                        2 or 4
On-Node Cache    24 GB                    32 – 64 GB
Max Drives       8 – 240 (max 120 SSDs)   8 – 480 (max 240 SSDs)
Max Enclosures   0 – 9                    0 – 22

Especially the 7440c is a monster: It scales up to 38 enclosures and 960 drives (just to compare: a 3PAR StoreServ 10400 also scales up to 960 drives!). Check the QuickSpecs for more details.

As you can see, the new models got new CPUs and more on-node cache, and they support more disks. In addition, they got support for a new dual-port 16 Gb FC HBA, a dual-port 10 GbE and a quad-port 1 GbE NIC. You may ask yourself: Why 10 GbE and 1 GbE NICs (not iSCSI/FCoE)? The answer is: HP 3PAR File Persona Software Suite for HP 3PAR StoreServ. This software license adds support for SMB, NFS, NDMP and Object Storage to the nodes of the 7200c, 7400c and 7440c. I assume that this license will not be available for the “older” 7200 and 7400. But this is only a guess. With this license you will be able to use 3PAR StoreServ natively with block and file storage protocols. I think this is a great chance to win more deals against EMC and NetApp.

Enrico Signoretti has written a very good article about the new announcements: HP 3PAR, 360° storage. He has the same view as me about the new HP 3PAR File Persona. Philip Sellers has written about another new announcement: Flat Backup direct from 3PAR to StoreOnce. Also check Craig Kilborn’s blog post about the new HP 3PAR StoreServ SSMC. Last, but not least: The 3pardude about the new 3PAR announcements.

Add a new version of HP Agentless Management Service to a customized ESXi 5.5.0 ISO

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

While preparing for a VMware vSphere 5.5 update at a customer of mine, I stumbled over VMware KB2085618 (ESXi host cannot initiate vMotion or enable services and reports the error: Heap globalCartel-1 already at its maximum size. Cannot expand.). I checked the HP AMS version in the latest HP custom ESXi image and found out that version hp-ams-esx-550.10.0.0-18.1198610 is included (source). Unfortunately the bug is not fixed in 10.0.0, but it’s fixed in 10.0.1 (source).

hp_ams_10_0_1

HPE/ hpe.com

According to the VMware KB article, only the HP AMS versions hp-ams 500.9.6.0-12.434156 and hp-ams-550.9.6.0-12.1198610 should be affected. But since I do not like surprises, I decided to update the HP AMS version in the latest HP custom ESXi image from 10.0.0 to 10.0.1.

Prerequisites

Before you can start building a new customized ESXi image, you have to fulfill some prerequisites.

  1. Latest version of the HP customized ESXi Image. Make sure that you download the ZIP and not the ISO file! Download
  2. Latest version of the HP Agentless Management Service Offline Bundle for VMware vSphere 5.5. Download
  3. VMware Power CLI installed on your computer. Download

Updating HP AMS

Copy both downloaded files into a temporary folder. Then import both depot files. You can verify the success with Get-EsxImageProfile, which should show the just-imported ESXi image profile.

The next step is to clone the image profile. This cloned image profile will be the target for our software package update. You can check the success again with Get-EsxImageProfile. At this point you should get two image profiles listed.

Now you can update the HP AMS package. The update is done using the Add-EsxSoftwarePackage cmdlet.

When you compare the original and the cloned profile, you should see the updated package. Note the UpgradeFromRef entry.

The last step is to export the cloned and updated image profile to a ZIP or, as in our case, an ISO file. This ISO file can be used to upgrade hosts using VMware Update Manager.
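Put together, the whole procedure looks roughly like this in PowerCLI; file and profile names are examples, so match them to the downloads from the prerequisites:

```powershell
# import the HP custom image and the new AMS offline bundle
Add-EsxSoftwareDepot .\VMware-ESXi-5.5.0-Update2-HP.zip
Add-EsxSoftwareDepot .\hp-ams-esx-550.10.0.1-offline-bundle.zip
Get-EsxImageProfile

# clone the HP profile so the original stays untouched
New-EsxImageProfile -CloneProfile "HP-ESXi-5.5.0-Update2" `
  -Name "HP-ESXi-5.5.0-U2-AMS-10.0.1" -Vendor "custom"

# replace the hp-ams package in the clone with the newer version
Add-EsxSoftwarePackage -ImageProfile "HP-ESXi-5.5.0-U2-AMS-10.0.1" `
  -SoftwarePackage hp-ams

# export the updated profile to an ISO for VMware Update Manager
Export-EsxImageProfile -ImageProfile "HP-ESXi-5.5.0-U2-AMS-10.0.1" `
  -ExportToIso -FilePath .\HP-ESXi-5.5.0-U2-AMS-10.0.1.iso
```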

That’s it. Now you can update your hosts with this ISO, and the HP Agentless Management Service is updated automatically.

EDIT

Ivo Beerens wrote a nice script to check the installed version of HP AMS. Check out his blog post about this topic.

EDIT 2

I discovered today that HP has published a new version of their customized ESXi release (vSphere 5.5 U2, Nov 2014). This release includes the latest version of the HP Agentless Management Service Offline Bundle for VMware vSphere 5.5.

HP publishes HP 3PAR OS 3.2.1 MU1 with Thin Deduplication

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business, so this information might be outdated.

On October 28, 2014, HP published HP 3PAR OS 3.2.1 MU1, the first maintenance update for HP 3PAR OS 3.2.1. Beside some fixes, HP enabled in-line deduplication (Thin Deduplication) on all systems with the 3PAR Gen4 ASIC (StoreServ 7000 and 10000). Thin Deduplication does not require any license! It’s included in the base license and every customer can use it without spending money on it.

In-line deduplication is awesome; congrats to HP for making this possible. Deduplication on primary storage is nothing new, but the way HP 3PAR does it is really cool. It’s not a post-process like NetApp’s deduplication technology. With HP 3PAR, deduplication happens when data enters the array. I took this figure from an HP whitepaper. It shows in a simple way what enables HP 3PAR to do in-line deduplication: the 3PAR Gen4 ASIC (who has criticised 3PAR for using custom ASICs…?). Thin Deduplication is in line with the other 3PAR thin technologies.

thin_dedup

HPE/ hpe.com

Ivan Iannaccone wrote a really good blog post on how Thin Deduplication works. I really recommend reading it! Welcome to Flash 2.0: HP 3PAR Thin Deduplication with Express Indexing

As already mentioned, Thin Deduplication is available on all HP 3PAR systems with the Gen4 ASIC. This is currently the StoreServ 7000 and 10000 series. Even a customer with a “small” 7200 can use Thin Deduplication without additional cost. And who knows what HP Discover will bring us… There are currently some small limitations when using Thin Deduplication. But I’m quite sure that these are only temporary.

  1. Thin Deduplication is currently only available for Virtual Volumes (VV) provisioned from an SSD tier.
  2. You can’t use TDVVs with an Adaptive Optimization configuration. This is presumably because Thin Deduplication is only available for VVs provisioned from an SSD tier. If a region of a TDVV had to be moved to a lower tier, the data would have to be rehydrated.
  3. Converting from any VV to a Thin Deduplication Virtual Volume (TDVV) can be accomplished with Dynamic Optimization, which is a licensable feature.

You can have up to 256 TDVVs per SSD CPG. Deduplication is fully supported with 3PAR replication (sync and async), but the replicated data is not deduplicated. You can use an estimation feature to estimate the amount of deduplicated data for a TPVV. This estimation can be run online against any volume, regardless of the tier on which the data resides.
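To put such an estimation result into perspective, here is the trivial arithmetic behind it. The 2.5:1 ratio in the example is made up for illustration; in practice the ratio comes from the array's estimation output.

```python
def dedup_savings(written_tib: float, dedup_ratio: float):
    """Given the logical data written and an (estimated) deduplication
    ratio, return the physical capacity consumed and the space saved."""
    physical = written_tib / dedup_ratio
    return physical, written_tib - physical

# Hypothetical example: 10 TiB written at an estimated 2.5:1 ratio
physical, saved = dedup_savings(10.0, 2.5)
print(physical, saved)  # 4.0 TiB consumed, 6.0 TiB saved
```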

Bugfixes

Besides the new Thin Deduplication feature, HP fixed some bugs in this MU. Here is an excerpt from the release notes:

  • 116690: An issue in QoS and ODX from Windows hosts causes an uncontrolled shutdown.
  • 117123: The version of Bash is updated to resolve the vulnerabilities CVE-2014-6271 and CVE-2014-7169, commonly known as "Shellshock".
  • 114947: The total capacity of the volumes in a Peer Persistence Remote Copy group is limited to 32 TB.
  • 114244: Loss of host persona capabilities after upgrading to HP 3PAR OS 3.2.1 GA from HP 3PAR OS 3.1.2 MU5 or HP 3PAR OS 3.1.2 MU3 + P41.

For more details, take a look at the Release Notes for HP 3PAR OS 3.2.1 GA/MU1. If you're interested in the basic concepts of HP 3PAR, take a look at the HP 3PAR StoreServ Storage Concepts Guide for HP 3PAR OS 3.2.1.

My lab network design

This posting is ~5 years old. You should keep this in mind. IT is a short living business. This information might be outdated.

Inspired by Chris Wahl's blog post "Building a New Network Design for the Lab", I want to describe what my lab network design looks like.

The requirements

My lab is separated from my home network, and it's focused on the needs of a lab. A detailed overview of my lab can be found here. I divided it into two parts: an infrastructure part and a lab part. The infrastructure part consists of devices that are needed to provide basic infrastructure and management. The other part is my playground.

While planning my lab, I focused on these requirements:

  • Reuse of existing equipment
  • Separation of traffic within the lab and to the outer world
  • Scalable, robust and predictable performance

The equipment

To meet my requirements, I had the following equipment available:

  • HP 1910-24G switch
  • HP 1910-8G switch
  • Juniper 5GT firewall

The design

The HP 1910 switch is an awesome product with a very good price/performance ratio, especially because it can do IP routing, which was important for my lab design. Each of my ESXi hosts has 4x 1 GbE interfaces, plus one interface for ILO. In sum, 20 ports are necessary to connect my ESXi hosts to my network. The 1910-24G and 1910-8G are connected with a 1 GbE RJ45 SFP. The 1910-8G is used to connect the firewall and client devices, e.g. a thin client or a laptop. No other devices are connected to my lab. Because storage is delivered by a HP StoreVirtual VSA, no ports are needed for a NAS or similar.

To separate the traffic, I created a couple of VLANs. Unlike Chris, I’m still using VLAN 1 in my lab. In a customer environment, I would avoid the use of VLAN 1.

| VLAN ID | Name             | Usage                                      |
|---------|------------------|--------------------------------------------|
| 1       | Access (Default) | Client connectivity                        |
| 2       | Management       | ILO, Management VMkernel ports             |
| 3       | Infra            | VMs and devices for the lab infrastructure |
| 4       | Lab 1            | Lab VLAN                                   |
| 5       | Lab 2            | Lab VLAN                                   |
| 6       | Lab 3            | Lab VLAN                                   |
| 7       | Temp             | Temporary connectivity                     |
| 10      | iSCSI 1          | iSCSI                                      |
| 11      | iSCSI 2          | iSCSI                                      |
| 100     | NFS              | NFS                                        |
| 200     | vMotion          | vMotion VMkernel ports                     |

VLANs 1 (Default) and 3 are carried to the 1910-8G. All VLANs are carried to the ESXi hosts using trunk ports on the 1910-24G. The Juniper 5GT is connected to the 1910-8G: the trusted interface is connected to an access port in VLAN 3, and the untrusted port is connected to the outer world.

The routing looks a bit complex at first glance. I configured a couple of switch virtual interfaces (SVIs) on the 1910-24G, one for each of the VLANs 1, 2, 3, 7, 10, 11 and 100. But how does traffic get in and out of my lab VLANs? I use a small firewall VM that is housed in VLAN 3 (Infra). It has interfaces (vNICs) in VLANs 4, 5 and 6. With this VM, I can carry traffic in and out of my lab VLANs, as long as a policy allows the traffic.

I use /27 subnets for VLANs 1 to 7, two /28s for VLANs 100 (NFS) and 200 (vMotion), and two /24s for VLANs 10 and 11 (both iSCSI).

| VLAN ID | Name             | IP Subnet          |
|---------|------------------|--------------------|
| 1       | Access (Default) | 192.168.200.0/27   |
| 2       | Management       | 192.168.200.32/27  |
| 3       | Infra            | 192.168.200.64/27  |
| 4       | Lab 1            | 192.168.200.96/27  |
| 5       | Lab 2            | 192.168.200.128/27 |
| 6       | Lab 3            | 192.168.200.160/27 |
| 7       | Temp             | 192.168.200.192/27 |
| 10      | iSCSI 1          | 192.168.110.0/24   |
| 11      | iSCSI 2          | 192.168.111.0/24   |
| 100     | NFS              | 192.168.200.224/28 |
| 200     | vMotion          | 192.168.200.240/28 |
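The subnet plan can be reproduced with Python's `ipaddress` module: 192.168.200.0/24 is carved into eight /27s, seven of which back VLANs 1 to 7, while the last /27 is split into the two /28s for NFS and vMotion (the iSCSI VLANs use their own /24 networks). The variable names are mine, chosen for illustration.

```python
import ipaddress

# The lab supernet that holds all the /27 and /28 blocks
supernet = ipaddress.ip_network("192.168.200.0/24")

slash27 = list(supernet.subnets(new_prefix=27))  # 8 x /27
vlans = {
    1: slash27[0],  # Access (Default)
    2: slash27[1],  # Management
    3: slash27[2],  # Infra
    4: slash27[3],  # Lab 1
    5: slash27[4],  # Lab 2
    6: slash27[5],  # Lab 3
    7: slash27[6],  # Temp
}
# The remaining /27 (192.168.200.224/27) is split into two /28s
nfs, vmotion = slash27[7].subnets(new_prefix=28)

print(vlans[4])   # 192.168.200.96/27
print(nfs)        # 192.168.200.224/28
print(vmotion)    # 192.168.200.240/28
```

This is also an easy way to double-check that none of the subnets overlap before configuring the SVIs.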

I don’t use a routing protocol inside my lab. It looks complex, but with this design I can easily separate the traffic for my three lab VLANs. The iSCSI subnets are routable, but I don’t route iSCSI traffic; the same applies to NFS. This drawing gives you an overview of the routing.

Figure: vlans_and_routing (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

To simplify address assignment, I use a central DHCP server in VLAN 3 with several scopes. The HP 1910-24G and my firewall VM act as DHCP relays and forward DHCP requests to this server. For each VLAN only a small number of dynamic IPs are available; the servers usually get a fixed IP.

| VLAN ID | Name             | DHCP Scope         |
|---------|------------------|--------------------|
| 1       | Access (Default) | 192.168.200.0/27   |
| 3       | Infra            | 192.168.200.64/27  |
| 4       | Lab 1            | 192.168.200.96/27  |
| 5       | Lab 2            | 192.168.200.128/27 |
| 6       | Lab 3            | 192.168.200.160/27 |
| 7       | Temp             | 192.168.200.192/27 |

VLAN 10 is used to carry iSCSI traffic from the HP StoreVirtual VSA to my ESXi hosts. The second iSCSI VLAN (ID 11) can be used for tests, e.g. to simulate routed iSCSI traffic. VLANs 4, 5 and 6 are used for lab work. Until I add a rule to my firewall VM, no traffic can enter or leave VLANs 4, 5 and 6. When deploying a new VM, I add the VM to VLAN 1 or 3. The VM is installed using MDT and PXE. After applying all necessary updates (MDT uses WSUS during the setup), I move the VM to VLAN 4, 5 or 6.

Final words

Sure, a lab network design could be simpler. The IP subnets can be a pitfall if you're not familiar with subnetting, and the routing seems complex if you're not an expert in IP routing. But so far, the network has done exactly what I expected.