Tag Archives: virtualization

VMware disables inter VM Transparent Page Sharing (TPS) for security reasons

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

This morning I discovered a tweet from Derek Seaman in my timeline that caught my attention.

TPS stands for Transparent Page Sharing, and it’s one of VMware’s memory management technologies. VMware ESX(i) uses four different technologies to manage host and guest memory resources (check VMware KB2017642 for more information). The impact on performance increases from TPS to swapping.

  • Transparent page sharing (TPS)
  • Ballooning
  • Memory Compression
  • Swapping

TPS is a technology by which redundant copies of memory pages are eliminated. You can think of TPS as a kind of memory deduplication. The hypervisor periodically scans the memory for pages that could possibly be shared. For each candidate memory page a hash is calculated and saved in a hash table. If a second candidate page has the same hash, a full bit-by-bit comparison of both pages is triggered. If both memory pages are identical, only one page is kept and the other memory page is reclaimed. TPS is enabled by default and shows good results, especially if you are running a lot of VMs with the same OS, like in VDI or terminal server environments.
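The scan, hash and bit-by-bit compare cycle can be illustrated with a small Python sketch. This is a toy model of the idea, not VMware’s actual implementation; page size and page contents are made up:

```python
import hashlib

PAGE_SIZE = 4096  # small (4 KB) pages

def share_pages(pages):
    """Deduplicate identical pages: hash first, then confirm bit-by-bit."""
    hash_table = {}   # hash -> index of the stored page with that hash
    backing = []      # unique pages actually kept in physical memory
    mapping = []      # per input page: index into `backing`
    for page in pages:
        h = hashlib.sha1(page).digest()
        if h in hash_table and backing[hash_table[h]] == page:
            # hash matched AND the full comparison confirmed it -> share
            mapping.append(hash_table[h])
        else:
            backing.append(page)
            hash_table[h] = len(backing) - 1
            mapping.append(len(backing) - 1)
    return backing, mapping

pages = [b"A" * PAGE_SIZE, b"B" * PAGE_SIZE, b"A" * PAGE_SIZE]
backing, mapping = share_pages(pages)
print(len(backing))  # 2 -> one 4 KB page was reclaimed
```

The full comparison after the hash match is what makes the technique safe against hash collisions: the hash only nominates candidates.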

With the advent of hardware-assisted memory virtualization, like Intel EPT or AMD RVI, VMware changed the behaviour of TPS and how guest memory is backed by physical memory. Guest memory is now backed with larger memory pages (2 MB instead of 4 KB) for better performance. 4 KB pages are still used if there is no contiguous 2 MB memory available, e.g. in case of memory overcommitment or memory fragmentation. Using 2 MB memory pages has advantages, for sure, but from the perspective of TPS it has two disadvantages:

  • a much smaller chance of finding two identical memory pages
  • a bit-by-bit comparison of 2 MB pages is vastly more expensive than of 4 KB pages

The punchline is that with hardware-assisted memory virtualization, TPS is only actively used if the host is under memory pressure. But it is still there and working.

Safety over performance

Yesterday VMware published KB2080735 (Security considerations and disallowing inter-Virtual Machine Transparent Page Sharing). The purpose of this KB:

This article acknowledges the recent academic research that leverages Transparent Page Sharing (TPS) to gain unauthorized access to data under certain highly controlled conditions and documents VMware’s precautionary measure of no longer enabling TPS in upcoming ESXi releases. At this time, VMware believes that the published information disclosure due to TPS between virtual machines is impractical in a real world deployment.

Because of this, TPS will be disabled by default with the release of:

  • ESXi 5.5 Update release (Q1/ 2015)
  • ESXi 5.1 Update release (Q4/ 2014)
  • ESXi 5.0 Update release (Q1/ 2015)
  • The next major version of ESXi (ESXi 6.0)

Prior to these updates, VMware will release patches that introduce additional TPS management capabilities and that WILL NOT change the existing settings for inter-VM TPS (check KB2091682). As stated in KB2080735, the planned ESXi patch releases are:

  • ESXi 5.5 Patch 3
  • ESXi 5.1
  • ESXi 5.0

The patches for ESXi 5.0 and 5.1 are planned for Q4/ 2014. For ESXi 5.5, the patch is already available (ESXi550-201410401-BG).

My 2 cents

Several years ago, the deactivation of TPS would have been fatal. Today, and in consideration of “safety over performance”, I think it was the right decision. If your design heavily relies on TPS, then maybe you have a bad design. ;)

Also a good read:

Frank Denneman: Future direction of disabling TPS by default and its impact on capacity planning
Magnus Andersson: Changes in ESXi Transparent Page Sharing (TPS) behaviour
Kenneth van Surksum: VMware decides to disable TPS in future ESXi releases by default
Marcel van den Berg: VMware wil disable Transparant Page Sharing by default in future ESXi releases
Andrea Mauro: Bye bye Transparent Page Sharing
Chris Wahl: Transparent Page Sharing Vulnerable, Yet Largely Irrelevant

More will follow, ping me on Twitter if you found a good one!

My lab network design

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Inspired by Chris Wahl’s blog post “Building a New Network Design for the Lab“, I want to describe what my lab network design looks like.

The requirements

My lab is separated from my home network, and it’s focused on the needs of a lab. A detailed overview of my lab can be found here. I divided it into a lab part and an infrastructure part. The infrastructure part consists of devices that are needed to provide basic infrastructure and management. The other part is my playground.

While planning my lab, I focused on these requirements:

  • Reuse of existing equipment
  • Separation of traffic within the lab and to the outer world
  • Scalable, robust and predictable performance

The equipment

To meet my requirements, I had the following equipment available:

  • HP 1910-24G switch
  • HP 1910-8G switch
  • Juniper 5GT firewall

The design

The HP 1910 switch is an awesome product with a very good price/performance ratio, especially because they can do IP routing, which was important for my lab design. Each of my ESXi hosts has 4x 1 GbE interfaces, plus one interface for ILO. In sum, 20 ports are necessary to connect my ESXi hosts to my network. The 1910-24G and 1910-8G are connected with a 1 GbE RJ45 SFP. The 1910-8G is used to connect the firewall and client devices, e.g. a thin client or a laptop. No other devices are connected to my lab. Because storage is delivered by an HP StoreVirtual VSA, no ports are needed for a NAS or similar.

To separate the traffic, I created a couple of VLANs. Unlike Chris, I’m still using VLAN 1 in my lab. In a customer environment, I would avoid the use of VLAN 1.

VLAN ID   Name               Usage
1         Access (Default)   Client connectivity
2         Management         ILO, Management VMkernel ports
3         Infra              VMs and devices for the lab infrastructure
4         Lab 1              Lab VLAN
5         Lab 2              Lab VLAN
6         Lab 3              Lab VLAN
7         Temp               Temporary connectivity
200       vMotion            vMotion VMkernel ports

VLAN 1 (Default) and 3 are carried to the 1910-8G. All VLANs are carried to the ESXi hosts using trunk ports on the 1910-24G. The Juniper 5GT is connected to the 1910-8G and the trusted interface is connected to an access port in VLAN 3. The untrusted port is connected to the outer world.

The routing looks a bit complex at first glance. I configured a couple of switch virtual interfaces (SVIs) on the 1910-24G, one for each of the VLANs 1, 2, 3, 7, 10, 11 and 100. But how do I get traffic in and out of my lab VLANs? I use a small firewall VM that is housed in VLAN 3 (Infra). It has interfaces (vNICs) in VLANs 4, 5 and 6. With this VM, I can carry traffic in and out of my lab VLANs, as long as a policy allows the traffic.

I use /27 subnets for VLANs 1 to 7, two /28 subnets for VLANs 100 (NFS) and 200 (vMotion), and two /24 subnets for VLANs 10 and 11 (both iSCSI).
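Python’s ipaddress module is handy for double-checking such a subnet plan. This assumes, as the listed addresses suggest, that the lab /27 subnets are carved out of 192.168.200.0/24:

```python
import ipaddress

# carve the assumed supernet into /27 subnets
supernet = ipaddress.ip_network("192.168.200.0/24")
subnets = list(supernet.subnets(new_prefix=27))

# the three lab VLANs fall on consecutive /27 boundaries
lab_subnets = [str(s) for s in subnets[3:6]]
print(lab_subnets)  # ['192.168.200.96/27', '192.168.200.128/27', '192.168.200.160/27']

# usable host addresses per /27 (network and broadcast excluded)
print(subnets[3].num_addresses - 2)  # 30
```

30 usable addresses per lab VLAN is plenty for a handful of VMs plus a small dynamic DHCP range.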

VLAN ID   Name               IP Subnet
1         Access (Default)
4         Lab 1              192.168.200.96/27
5         Lab 2              192.168.200.128/27
6         Lab 3              192.168.200.160/27
10        iSCSI 1            192.168.110.0/24
11        iSCSI 2            192.168.111.0/24

I don’t use a routing protocol inside my lab. It looks complex, but with this design I can easily separate the traffic for my three lab VLANs. The iSCSI subnets are routable, but I don’t route the iSCSI traffic itself. The same applies to NFS. This drawing gives you an overview of the routing.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To simplify address assignment, I use a central DHCP server in VLAN 3 with several scopes. The HP 1910-24G and my firewall VM act as DHCP relays and forward DHCP requests to it. For each VLAN, only a small number of dynamic IPs is available. Usually, the servers get a fixed IP.


VLAN 10 is used to carry iSCSI traffic from the HP StoreVirtual VSA to my ESXi hosts. The second iSCSI VLAN (ID 11) can be used for tests, e.g. to simulate routed iSCSI traffic. VLANs 4, 5 and 6 are used for lab work. Until I add a rule on my firewall VM, no traffic can enter or leave VLANs 4, 5 and 6. When deploying a new VM, I add the VM to VLAN 1 or 3. The VM is installed using MDT and PXE. After applying all necessary updates (MDT uses WSUS during the setup), I can move the VM to VLAN 4, 5 or 6.

Final words

Sure, a lab network design could be simpler. The IP subnets can be a pitfall if you’re not familiar with subnetting. The routing seems complex if you’re not an expert in IP routing. But to date, the network has done exactly what I expected.

HP 3PAR Peer Persistence for Microsoft Windows Servers and Hyper-V

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Some days ago I wrote two blog posts (part I and part II) about VMware vSphere Metro Storage Cluster (vMSC) with HP 3PAR Peer Persistence. Allow me to borrow a short description of what Peer Persistence is and what it does from the first of those posts:

HP 3PAR Peer Persistence adds functionalities to HP 3PAR Remote Copy software and HP 3PAR OS, so that two 3PAR storage systems form a nearly continuous storage system. HP 3PAR Peer Persistence allows you to create a VMware vMSC configuration and to achieve a new quality of availability and reliability.

You can transfer the concept of a Metro Storage Cluster to Microsoft Hyper-V. There is nothing VMware-specific in that concept.

With the GA of 3PAR OS 3.2.1 in September 2014, HP announced a lot of new features. The most frequently mentioned feature is Adaptive Flash Cache. If you’re interested in more details about Adaptive Flash Cache, you will like the AFC deep dive on 3pardude.com. A little lost in the noise is the newly added support for Peer Persistence with Hyper-V. This section is taken from the release notes of 3PAR OS 3.2.1:

3PAR Peer Persistence Software supports Microsoft Windows 2008 R2 and Microsoft Windows 2012 R2 Server and Hyper-V, in addition to the existing support for VMware. HP 3PAR Peer Persistence software enables HP 3PAR StoreServ systems located at metropolitan distances to act as peers to each other, presenting a nearly continuous storage system to hosts and servers connected to them. This capability allows to configure a high availability solution between two sites or data centers where failover and failback remains completely transparent to the hosts and applications running on those hosts.

3PAR Peer Persistence with Microsoft Windows Server and Hyper-V

Currently supported are Windows Server 2008 R2 and Windows Server 2012 R2, with the corresponding versions of Hyper-V. This table summarizes the currently supported environments.

HP 3PAR OS   Host OS                  Host connectivity   Remote Copy connectivity
3.2.1        Windows Server 2008 R2   FC, FCoE, iSCSI     RCIP, RCFC
3.2.1        Windows Server 2012 R2   FC, FCoE, iSCSI     RCIP, RCFC

At first glance, it seems that Microsoft Windows Server and Hyper-V support more options in terms of host and Remote Copy connectivity. This is not true! With 3PAR OS 3.2.1, HP also added support for FCoE and iSCSI host connectivity, as well as support for RCIP, for VMware. At this point, there is no winner. Check HP SPOCK for the latest support statements.

With 3PAR OS 3.2.1, a new host persona (Host Persona 15) was added for Microsoft Windows Server 2008, 2008 R2, 2012 and 2012 R2. This host persona must be used in Peer Persistence configurations; it is comparable to Host Persona 11 for ESXi. The setup and requirements for VMware and Hyper-V are similar. For a transparent failover, a Quorum Witness is needed, and it has to be deployed onto a Windows Server 2012 R2 Hyper-V host (not 2008, 2008 R2 or 2012!). Peer Persistence operates in the same manner as with VMware: the Virtual Volumes (VV) are grouped into Remote Copy Groups (RCG) and mirrored synchronously between a source and a destination storage system. Source and destination volume share the same WWN, they are presented using the same LUN ID, and the paths to the destination storage are marked as standby. Check part I of my Peer Persistence blog series for more detailed information about how Peer Persistence works.
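The presentation described above (same WWN on both arrays, standby paths to the destination) can be sketched as a tiny Python model. This is my own simplification, not HP code; the WWN is made up. A transparent failover simply flips the path states while the volume identity stays the same:

```python
class PeerPersistenceVolume:
    """Toy model: one volume, same WWN on both arrays, active/standby paths."""

    def __init__(self, wwn):
        self.wwn = wwn
        # the host sees paths to both arrays, but only the source is active
        self.paths = {"source-array": "active", "destination-array": "standby"}

    def failover(self):
        # transparent switchover: the path states swap, while WWN and
        # LUN ID stay identical -- the host just follows the active paths
        for array, state in self.paths.items():
            self.paths[array] = "standby" if state == "active" else "active"

vol = PeerPersistenceVolume(wwn="60002AC0000000000000001500001234")
vol.failover()
print(vol.paths["destination-array"])  # active
print(vol.wwn)                         # unchanged
```

Because the WWN and LUN ID never change, the host’s multipathing software treats the switchover as a simple path state change rather than a new device.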

Final words

It was only a question of time until HP released support for Hyper-V with Peer Persistence. I would have expected HP to make more fuss about it, but AFC seems to be the killer feature of 3PAR OS 3.2.1. I’m quite sure there are some companies out there that have been waiting eagerly for Hyper-V support with Peer Persistence. If you have any further questions about Peer Persistence with Hyper-V, don’t hesitate to contact me.

VMware jumps on the fast moving hyper-converged train

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

The whole story began with a tweet and a picture:

This picture, in combination with rumors about Project Mystic, motivated Christian Mohn to publish an interesting blog post. Today, two and a half months later, “Marvin”, or Project Mystic, got its final name: EVO:RAIL.

What is EVO:RAIL?

Firstly, we have to learn a new acronym: Hyper-Converged Infrastructure Appliance (HCIA). EVO:RAIL will be exactly that: an HCIA. IMHO, EVO:RAIL is VMware’s attempt to jump on the fast-moving hyper-converged train. EVO:RAIL combines different VMware products (vSphere Enterprise Plus, vCenter Server, Virtual SAN and vCenter Log Insight) with the EVO:RAIL deployment, configuration and management engine into a hyper-converged infrastructure appliance. Appliance? Yes, an appliance: a single stock keeping unit (SKU) including hardware, software and support. To be honest: VMware will not try to sell hardware. The hardware will be provided by partners (currently Dell, EMC, Fujitsu, Inspur, NetOne and SuperMicro).

VMware Chief Technologist Duncan Epping described four advantages of EVO:RAIL in a blog post published today:

EVO:RAIL is software-defined. Based on well-known VMware products, the EVO:RAIL engine simplifies the deployment, management and configuration of the building blocks.

EVO:RAIL is simple: The EVO:RAIL engine reduces the time from rack & stack until you can power on your first VM. You need less time for basic tasks, like the creation of VMs or the patch management of the hosts. If you need more compute or storage capacity, simply add additional 2U blocks (currently a maximum of 4 blocks, i.e. 16 nodes).

EVO:RAIL is highly resilient: A 2U block consists of four nodes. This results in a single four-host vSphere cluster, with a single VSAN datastore and full support for VMware HA, DRS, FT etc. This enables zero downtime for VMs during planned maintenance or node failures.

EVO:RAIL allows customers to choose: Customers can obtain EVO:RAIL using a single SKU from their preferred EVO:RAIL partner. The partner provides hardware, software and support for the EVO:RAIL HCIA.

Each HCIA node will provide at least:

  • 2x Intel Xeon E5-2620 v2 six-core CPUs
  • at least 192GB of memory
  • 1x SLC SATADOM or SAS HDD as boot device
  • 3x SAS 10K RPM 1.2TB HDD for the VMware Virtual SAN datastore
  • 1x 400GB MLC enterprise-grade SSD for read/ write cache
  • 1x Virtual SAN-certified pass-through disk controller
  • 2x 10GbE NIC ports (either 10GBase-T or SFP+)
  • 1x 1GbE IPMI port for out-of-band management

This results in a four-node vSphere cluster with 48 cores, 768 GB RAM and 14.4 TB raw disk space in just 2U. A single block allows you to run 100 average-sized (2 vCPU, 4 GB RAM, 60 GB with redundancy) general-purpose VMs, or 250 View VMs (2 vCPU, 2 GB RAM, 32 GB linked clones).
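The per-block figures follow directly from the node spec above; a quick sanity check:

```python
nodes = 4                        # one 2U block = four nodes
cores = nodes * 2 * 6            # 2x six-core Xeon E5-2620 v2 per node
ram_gb = nodes * 192             # 192 GB per node
raw_tb = round(nodes * 3 * 1.2, 1)  # 3x 1.2 TB HDDs per node feed the VSAN datastore

print(cores, ram_gb, raw_tb)  # 48 768 14.4
```

Note that the SSDs don’t count toward the raw capacity: in Virtual SAN they serve as read/write cache, and redundancy further reduces the usable share of the 14.4 TB.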

My thoughts

Looks like a Nutanix clone, doesn’t it? Yes, it’s an HCIA like a Nutanix block. But it’s focused on VMware (you can’t run Microsoft Hyper-V or KVM on it) and it will be sold by EVO:RAIL partners. This allows VMware to use a much wider distribution channel. It will be fun to see how other hyper-converged companies react to this announcement. Unfortunately, HP isn’t listed as an HCIA partner company. But Dell is. Fun fact: Dell and Nutanix signed a contract in June 2014.

Strategic Relationship Significantly Expands Access and Distribution of Nutanix Solutions with Dell’s World-Class Hardware, Services and Marketing to Accelerate Adoption of Web-scale Converged Infrastructure in the Enterprise

Take a look at the “Introduction to VMware EVO: RAIL” whitepaper. There are other great blog posts about EVO:RAIL:

Duncan Epping: Meet VMware EVO:RAIL™ – A New Building Block for your SDDC
Chris Wahl: VMware Announces Software Defined Infrastructure with EVO:RAIL
Marcel van den Berg: VMware announces EVO:RAIL, a turnkey appliance offering SDDC in a box featuring vSphere and Virtual SAN
Marco Broeken: VMworld 2014: Introducing VMware EVO: RAIL
Vladan SEGET: VMware EVO:RAIL – New Hyper-Converged Solution By VMware
Eric Sloof: VMware EVO: RAIL Hyper-Converged Infrastructure Appliance

Memory management: VMware ESXi vs. Microsoft Hyper-V

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Virtualization is an awesome technology. A few weeks ago I visited a customer and we took a walk through their data centers. While standing in one of them, I thought: imagine all the servers they currently run as VMs were physical! I’m still impressed by the influence of virtualization. The idea is so simple: you share the resources of the physical hardware between multiple virtual instances. I/O, network bandwidth, CPU cycles and memory. After nearly 10 years of experience with server virtualization, I can tell that memory in particular is one of the weak points. When a customer experiences performance problems, they are mostly caused by a lack of storage I/O or memory.

The reason for this post

Today I’d like to write a bit about the memory management of hypervisors, in this case VMware ESXi (the trombone in the flute orchestra) and Microsoft Hyper-V. They are the leading hypervisors on the market (source: Magic Quadrant for x86 Server Virtualization Infrastructure). But there is another reason why I took a closer look at the memory management of Hyper-V: Microsoft’s support policies and recommendations for Exchange servers in hardware virtualization environments. In the run-up to an Exchange migration project I took a quick look into Microsoft’s TechNet, just to verify some questions. And then I stumbled over this statement, valid for Exchange 2013:

Exchange memory requirements and recommendations

Some hypervisors have the ability to oversubscribe or dynamically adjust the amount of memory available to a specific guest machine based on the perceived usage of memory in the guest machine as compared to the needs of other guest machines managed by the same hypervisor. This technology makes sense for workloads in which memory is needed for brief periods of time and then can be surrendered for other uses. However, it doesn’t make sense for workloads that are designed to use memory on an ongoing basis. Exchange, like many server applications with optimizations for performance that involve caching of data in memory, is susceptible to poor system performance and an unacceptable client experience if it doesn’t have full control over the memory allocated to the physical or virtual machine on which it’s running. As a result, using dynamic memory features for Exchange isn’t supported.

There are similar statements for Exchange 2007 and 2010. At first I thought “Okay, looks like the Exchange-on-NFS thing”. Check Josh Odgers’ blog post if you want to know more about that Exchange-on-NFS thing. If you’re running your Exchange on NFS, don’t read it. There is reason to believe that you would go out and shoot a Microsoft engineer after reading it. After a couple of seconds I thought “What does dynamic memory feature mean?”. This was the beginning of a journey into the depths of hypervisor memory management.

The derivation

Memory is the only component in a server that can’t simply be oversubscribed by scheduling. That’s plausible: you can schedule multiple VMs on a single CPU core using a time-slice mechanism, but you can’t share a memory cell while a VM has stored data in it. Now you have a number of options. You can configure a static memory size for each VM. If you have 32 GB of memory in your virtualization host, you can run e.g. two VMs with 8 GB and four VMs with 4 GB memory. But what if a VM needs more memory? Either you reduce the amount of memory for the other VMs, or you have to add memory to your host. Not very flexible. Now suppose that the VMs take full advantage of their configured memory only very rarely. In this case we can use the unused memory for the running or new VMs. We can oversubscribe the memory of the physical host. But this only works as long as the amount of actively used memory is less than or equal to the memory size of the host. We only have to react in the case that a VM wants to take full advantage of its configured memory.
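A small Python sketch puts numbers on this. The per-VM “active” figures are invented for illustration; the point is the gap between configured and actively used memory:

```python
host_memory_gb = 32

# four VMs, each configured with 16 GB, but actively using far less
vms = {
    "vm1": {"configured": 16, "active": 6},
    "vm2": {"configured": 16, "active": 4},
    "vm3": {"configured": 16, "active": 8},
    "vm4": {"configured": 16, "active": 5},
}

configured = sum(v["configured"] for v in vms.values())
active = sum(v["active"] for v in vms.values())

print(configured)                # 64 -> 2:1 oversubscription of the 32 GB host
print(active)                    # 23 -> actively used memory
print(active <= host_memory_gb)  # True -> no memory pressure yet
```

As soon as the active sum approaches the host’s 32 GB, the hypervisor has to step in with its reclamation techniques.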

How does VMware ESXi manage its memory?

VMware ESXi uses four technologies to manage its memory:

  • Transparent Page Sharing (TPS)
  • Ballooning
  • Memory Compression
  • Swapping

Since the introduction of large pages (2 MB memory pages), TPS is only used under memory contention (thanks to Manfred for this hint). With TPS, the memory is divided into pages and the hypervisor checks if some of the pages are identical. If this is the case, the hypervisor stores only one copy of the page and sets pointers to the identical ones. If you’re running a lot of similar VMs, TPS can reduce the amount of used memory. Ballooning uses a special driver inside the VM. The hypervisor can use this driver to allocate memory inside the VM; the OS inside the VM then frees up memory that isn’t used, and the hypervisor can reclaim that memory. Memory compression is used shortly before the hypervisor has to swap to disk: if a memory page can be compressed by at least 50%, it’s held in the memory compression cache (10% of the memory is reserved for this); otherwise it’s swapped to disk. Swapping is the last resort: if there is no memory left and the other technologies are exhausted, memory pages are swapped out to disk. Please note that this is a very rough summary. For more information, check the VMware vSphere Resource Management Guide. With these techniques you can easily create four VMs with 16 GB memory each on a host with 32 GB memory. It’s important to note that the VMs can only allocate less than 32 GB in total, because the hypervisor also needs some memory for itself and for virtualization overhead. A VM needs at least its overhead memory to start on VMware ESXi.

How does Microsoft Hyper-V manage its memory?

Until Windows Server 2008 R2 SP1, Microsoft Hyper-V was unable to do dynamic memory management; only static memory allocation was possible. To stay with the example: it wasn’t possible to start four 16 GB VMs on a 32 GB host with Hyper-V. During power-on, Hyper-V reserves the configured memory of the VM, which makes unused memory unavailable for other VMs. With Windows Server 2008 R2 SP1, Microsoft added dynamic memory management to Hyper-V. Since then you can enable dynamic memory on the VM level. After enabling it for a VM, you can set a so-called “Startup RAM”. This is the amount of memory assigned to the VM during startup; Windows needs more memory during startup than in the steady state (source). It should be set to the amount of memory needed to run the server and application with the desired performance. You can also configure a “Minimum RAM”: the amount of memory down to which the hypervisor can reclaim memory using a ballooning technique. And there is a “Maximum RAM”: the amount of memory up to which the hypervisor can add memory to the VM. And now comes the interesting part. Any idea how the hypervisor adds memory to the VM? No? It’s using memory hot-add! If the VM needs more memory, it’s simply hot-added to the VM. This explains why the OS inside the VM has to support memory hot-add if you want to use dynamic memory. And it also explains why some applications are not supported with Hyper-V dynamic memory.
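The Minimum/Maximum RAM bounds described above amount to a simple clamp around the guest’s demand. This is a toy model with invented values, not Hyper-V’s actual algorithm:

```python
STARTUP_RAM = 2048   # MB assigned at power-on (startup needs more than steady state)
MINIMUM_RAM = 1024   # floor: ballooning never reclaims below this
MAXIMUM_RAM = 8192   # ceiling: hot-add never grows beyond this

def target_assignment(demand_mb):
    """Memory the hypervisor would aim to assign for a given guest demand."""
    return max(MINIMUM_RAM, min(MAXIMUM_RAM, demand_mb))

print(target_assignment(512))    # 1024 -> ballooning stops at Minimum RAM
print(target_assignment(4096))   # 4096 -> memory is hot-added on demand
print(target_assignment(16384))  # 8192 -> hot-add stops at Maximum RAM
```

Growth in this model happens via memory hot-add and shrinkage via ballooning, which is exactly why the guest OS (and the application, as in the Exchange case) has to cope with memory appearing after boot.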


IMHO it’s not the dynamic memory management itself that leads to Microsoft’s support statement; it’s the way Microsoft Hyper-V manages dynamic memory. Exchange checks the configured memory during the start of its services. If the memory size is increased after the services have started, Exchange simply doesn’t recognize it. Microsoft SQL Server, on the other hand, can profit from hot-added memory. Because of this, dynamic memory is supported with Microsoft SQL Server (check question and answer 7 in the linked KB article). VMware ESXi doesn’t hot-add memory to a VM; therefore you have to configure a suitable memory size. If you hot-add memory, the same restrictions apply as for Hyper-V. Instead of relying on memory hot-add, you can configure a suitable memory size when using VMware ESXi. But always remember: memory oversubscription can lead to performance problems if the VMs try to allocate their configured memory! Best practice is not to oversubscribe memory.


Memory management in ESXi and Hyper-V strongly differs. There’s no better or worse; they are too different to compare, and they were developed for different use cases.

Deploying HP StoreVirtual VSA – Part II

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Part I of this series covered the deployment; part II is dedicated to the configuration of the StoreVirtual VSA cluster. I assume that the Centralized Management Console (CMC) is installed. Start the CMC. If you see no systems under “Available Systems”, click “Find” in the menu and then choose “Find Systems…”. A dialog will appear. Click “Add…” and enter the IP address of one of the earlier deployed VSA nodes. Repeat this until all deployed VSA nodes are added, then click “Close”. Now you should have all available VSA nodes listed under “Available Systems”.


A management group contains virtual and physical StoreVirtual systems that are managed together. Clusters and volumes are defined per management group, as are user accounts. Right-click a node and choose “Add to New Management Group…” from the context menu. We will add all three nodes to this new management group.


Click “Next”. On the next page of the wizard we have to enter a username and password for an administrative user that will be added to all nodes.


On the next page we have to provide an NTP server. You can set the time manually, but I recommend using an NTP server. In this case it’s the Active Directory domain controller in my lab. Please note that this server has to be reachable for the VSA nodes! In part I we deployed the VSA nodes with two NICs, and via eth0 they can reach the NTP server.


On the next page of the wizard, you have to provide information about the DNS: DNS domain name, additional DNS suffixes and one or more DNS servers. The same applies to the DNS servers as to the NTP server: they have to be reachable for the VSA nodes!


To use e-mail notification, you have to provide an SMTP server. I don’t have one in my lab, so I left the fields empty. This results in a warning message which can safely be ignored.



Now comes a very important question: standard or Multi-Site cluster? A Multi-Site cluster is necessary if site fault tolerance is needed. It also ensures that traffic from hosts is only sent to the local site. A Multi-Site cluster can span multiple sites and can have cluster virtual IP addresses (cluster VIPs) in different subnets. A Multi-Site cluster is needed if you want to build a vSphere Metro Storage Cluster (vMSC) with HP StoreVirtual. I chose to create a standard cluster.


After choosing the cluster type, we have to provide a cluster name and the number of nodes that should be members of this new cluster.


The next step is to configure the cluster virtual IP address (cluster VIP). This IP address has to be in the same subnet as the VSA nodes and is used to access the cluster. After the initial connection to the cluster VIP, the initiator will contact a VSA node for the data transfer.


The wizard allows us to create a volume. This step can be skipped. I created a 1 TB thin-provisioned volume.


After clicking “Finish”, the management group and the cluster will be created. These steps can take some time.


At the end you will get a summary screen. You can create further volumes, or you can repeat the whole wizard to create additional management groups or clusters.


Congratulations! You now have a fully functional HP StoreVirtual VSA cluster.

Possible cluster VIP error message

Depending on your deployment, you may get this error message in the CMC:

VIP error: System is not reachable by any VIP in the cluster


This message occurs if you have deployed your VSA nodes with two NICs and the NIC that is used for iSCSI isn’t selected as the preferred SAN/iQ interface. I mentioned in part I that I would refer to the “Select the preferred SAN/iQ interface” option later; this is now. You can get rid of this message by selecting the right interface as the preferred SAN/iQ interface. Select “Network” on a VSA node, then click the “Communication” tab and choose “Select LeftHandOS Interface…” from the “Communications Tasks” drop-down menu at the bottom of the page.


The message should disappear after changing this on each affected VSA node.

Add hosts

To present volumes to hosts, you have to add hosts. A host consists of a name, an IP address, an iSCSI IQN and, if needed, CHAP credentials. Multiple hosts can be grouped into server clusters. You need at least two hosts to build a server cluster. But first of all, we will add a single host:


If you want to work with application-managed snapshots, you have to provide a “Controlling Server IP Address”. When working with VMware vSphere, this is the IP address of the vCenter server.

With at least two hosts, you can create a server cluster. A server cluster simplifies volume management, because you can assign and unassign volumes to a group of hosts with a single click. This ensures the consistency of volume presentations for a group of hosts.


Presenting a volume

During the initial configuration, we created a 1 TB thin-provisioned nRAID 10 volume. To assign this volume to a host, right-click the volume in the CMC and click “Assign and Unassign Servers…”. A window will pop up and you can check or uncheck the servers to which the volume should be assigned. A volume can be presented read-only or read-write.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

We are nearly at the end. We only have to add the cluster VIP to the iSCSI initiator and create a datastore out of the presented volume.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

After a rescan a new datastore can be added by using the presented volume. Have I mentioned that each VSA node has only 10 GB of data storage? Thin provisioning can be treacherous… ;)


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Final words

The deployment and configuration is really easy. But this short series only scratched the surface. You can now add more volumes, play with SmartClones and remote snapshots. Have fun!

Deploying HP StoreVirtual VSA – Part I

This posting is ~6 years old. You should keep this in mind. IT is a short living business. This information might be outdated.

I would like to thank Calvin Zito for the donation of StoreVirtual NFR licenses to vExperts. This will help to spread the know-how about this awesome product! If you are not a vExpert, you can download the StoreVirtual VSA for free and try it for 60 days. If you are a vExpert, ping Calvin on Twitter for a 1y NFR license.

This blog post covers the deployment of the current StoreVirtual VSA release (LeftHand OS 11). A second blog post covers the configuration using the CMC. Both posts are focused on LeftHand OS 11 and VMware vSphere. If you are searching for a deployment and configuration guide for LeftHand OS 9.x or 10 on VMware vSphere, take a look at these two blog posts from Craig Kilborn: Part 1 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1 & Part 2 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1. Another blog post that covers LeftHand OS 11 is from Hugo Strydom, who wrote about what he did with his VSA (vExpert : What I did with my HP VSA). I wrote a blog post about the HP StoreVirtual VSA some weeks ago; if you are interested in some basics about the VSA, check out that post.


The deployment process has been simplified. The setup wizard did a good job in my lab, but AFAIK there are problems if you use distributed switches. If you are affected, please leave a comment or ping me via Twitter. But before we start the setup wizard, we have to think about the goals of our setup. There are some things that we need to consider. The deployment process can be divided into three steps:

  1. Planning
  2. Deployment
  3. Configuration

Planning the installation

Before you start, you should have a plan. There are some things you should consider.

vSwitches: We have to design and configure the virtual switches (vSwitches) and port groups. The vSwitches should be dedicated to the VSA cluster and the accessing hosts. You should configure at least 2x 1 GbE as vSwitch uplinks for performance and redundancy. If the iSCSI initiators and all nodes of the VSA cluster are running on the same host, you can use a vSwitch with no uplinks. If you want to use jumbo frames, you need to configure the vSwitches, port groups and VMkernel ports accordingly. I recommend using a dedicated iSCSI VLAN to separate the traffic.

IP addresses: Each VSA needs an ip address. I recommend to use two ip addresses: One for eth0 and one for eth1. eth0 will be used for management and must attached to a port group, that makes it possible to reach the interface. Either because you client is attached to the same port group, the traffic is routed or the physical client is in the same VLAN as the VSA. eth1 will be used for iSCSI. You also need an ip address for the cluster virtual ip address (cluster VIP). This address must be in the same subnet as the eth1 ip addresses of the VSA nodes. If you want to use multipathing for your iSCSI initiators, each initiator needs two ip addresses in the same subnet as the VIP and the VSA nodes.
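The rule that the VIP and all eth1 addresses must share a subnet can be sanity-checked with Python’s ipaddress module. The addresses below are hypothetical examples, not the ones from my lab:

```python
import ipaddress

# Hypothetical addressing plan: eth0 = management, eth1 = iSCSI.
iscsi_subnet = ipaddress.ip_network("192.168.20.0/24")
cluster_vip = ipaddress.ip_address("192.168.20.100")
vsa_eth1 = [ipaddress.ip_address(a) for a in
            ("192.168.20.11", "192.168.20.12", "192.168.20.13")]

# The VIP and every eth1 address must live in the same subnet.
assert cluster_vip in iscsi_subnet
assert all(a in iscsi_subnet for a in vsa_eth1)
print("addressing plan is consistent")
```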

Hostnames: Meaningful hostnames facilitate management. I named my VSA nodes vsa01.lab.local, vsa02.lab.local and vsa03.lab.local. Feel free to name your VSAs in another fashion. :)

Storage: A VSA node has a single disk for the OS. All other disks are attached to a separate controller (when using VMware, the Paravirtual SCSI adapter is used). Storage can be added as VMDK or RDM to a VSA node, beginning with SCSI 1:0 (first device on second controller). If you want to use Adaptive Optimization (AO), you should have 10% of the total capacity on SSDs. The VMDKs or RDMs should be RAID protected, so you should avoid the use of RAID 0. Disks can be hot-added, but not hot-removed. You need at least 5 GB, but a VSA can scale up to 50 TB.
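The 10% SSD rule of thumb and the 5 GB to 50 TB limits can be wrapped in a tiny helper. This is only a sketch of the sizing rule stated above, not an official HP tool:

```python
def ao_ssd_tier_gb(total_capacity_gb: float) -> float:
    """Recommended SSD tier size for Adaptive Optimization: ~10 % of total capacity."""
    if not (5 <= total_capacity_gb <= 50 * 1024):
        raise ValueError("a VSA scales from 5 GB up to 50 TB")
    return total_capacity_gb * 0.10

# A 10 TB (= 10240 GB) VSA should have roughly 1 TB on SSDs for AO.
print(ao_ssd_tier_gb(10 * 1024))  # 1024.0
```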

CPU & Memory: CPU and memory resources have to be reserved. You should have at least two 2 GHz cores reserved for each VSA node. The memory requirements depend on the virtualized storage capacity. For 4 TB up to 10 TB you should have 7 GB RAM for each VSA node; if you want to use the same capacity with AO, you should have 8 GB RAM. For 500 MB up to 4 TB, you should have 5 GB RAM, and this also applies when using AO. In a production environment I strongly recommend using CPU and memory reservations and not running more than one VSA on a single host. This does not apply to a lab environment.
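The memory rules can be condensed into a small lookup. Again, this is only a sketch of the numbers stated above; always check the current HP sizing documentation:

```python
def vsa_memory_gb(capacity_tb: float, with_ao: bool = False) -> int:
    """Memory per VSA node, following the sizing rules quoted above."""
    if capacity_tb <= 4:
        return 5                      # 5 GB, with or without AO
    if capacity_tb <= 10:
        return 8 if with_ao else 7    # 7 GB, or 8 GB with AO
    raise ValueError("these rules only cover up to 10 TB per node")

print(vsa_memory_gb(10))                # 7
print(vsa_memory_gb(10, with_ao=True))  # 8
```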

The deployment

I took some screenshots during the deployment of a VSA using the setup wizard. I ran the wizard on a Windows 8.1 client.

The setup file (HP_StoreVirtual_VSA_2014_Installer_for_VMware_vSphere_TA688-10518.exe) is self-extracting. After the extraction a CMD window comes up, asking you if you want to use the GUI or the CLI interface. I chose the GUI wizard. Unfortunately, after pressing “2” for the GUI wizard, the wizard didn’t appear. I had to run the setup file as administrator (right-click the file, then choose “Run as administrator”). On the welcome page simply click “Next”.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You have to provide a hostname or IP address, and login credentials for the target ESXi host or the vCenter server. I chose an ESXi host as the target for my VSA deployment.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

On the third page you get a summary of the host you chose in the previous step.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now you can choose between deploying a VSA or a Failover Manager. The latter is a special manager used in clusters as a quorum tie-breaker. But we want to deploy a VSA.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

In the next step we have to choose the datastore in which the VSA should reside. This has no impact on the storage configured later.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The next step covers the NIC setup of the VSA. As I mentioned earlier, I recommend using two NICs for the VSA: one for management and a second one for iSCSI traffic. As you can see on the screenshot, I used eth0 for management.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The second NIC is dedicated to iSCSI traffic. Please notice the drop-down menu at the bottom, “Select the preferred SAN/iQ interface”. I will refer to it later.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now it’s time to give the VM a name and to select the drive type. Because I had no RDMs in my lab, that option is greyed out.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now we have to configure the data disks.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The wizard allows you to deploy more than one VSA. In the next step you can choose if you want to deploy another VSA on the same or another host, or if you are done. I only deployed one VSA, so I was done at this point.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Before you click “Deploy”, you should check the settings. If everything is fine, hit the “Deploy” button. The deployment will start immediately.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

After a couple of minutes the deployment is finished. Hit “Finish”. Now it’s time to start the Centralized Management Console (CMC). Usually the CMC is installed automatically by the wizard; if not, you can install it manually.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Part II covers the configuration of the management group, cluster etc. If you have further questions or feedback, feel free to leave a comment!

DataCore announces SANsymphony-V10

This posting is ~6 years old. You should keep this in mind. IT is a short living business. This information might be outdated.

Today DataCore announced their latest SANsymphony-V release. After the merge of SANmelody & SANsymphony, SANsymphony-V10 is the 10th generation of DataCore’s flagship product. Interestingly, DataCore uses the terms “software-defined” and “Virtual SAN”. Whether the product matches the definition of these terms is something everyone should decide for themselves. But this is another story.

What is DataCore SANsymphony-V?

What DataCore definitely does is automate and simplify storage management and provisioning. I really like its simplicity. DataCore SANsymphony-V delivers enterprise-class functionality, like synchronous mirroring, replication, snapshots, clones, thin provisioning and tiering. It runs on x86 hardware with Microsoft Windows Server 2008 or 2012. Multiple servers can be grouped together for load balancing and redundancy. A storage pool can be created out of internal or external flash and rotating rust. Single or mirrored virtual disks can be carved out of this storage pool. Hosts can access these virtual disks using iSCSI or Fibre-Channel. Because DataCore SANsymphony-V10 can use several different technologies as backend for storage pools, it’s easy to replace backend storage. You can add or remove disks to or from storage pools. If your backend storage is an old EMC CLARiiON and you get a new HP MSA 2040 Storage, you can replace the old storage without disruption.

What’s new in SANsymphony-V10?

I took this information directly from the DataCore SANsymphony-V10 announcement page:

  • Scalability has doubled from 16 to 32 nodes; Enables Metro-wide N+1 grid data protection
  • Supports high-speed 40/56 GigE iSCSI; 16Gbps Fibre Channel; iSCSI Target NIC teaming
  • Performance visualization/Heat Map tools add insight into the behavior of Flash and disks
  • New auto-tiering settings optimize expensive resources (e.g., flash cards) in a pool
  • Intelligent disk rebalancing, dynamically redistributes load across available devices within a tier
  • Automated CPU load leveling and Flash optimizations to increase performance
  • Disk pool optimization and self-healing storage; Disk contents are automatically restored across the remaining storage in the pool; Enhancements to easily select and prioritize order of recovery
  • New self-tuning caching algorithms and optimizations for flash cards and SSDs
  • ‘Click-simple’ configuration wizards to rapidly set up different use cases (Virtual SAN; High-Availability SANs; NAS File Shares; etc.)

Along with the new features, DataCore has announced a new licensing model. Besides the traditional server license, there will be a Virtual SAN license which includes tiering, adaptive caching, storage pooling, synchronous mirroring, thin provisioning and snapshots/clones. Both variants, the traditional SANsymphony-V10 and the Virtual SAN, run on Windows Server 2012. So the Virtual SAN will not be a virtual appliance. AFAIK it will only be a special license.

The general availability is scheduled for May 30, 2014. So stay tuned. :) I hope to get SANsymphony-V10 into my lab as fast as possible.

HP StoreVirtual VSA – An introduction

This posting is ~6 years old. You should keep this in mind. IT is a short living business. This information might be outdated.

In 2008 HP acquired LeftHand Networks for “only” $360 million. In relation to the acquisition of 3PAR in 2010 ($2.35 billion), this was a really cheap buy. LeftHand Networks was a pioneer in regard to IP-based storage built on commodity server hardware. Their secret was SAN/iQ, a Linux-based operating system, that did the magic. HP StoreVirtual is the TAFKAP (or Prince…? What’s his current name?) in the HP StorageWorks product family. ;) HP LeftHand, HP P4000 and now StoreVirtual. But the secret sauce never changed: SAN/iQ or LeftHand OS. Hardware comes and goes, but the secret of StoreVirtual was and is the operating system. And because of this it was easy for HP to bring the OS into a VM. The StoreVirtual Virtual Storage Appliance (VSA) was born. So you can choose between the StoreVirtual Storage nodes (HW appliances) and the StoreVirtual VSA, the virtual storage appliance. This article will focus on the StoreVirtual VSA with LeftHand OS 11.

HP StoreVirtual VSA

The solution of LeftHand Networks differed in one important point: their concept was not based on the “traditional” dual-controller paradigm. Their storage nodes formed a cluster and the data blocks were copied between the nodes. The access to the cluster was realized with a cluster virtual IP (VIP). So each node provided capacity and IO. And with each node added to the cluster, capacity and performance increased. Imagine a train, not one with a diesel locomotive, but a modern train where each axis has a motor. With each car added to the train, capacity (for passengers) and drive power increase. You can call it GRID Storage.

The StoreVirtual Storage appliances use HP ProLiant hardware. Depending on the model, between 4 and 25 SAS or SAS-NL disks are configured. If you use the StoreVirtual VSA, storage is allocated in the form of raw device mappings (RDM) or VMDKs. You simply add RDMs or VMDKs to the VSA. With this you can use the StoreVirtual VSA to utilize local storage in hosts. Besides the local RAIDs inside the HW appliances, StoreVirtual provides resiliency through Network RAID (nRAID). Karim Vaes wrote an excellent article and described the different nRAID levels in detail. To make a long story short: Network RAID works like the well-known RAID levels, but instead of dealing with disks, you deal with data blocks. And the data blocks are copied between two or more nodes. Depending on the number of nodes inside a cluster, you can use different nRAID levels and get more or less redundancy and resiliency in case of one or more node failures. Currently you can choose between Network RAID 0, 5, 6, 10, 10+1 and 10+2 to protect against double disk, controller, node, power, network or site failure.
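For the mirror-based Network RAID levels, the capacity overhead follows directly from the number of block copies kept in the cluster. A back-of-the-envelope sketch (the parity-based levels, nRAID 5 and 6, work differently and are left out here):

```python
# Block copies kept in the cluster for the mirror-based levels.
COPIES = {"nRAID 0": 1, "nRAID 10": 2, "nRAID 10+1": 3, "nRAID 10+2": 4}

def usable_capacity_tb(raw_per_node_tb: float, nodes: int, level: str) -> float:
    """Rough usable capacity: total raw capacity divided by the copy count."""
    return raw_per_node_tb * nodes / COPIES[level]

# Three nodes with 10 TB raw capacity each, protected with Network RAID 10:
print(usable_capacity_tb(10, 3, "nRAID 10"))  # 15.0
```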

A cluster is a group of nodes. One or more clusters can be created in a management group, so the smallest setup is a management group with one cluster. The storage capacity of all nodes inside a cluster is pooled and can be used to create volumes, clones and snapshots. The volumes seamlessly span the nodes in the cluster. You can expand the storage and IO capacity by adding nodes to the cluster. The StoreVirtual VSA offers its storage via iSCSI. A cluster has at least one IP address and each node also has at least one IP address. The cluster virtual IP address (VIP) is used to connect to the cluster. As long as the cluster is online, the VIP will stay online and will provide access to the volumes. A quorum (majority of managers) determines if a cluster can stay online or if it will go down. For this, a special manager is running on each node. You can also use specialized managers, so-called Failover Managers (FOM). If you have two nodes and a FOM, at least one node and the FOM need to stay online and must be able to communicate with each other. If this isn’t the case, the cluster will go down and access to the volumes is no longer possible. StoreVirtual provides two clustering modes: Standard Cluster and Multi-Site Cluster. A standard cluster can’t contain nodes that are designated to a site, nodes can’t span multiple subnets, and it can only have a single cluster VIP. So if you need to deploy StoreVirtual Storage or VSA nodes to different sites, you have to build a Multi-Site Cluster. Otherwise a standard cluster is sufficient. Don’t try to deploy a standard cluster in a multi-site environment. It will work, but without awareness of multiple sites, LeftHand OS won’t guarantee that block copies are written to both sites.
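The quorum rule described above boils down to a simple majority test over the running managers (regular managers plus a FOM, if deployed). A minimal sketch:

```python
def cluster_stays_online(reachable_managers: int, total_managers: int) -> bool:
    """A cluster keeps its volumes online only while a strict majority
    of its managers can communicate with each other."""
    return reachable_managers > total_managers // 2

# Two VSA nodes plus a Failover Manager: any two of the three suffice.
assert cluster_stays_online(2, 3) is True
# A lone surviving node is not a majority, so the cluster goes down.
assert cluster_stays_online(1, 3) is False
```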

LeftHand OS provides a broad range of features:

  • Storage Clustering
  • Network RAID
  • Thin Provisioning
  • Application integrated snapshots
  • SmartClone
  • Remote Copy
  • Adaptive Optimization

The HP StoreVirtual VSA is… a virtual storage appliance. It’s delivered as a ready-to-run appliance for VMware vSphere or Microsoft Hyper-V. Because the VSA is a VM, it consumes CPU, memory and disk resources from the hypervisor. Therefore you have to ensure that the VSA gets the resources it needs to operate correctly. These are best practices taken from the “HP StoreVirtual Storage VSA Installation and Configuration Guide”:

Configure the VSA for vSphere to start automatically and first, and before any other virtual machines, when the vSphere Server on which it resides is started. This ensures that the VSA for vSphere is brought back online as soon as possible to automatically re-join its cluster.

Locate the VSA for vSphere on the same virtual switch as the VMkernel network used for iSCSI traffic. This allows for a portion of iSCSI I/O to be served directly from the VSA for vSphere to the iSCSI initiator without using a physical network.

Locate the VSA for vSphere on a virtual switch that is separate from the VMkernel network used for VMotion. This prevents VMotion traffic and VSA for vSphere I/O traffic from interfering with each other and affecting performance.

HP recommends installing vSphere Server on top of a redundant RAID configuration with a RAID controller that has battery-backed cache enabled. Do not use RAID 0.

And if there are best practices, there are always some things you shouldn’t do…

Use of VMware snapshots, VMotion, High-Availability, Fault Tolerance, or Distributed Resource Scheduler (DRS) on the VSA for vSphere itself.

Use of any vSphere Server configuration that VMware does not support.

Co-location of a VSA for vSphere and other virtual machines on the same physical platform without reservations for the VSA for vSphere CPUs and memory in vSphere.

Co-location of a VSA for vSphere and other virtual machines on the same VMFS datastore.

Running VSA for vSphere’s on top of existing HP StoreVirtual Storage is not recommended.

Because the OS is the same for the HW appliances and the VSA, you can manage both with the same tool. A StoreVirtual solution is managed with the Centralized Management Console (CMC). You can run the CMC on Windows or Linux. The CMC is the only way to manage StoreVirtual Storage and VSA nodes. On the nodes themselves you can only assign an IP address and set user and password. Everything else is configured with the CMC.

Meanwhile there are some really cool solutions that integrate with HP StoreVirtual. Take a look at Veeam Explorer for SAN Snapshots. StoreVirtual is also certified for vSphere Metro Storage Cluster. You can get a 60-day evaluation copy on the HP website. Give it a try! If you’re a vExpert, you can get a free NFR license from HP!

Blog posts about deploying StoreVirtual VSA, features like Snapshots or Adaptive Optimization and solutions like Veeam Explorer for SAN Snapshots will follow. I will also blog about the HP Data Protector Zero Downtime Backup with HP StoreVirtual.

HP StoreOnce VSA – An introduction

This posting is ~6 years old. You should keep this in mind. IT is a short living business. This information might be outdated.

A side effect of data growth is the growth of the amount of data that must be backed up. The path of least resistance is buying more disks and/or tapes. Another possible solution is data deduplication. With data deduplication you can’t reduce the amount of data that must be backed up, but you can reduce the amount of data that must be stored. HP StoreOnce Backup is HP’s solution to address this problem.
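The principle behind deduplication can be shown in a few lines of Python: chunks are identified by a hash, identical chunks are stored only once, and a backup becomes a list of chunk references. This is of course not StoreOnce’s actual algorithm, only an illustration of the idea:

```python
import hashlib

def dedup_store(chunks):
    """Store each unique chunk once, keyed by its hash; return the store and a recipe."""
    store, recipe = {}, []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # only the first copy is kept
        recipe.append(digest)             # the backup is a list of chunk references
    return store, recipe

backup = [b"OS image", b"user data", b"OS image", b"OS image"]
store, recipe = dedup_store(backup)
print(len(backup), "chunks backed up,", len(store), "chunks stored")  # 4 chunks backed up, 2 chunks stored
```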

HP StoreOnce Backup is a software-based solution. It’s included in hardware appliances and in HP Data Protector (as part of the Advanced Backup-to-Disk option). But you can also buy it as a virtual storage appliance: the HP StoreOnce VSA. This article will focus on the HP StoreOnce VSA.

Delivery format & requirements

The HP StoreOnce VSA is delivered as a VMware virtual appliance (OVF format). You can start with 1 TB of usable capacity and increment this capacity in 1 TB steps up to 10 TB. The capacity is provided in the form of thick-provisioned 1 TB VMDKs. Raw devices are not supported. You can use an NFS datastore if you like. Because the HP StoreOnce VSA is a VM, it allocates processor, memory, storage and networking resources from the hypervisor. To ensure sufficient performance, it’s recommended to meet some requirements:

  • 1 to 5 TB: min. 3 disks as RAID 5, min. 16 GB RAM, min. 4 vCPUs, min. 2x 1 GbE
  • 6 to 10 TB: min. 3 disks as RAID 5, min. 32 GB RAM, min. 4 vCPUs, min. 2x 1 GbE
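These requirements can also be expressed as a small lookup helper, just a convenience sketch of the numbers quoted above:

```python
def storeonce_vsa_requirements(capacity_tb: int) -> dict:
    """Minimum resources for a StoreOnce VSA of the given usable capacity."""
    if 1 <= capacity_tb <= 5:
        ram_gb = 16
    elif 6 <= capacity_tb <= 10:
        ram_gb = 32
    else:
        raise ValueError("the StoreOnce VSA supports 1 to 10 TB")
    return {"disks": "min. 3 as RAID 5", "ram_gb": ram_gb,
            "vcpus": 4, "network": "2x 1 GbE"}

print(storeonce_vsa_requirements(8)["ram_gb"])  # 32
```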

Yes, the StoreOnce VSA is not a tiny, resource-saving VM. But because it’s a VM, it can benefit from several VMware features, like VMware HA, vMotion or SRM. For customers with AMD hosts the Quick Specs contain an interesting note:

If the VMware host has AMD CPUs some configuration is needed to run the StoreOnce VSA. It is necessary to create a single host cluster with the EVC (Enhanced vMotion Compatibility) mode set to AMD generation 3 or earlier.


The license provided by HP is a three-year license, so it has to be renewed after three years. The license includes the whole capacity, so there is no need to acquire additional licenses. The license also includes the replication functionality and StoreOnce Catalyst. The downside is: even when you need to back up only 1 TB, you need to purchase a 10 TB license.


HP StoreOnce VSA can be configured in Ethernet environments with StoreOnce Catalyst, VTL and NAS (CIFS) backup targets. Fibre-Channel isn’t supported. If you want to use StoreOnce with Fibre-Channel, you have to use a hardware-based StoreOnce appliance. You can also use it as a replication target (max. 1 source appliance). HP StoreOnce Catalyst allows you to transfer deduplicated data between StoreOnce devices without the need to rehydrate the data. All devices use the same deduplication algorithm. So StoreOnce Catalyst allows you to deduplicate data on an application server and then transfer it (still deduplicated) to a remote StoreOnce device.


HP StoreOnce Enterprise Manager (SEM) is a centralized management solution for physical and virtual StoreOnce devices. It can manage up to 400 physical and virtual StoreOnce devices across multiple sites. It provides monitoring and reporting, and it integrates with the StoreOnce GUI for single-pane-of-glass management. You can also deploy the StoreOnce VSA through SEM.

Try it!

HP offers a 60-day evaluation. Simply download it and try it. If you enter a valid license key during the trial period, you can continue using it without a reinstallation.