Tag Archives: virtualization

The beginning of a deep friendship: Me & PernixData FVP 2.0

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I’m a bit late, but better late than never. Some days ago I installed PernixData FVP 2.0 in my lab and I’m impressed! Until this installation, solutions such as PernixData FVP or VMware vSphere Flash Read Cache (vFRC) weren’t interesting for me or most of my customers. Some of my customers played around with vFRC, but most of them decided to add flash devices to their primary storage system and use techniques like tiering or flash cache. SMB customers in particular had no chance to use flash or RAM to accelerate their workloads because of tight budgets. With decreasing costs for flash storage, solutions like PernixData FVP and vFRC are getting more interesting for my customers. Another reason was my lab: I simply didn’t have the equipment to play around with that fancy stuff. But things have changed and now I’m ready to give it a try.

The environment

For the moment I don’t have any SSDs in my lab servers, so I have to use RAM for acceleration. I will add some small SSDs later. Fortunately, PernixData FVP 2.0 supports NFS, so I can use host memory to accelerate my lab workloads.

The installation

I have installed PernixData FVP 2.0 in my lab and deployed the host extension with the vSphere Update Manager to three of my lab hosts.

PernixData FVP consists of three components:

  • Host Extension
  • Management Server running on a Windows Server
  • UI Plugin for the vSphere C# and vSphere Web Client

The management server needs a MS SQL database, and it installs the 64-bit version of Oracle Java SE 7. For a PoC or a small deployment, you can use the Express edition of Microsoft SQL Server 2012. I installed the management server onto one of my Windows 2008 R2 servers. This server also hosts my vSphere Update Manager, so I already had a MS SQL database in place. I ran into some trouble right after the installation, because I had missed enabling the SQL Browser service. This is clearly stated in the installation guide. So RTFM. ;)

NOTE: The Microsoft® SQL Server® instance requires an enabled TCP/IP protocol even if the database is installed locally. Additional details on enabling TCP/IP using the SQL Server Configuration Manager can be found here. If using a SQL Named Instance, as in the example above, ensure that the SQL Browser Service is enabled and running. Additional details on enabling the SQL Browser Service can be found here.

After I had fixed this, the management server service started without problems and I was able to install the vSphere C# client plugin. You need the plugin to manage FVP, but the plugin installation is only necessary if you want to use the vSphere C# client. You don’t have to install a dedicated plugin for the vSphere Web Client.

To install the host extension, you simply import it into the vSphere Update Manager, build a host extension baseline, attach it to the hosts (or the cluster, datacenter object etc.) and remediate them. The hosts will go into maintenance mode, install the host extension and then exit maintenance mode. A reboot of the hosts is not necessary!

Right after the installation, I created my first FVP cluster. The trial period starts with the installation of the management server. There is no special trial license to install. Simply install the management server and deploy the host extension. Then you have 30 days to evaluate PernixData FVP 2.0.

Both steps, the installation of the host extension using the vSphere Update Manager as well as the installation of the management server, are really easy. You can’t configure much, and you don’t need to configure much. You can customize the network configuration (which vMotion network or which ports should be used), you can blacklist VMs and select VADP VMs. Oh, and you can re-enable the “Getting started” screen. Good for the customer, bad for the guy who’s paid to install FVP. ;) Nothing much to do. But I like it. It’s simple and you can quickly get started.

First impressions

My FVP cluster consists of three hosts. Because I don’t have any SSDs for the moment, I use host memory to accelerate the workload. During my tests, 15 VMs were covered by FVP and they ran workloads like Microsoft SQL Server, Microsoft Exchange, some Linux VMs, Windows 7 clients, file services and Microsoft SCOM. I also played with Microsoft Exchange Jetstress 2013 in my lab. A mixed bag of different applications and workloads. A picture says more than a thousand words. This is a screenshot of the usage tab after about one week. Quite impressive, and I can confirm that FVP accelerates my lab in a noticeable way.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I enabled FVP on Monday evening. Check the latency diagram that I took from vCenter. See the latencies dropping on Monday evening? The peaks during the week were caused by my tests.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Final words

Now it’s time to convince my sales colleagues to sell PernixData FVP. Or maybe some customers will read this blog post and ask my sales colleagues for PernixData. ;) I am totally convinced of this solution. You can buy PernixData FVP in different editions:

  • FVP Enterprise: No limit on the number of hosts or VMs
  • FVP Subscription: FVP Enterprise purchased using a subscription model
  • FVP Standard: No limit on the number of hosts or VMs. Perpetual license only. No support for Fault Domains, Adaptive Resource Management and Disaster Recovery integration (only in FVP Enterprise).
  • FVP VDI: Exclusively for VDI (priced on a per VM basis)
  • FVP Essentials Plus: FVP Standard that supports 3 hosts and accelerates up to 100 VMs. This product can only be used with VMware vSphere Essentials (Plus).

If you’re interested in a PoC or demo, don’t hesitate to contact me.

I’d like to thank Patrick Schulz, Systems Engineer DACH at PernixData, for his support! I recommend following him on Twitter, and don’t forget to take a look at his blog.

Juniper publishes vMX

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

This tweet from @JuniperNetworks really inspired me yesterday. I’ve liked Juniper’s Firefly Perimeter (vSRX) from the first day. I like the idea behind this product (yes, I like everything that can be run as a VM…). But yesterday Juniper went one better.

Yesterday Juniper Networks announced a virtualized and carrier-grade version of their MX Series 3D router. The Juniper Networks vMX is a virtual MX Series 3D Universal Edge Router and it’s optimized to run on x86 hardware. Juniper vMX can run on all major hypervisors, including VMware ESXi and KVM. It was also mentioned that vMX can be run in Docker containers or on bare metal.

The development of vMX was eased by Juniper’s acquisition of Contrail. Juniper’s physical MX series routers are powered by the Trio chipset, and Juniper has virtualized this chipset for vMX (now called vTrio) and optimized it for x86 hardware. Depending on the number of physical resources, a vMX can achieve a throughput of 160 Gbps. vMX uses vTrio and Junos OS and supports the same feature set, so it feels and behaves like a physical MX series router. This ensures that customers can leverage their Juniper MX know-how to run vMX in their environment. Whether a customer uses a physical or a virtual MX router is only a question of performance. Multiple vMX instances can be managed with Junos Space, the Contrail SDN controller and OpenStack Cloud Manager. Customers will be able to buy vMX beginning in Q1/2015 in a flexible license model (pay-as-you-grow, perpetual or subscription license). Juniper didn’t reveal any details about the pricing.

This short video was published by Juniper Networks and it’s available on YouTube.

VMware disables inter VM Transparent Page Sharing (TPS) for security reasons

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

This morning I discovered a tweet from Derek Seaman in my timeline that caught my attention.

TPS stands for Transparent Page Sharing and it’s one of VMware’s memory management technologies. VMware ESX(i) uses four different technologies to manage host and guest memory resources (check VMware KB2017642 for more information). The performance impact of these techniques increases from TPS down to swapping.

  • Transparent page sharing (TPS)
  • Ballooning
  • Memory Compression
  • Swapping

TPS is a technology by which redundant copies of memory pages are eliminated. You can think of TPS as a kind of memory deduplication. The hypervisor periodically scans the memory for pages that could possibly be shared. For each candidate memory page a hash is calculated and saved in a hash table. If a second candidate page has the same hash, a full bit-by-bit comparison of both pages is triggered. If both memory pages are identical, only one page is kept and the other memory page is reclaimed. TPS is enabled by default and shows good results, especially if you are running a lot of VMs with the same OS, like in VDI or terminal server environments.
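The hash-then-compare pattern is easy to illustrate. The following toy sketch in Python is of course nothing like VMkernel code, it just shows the principle: hash every candidate page, and only on a hash match do the expensive bit-by-bit comparison before sharing a page.

```python
import hashlib

def share_pages(pages):
    """Toy page sharing: hash each page, verify hash hits bit-by-bit,
    keep only one copy of identical pages."""
    store = {}     # digest -> indices into 'shared'
    shared = []    # deduplicated page store
    mapping = []   # for each input page: index of its shared copy
    for page in pages:
        digest = hashlib.sha1(page).digest()
        for idx in store.get(digest, []):
            if shared[idx] == page:   # full comparison, hashes may collide
                mapping.append(idx)
                break
        else:
            shared.append(page)
            store.setdefault(digest, []).append(len(shared) - 1)
            mapping.append(len(shared) - 1)
    return shared, mapping

# Three 4 KB pages, two of them identical: only two copies are stored.
pages = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
shared, mapping = share_pages(pages)
print(len(shared), mapping)   # 2 [0, 1, 0]
```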

With the advent of hardware-assisted memory virtualization, like Intel EPT or AMD RVI, VMware changed the behaviour of TPS and how guest memory is backed by physical memory. Guest memory was now backed with larger memory pages (2 MB instead of 4 KB) for better performance. But 4 KB pages were still used if no 2 MB of contiguous memory was available, e.g. in case of memory overcommitment or memory fragmentation. Using 2 MB memory pages has advantages, for sure, but from the perspective of TPS it has two disadvantages:

  • a small chance of finding two identical memory pages
  • the cost of a bit-by-bit comparison is dramatically higher for 2 MB pages than for 4 KB pages

The punchline is that with hardware-assisted memory virtualization, TPS is only actively used if the host is under memory pressure. But it is still there and working.

Safety over performance

Yesterday VMware published KB2080735 (Security considerations and disallowing inter-Virtual Machine Transparent Page Sharing). The purpose of this KB:

This article acknowledges the recent academic research that leverages Transparent Page Sharing (TPS) to gain unauthorized access to data under certain highly controlled conditions and documents VMware’s precautionary measure of no longer enabling TPS in upcoming ESXi releases. At this time, VMware believes that the published information disclosure due to TPS between virtual machines is impractical in a real world deployment.

Because of this, TPS will be disabled by default with the release of:

  • ESXi 5.5 Update release (Q1/ 2015)
  • ESXi 5.1 Update release (Q4/ 2014)
  • ESXi 5.0 Update release (Q1/ 2015)
  • The next major version of ESXi (ESXi 6.0)

Prior to these updates, VMware will release patches that introduce additional TPS management capabilities and that WILL NOT change the existing settings for inter-VM TPS (check KB2091682). As stated in KB2080735, the planned ESXi patch releases are:

  • ESXi 5.5 Patch 3
  • ESXi 5.1
  • ESXi 5.0

The patches for ESXi 5.0 and 5.1 are planned for Q4/2014. For ESXi 5.5 the patch is already available (ESXi550-201410401-BG).
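The new management capabilities are based on TPS “salting”: pages are only shared between VMs that carry the same salt. On the host side this is controlled by the Mem.ShareForceSalting advanced setting described in KB2091682. A minimal, hedged pyVmomi sketch to read and change that setting could look like this; hostname and credentials are placeholders, and you should verify the option name and value semantics against the KB for your build:

```python
# Hedged sketch (not from the original post): read and set the
# Mem.ShareForceSalting advanced option via pyVmomi.
# Placeholders: esxi01.lab.local, root/secret. Verify the semantics
# (0 = inter-VM TPS enabled, 2 = salting enforced) against KB2091682.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.lab.local", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        opt_mgr = host.configManager.advancedOption
        current = opt_mgr.QueryOptions("Mem.ShareForceSalting")[0]
        print(host.name, "Mem.ShareForceSalting =", current.value)
        # Re-enable inter-VM TPS only if you accept the security trade-off:
        opt_mgr.UpdateOptions(changedValue=[
            vim.option.OptionValue(key="Mem.ShareForceSalting", value=0)])
    view.Destroy()
finally:
    Disconnect(si)
```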

My 2 cents

Several years ago, the deactivation of TPS would have been fatal. Today, and in consideration of “safety over performance”, I think it was the right decision. If your design heavily relies on TPS, then maybe you have a bad design. ;)

Also a good read:

Frank Denneman: Future direction of disabling TPS by default and its impact on capacity planning
Magnus Andersson: Changes in ESXi Transparent Page Sharing (TPS) behaviour
Kenneth van Surksum: VMware decides to disable TPS in future ESXi releases by default
Marcel van den Berg: VMware will disable Transparent Page Sharing by default in future ESXi releases
Andrea Mauro: Bye bye Transparent Page Sharing
Chris Wahl: Transparent Page Sharing Vulnerable, Yet Largely Irrelevant

More will follow; ping me on Twitter if you find a good one!

My lab network design

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Inspired by Chris Wahl’s blog post “Building a New Network Design for the Lab“, I want to describe what my lab network design looks like.

The requirements

My lab is separated from my home network, and it’s focused on the needs of a lab. A detailed overview of my lab can be found here. I divided it into a lab part and an infrastructure part. The infrastructure part consists of devices that are needed to provide basic infrastructure and management. The other part is my playground.

While planning my lab, I focused on these requirements:

  • Reuse of existing equipment
  • Separation of traffic within the lab and to the outer world
  • Scalable, robust and predictable performance

The equipment

To meet my requirements, I had the following equipment available:

  • HP 1910-24G switch
  • HP 1910-8G switch
  • Juniper 5GT firewall

The design

The HP 1910 switch is an awesome product with a very good price/performance ratio, especially because they can do IP routing, which was important for my lab design. Each of my ESXi hosts has 4x 1 GbE interfaces, plus one interface for ILO. In sum, 20 ports are necessary to connect my ESXi hosts to my network. The 1910-24G and 1910-8G were connected with a 1 GbE RJ45 SFP. The 1910-8G is used to connect the firewall and client devices, e.g. a thin client or a laptop. No other devices are connected to my lab. Because storage is delivered by a HP StoreVirtual VSA, no ports are needed for a NAS or similar.

To separate the traffic, I created a couple of VLANs. Unlike Chris, I’m still using VLAN 1 in my lab. In a customer environment, I would avoid the use of VLAN 1.

VLAN ID | Name | Usage
1 | Access (Default) | Client connectivity
2 | Management | ILO, Management VMkernel ports
3 | Infra | VMs and devices for the lab infrastructure
4 | Lab 1 | Lab VLAN
5 | Lab 2 | Lab VLAN
6 | Lab 3 | Lab VLAN
7 | Temp | Temporary connectivity
200 | vMotion | vMotion VMkernel ports

VLAN 1 (Default) and 3 are carried to the 1910-8G. All VLANs are carried to the ESXi hosts using trunk ports on the 1910-24G. The Juniper 5GT is connected to the 1910-8G and the trusted interface is connected to an access port in VLAN 3. The untrusted port is connected to the outer world.

The routing looks a bit complex at first glance. I configured a couple of switch virtual interfaces (SVI) on the 1910-24G: one SVI for each of the VLANs 1, 2, 3, 7, 10, 11 and 100. But how do I get traffic in and out of my lab VLANs? I use a small firewall VM that is housed in VLAN 3 (Infra). It has interfaces (vNICs) in VLANs 4, 5 and 6. With this VM, I can carry traffic in and out of my lab VLANs, as long as a policy allows the traffic.

I use /27 subnets for VLANs 1 to 7, two /28 subnets for VLANs 100 (NFS) and 200 (vMotion), and two /24 subnets for VLANs 10 and 11 (both iSCSI).

VLAN ID | Name | IP Subnet
1 | Access (Default) |
4 | Lab 1 | 192.168.200.96/27
5 | Lab 2 | 192.168.200.128/27
6 | Lab 3 | 192.168.200.160/27
10 | iSCSI 1 | 192.168.110.0/24
11 | iSCSI 2 | 192.168.111.0/24
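The lab subnets are /27 slices of what is presumably 192.168.200.0/24 (the three prefixes in the table fall into that range). A quick Python snippet lists the possible /27 networks and their usable host ranges:

```python
# Enumerate the /27 subnets of 192.168.200.0/24; the lab VLANs use
# the .96, .128 and .160 networks from this list.
import ipaddress

for net in ipaddress.ip_network("192.168.200.0/24").subnets(new_prefix=27):
    hosts = list(net.hosts())
    print(net, "->", hosts[0], "-", hosts[-1])
```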

I don’t use a routing protocol inside my lab. It looks complex, but with this design I can easily separate the traffic for my three lab VLANs. The iSCSI VLANs are routable, but I don’t route iSCSI traffic. The same applies to NFS. This drawing gives you an overview of the routing.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To simplify address assignment, I use a central DHCP server on VLAN 3 with several scopes. The HP 1910-24G and my firewall VM act as DHCP relays and forward DHCP requests to my DHCP server. For each VLAN only a small number of dynamic IPs is available. Usually, the servers get a fixed IP.

VLAN 10 is used to carry iSCSI traffic from the HP StoreVirtual VSA to my ESXi hosts. The second iSCSI VLAN (ID 11) can be used for tests, e.g. to simulate routed iSCSI traffic. VLANs 4, 5 and 6 are used for lab work. Until I add a rule on my firewall VM, no traffic can enter or leave VLANs 4, 5 and 6. When deploying a new VM, I add the VM to VLAN 1 or 3. The VM is installed using MDT and PXE. After applying all necessary updates (MDT uses WSUS during the setup), I can add the VM to VLAN 4, 5 or 6.

Final words

Sure, a lab network design could be simpler. The IP subnets can be a pitfall if you’re not familiar with subnetting. The routing seems complex if you’re not an expert in IP routing. But until today, the network has done exactly what I expected.

HP 3PAR Peer Persistence for Microsoft Windows Servers and Hyper-V

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Some days ago I wrote two blog posts (part I and part II) about VMware vSphere Metro Storage Cluster (vMSC) with HP 3PAR Peer Persistence. Allow me to borrow a short description of what Peer Persistence is and what it does from the first of the two blog posts:

HP 3PAR Peer Persistence adds functionality to the HP 3PAR Remote Copy software and HP 3PAR OS, so that two 3PAR storage systems form a nearly continuous storage system. HP 3PAR Peer Persistence allows you to create a VMware vMSC configuration and to achieve a new quality of availability and reliability.

You can transfer the concept of a Metro Storage Cluster to Microsoft Hyper-V. There is nothing VMware-specific in that concept.

With the GA of 3PAR OS 3.2.1 in September 2014, HP announced a lot of new features. The most frequently mentioned feature is Adaptive Flash Cache. If you’re interested in more details about Adaptive Flash Cache, you will like the AFC deep dive on 3pardude.com. A little lost in all the noise is the newly added support for Peer Persistence with Hyper-V. This section is taken from the release notes of 3PAR OS 3.2.1:

3PAR Peer Persistence Software supports Microsoft Windows 2008 R2 and Microsoft Windows 2012 R2 Server and Hyper-V, in addition to the existing support for VMware. HP 3PAR Peer Persistence software enables HP 3PAR StoreServ systems located at metropolitan distances to act as peers to each other, presenting a nearly continuous storage system to hosts and servers connected to them. This capability allows to configure a high availability solution between two sites or data centers where failover and failback remains completely transparent to the hosts and applications running on those hosts.

3PAR Peer Persistence with Microsoft Windows Server and Hyper-V

Currently supported are Windows Server 2008 R2 and Server 2012 R2 and the corresponding versions of Hyper-V. This table summarizes the currently supported environments.

HP 3PAR OS | Host OS | Host connectivity | Remote Copy connectivity
3.2.1 | Windows Server 2008 R2 | FC, FCoE, iSCSI | RCIP, RCFC
3.2.1 | Windows Server 2012 R2 | FC, FCoE, iSCSI | RCIP, RCFC

At first glance, it seems that Microsoft Windows Server and Hyper-V support more options in terms of host and Remote Copy connectivity. This is not true! With 3PAR OS 3.2.1, HP added support for FCoE and iSCSI host connectivity, as well as support for RCIP, for VMware too. At this point, there is no winner. Check HP SPOCK for the latest support statements.

With 3PAR OS 3.2.1, a new host persona (Host Persona 15) was added for Microsoft Windows Server 2008, 2008 R2, 2012 and 2012 R2. This host persona must be used in Peer Persistence configurations. It is comparable to Host Persona 11 for ESXi. The setup and requirements for VMware and Hyper-V are similar. For a transparent failover a Quorum Witness is needed, and it has to be deployed onto a Windows Server 2012 R2 Hyper-V host (not 2008, 2008 R2 or 2012!). Peer Persistence operates in the same manner as with VMware: The Virtual Volumes (VV) are grouped into Remote Copy Groups (RCG) and mirrored synchronously between a source and a destination storage system. Source and destination volumes share the same WWN. They are presented using the same LUN ID, and the paths to the destination storage system are marked as standby. Check part I of my Peer Persistence blog series for more detailed information about how Peer Persistence works.

Final words

It was only a question of time until HP released support for Hyper-V with Peer Persistence. I would have assumed that HP would make more fuss about it, but AFC seems to be the killer feature in 3PAR OS 3.2.1. I’m quite sure that there are some companies out there that have waited eagerly for the support of Hyper-V with Peer Persistence. If you have any further questions about Peer Persistence with Hyper-V, don’t hesitate to contact me.

VMware jumps on the fast moving hyper-converged train

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

The whole story began with a tweet and a picture:

This picture, in combination with rumors about Project Mystic, motivated Christian Mohn to publish an interesting blog post. Today, two and a half months later, “Marvin”, or Project Mystic, got its final name: EVO:RAIL.

What is EVO:RAIL?

Firstly, we have to learn a new acronym: Hyper-Converged Infrastructure Appliance (HCIA). EVO:RAIL will be exactly this: a HCIA. IMHO EVO:RAIL is VMware’s attempt to jump on the fast-moving hyper-converged train. EVO:RAIL combines different VMware products (vSphere Enterprise Plus, vCenter Server, Virtual SAN and vCenter Log Insight) along with the EVO:RAIL deployment, configuration and management engine into a hyper-converged infrastructure appliance. Appliance? Yes, an appliance: a single stock keeping unit (SKU) including hardware, software and support. To be honest: VMware will not try to sell hardware. The hardware will be provided by partners (currently Dell, EMC, Fujitsu, Inspur, NetOne and SuperMicro).

VMware Chief Technologist Duncan Epping described four advantages of EVO:RAIL in a blog post published today:

EVO:RAIL is software-defined. Based on well-known VMware products, the EVO:RAIL engine simplifies the deployment, management and configuration of the building blocks.

EVO:RAIL is simple: The EVO:RAIL engine reduces the time from rack & stack until you can power on your first VM. You need less time for basic tasks, like the creation of VMs or the patch management of the hosts. If you need more compute or storage capacity, simply add additional 2U blocks (currently a maximum of 4 blocks, i.e. 16 nodes).

EVO:RAIL is highly resilient: A 2U block consists of four nodes. This results in a single four-host vSphere cluster with a single VSAN datastore and full support for VMware HA, DRS, FT etc. This means no downtime for VMs during planned maintenance or node failures.

EVO:RAIL allows customers to choose: Customers can obtain EVO:RAIL using a single SKU from their preferred EVO:RAIL partner. The partner provides hardware, software and support for the EVO:RAIL HCIA.

Each HCIA node will provide at least:

  • 2x Intel Xeon E5-2620 v2 six-core CPUs
  • at least 192GB of memory
  • 1x SLC SATADOM or SAS HDD as boot device
  • 3x SAS 10K RPM 1.2TB HDD for the VMware Virtual SAN datastore
  • 1x 400GB MLC enterprise-grade SSD for read/ write cache
  • 1x Virtual SAN-certified pass-through disk controller
  • 2x 10GbE NIC ports (either 10GBase-T or SFP+)
  • 1x 1GbE IPMI port for out-of-band management

This results in a four-node vSphere cluster with 48 cores, 768 GB RAM and 14.4 TB raw disk space in just 2U. A single block allows you to run 100 average-sized (2 vCPU, 4 GB RAM, 60 GB with redundancy) general-purpose VMs, or 250 View VMs (2 vCPU, 2 GB RAM, 32 GB linked clones).
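The per-block numbers are straightforward arithmetic from the node specs above; a quick check:

```python
# Sanity check of the EVO:RAIL per-block totals from the node specs.
nodes = 4
print(nodes * 2 * 6)       # 2x six-core CPUs per node -> 48 cores
print(nodes * 192)         # 192 GB per node -> 768 GB RAM
print(nodes * 3 * 1.2)     # 3x 1.2 TB HDDs per node -> 14.4 TB raw (VSAN)
```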

My thoughts

Looks like a Nutanix clone, doesn’t it? Yes, it’s a HCIA like a Nutanix block. But it’s focused on VMware (you can’t run Microsoft Hyper-V or KVM on it) and it will be sold by EVO:RAIL partners. This allows VMware to use a much wider distribution channel. It will be fun to see how other hyper-converged companies react to this announcement. Unfortunately HP isn’t listed as a HCIA partner company. But DELL is listed. Fun fact: DELL and Nutanix signed a contract in June 2014.

Strategic Relationship Significantly Expands Access and Distribution of Nutanix Solutions with Dell’s World-Class Hardware, Services and Marketing to Accelerate Adoption of Web-scale Converged Infrastructure in the Enterprise

Take a look at the “Introduction to VMware EVO: RAIL” whitepaper. There are other great blog posts about EVO:RAIL:

Duncan Epping: Meet VMware EVO:RAIL™ – A New Building Block for your SDDC
Chris Wahl: VMware Announces Software Defined Infrastructure with EVO:RAIL
Marcel van den Berg: VMware announces EVO:RAIL, a turnkey appliance offering SDDC in a box featuring vSphere and Virtual SAN
Marco Broeken: VMworld 2014: Introducing VMware EVO: RAIL
Vladan SEGET: VMware EVO:RAIL – New Hyper-Converged Solution By VMware
Eric Sloof: VMware EVO: RAIL Hyper-Converged Infrastructure Appliance

Memory management: VMware ESXi vs. Microsoft Hyper-V

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Virtualization is an awesome technology. A few weeks ago I visited a customer and we took a walk through their data centers. While standing in one of them I thought: Imagine all the servers that currently run as VMs were physical! I’m still impressed by the influence of virtualization. The idea is so simple: You share the resources of the physical hardware between multiple virtual instances: I/O, network bandwidth, CPU cycles and memory. After nearly 10 years of experience with server virtualization I can tell that memory in particular is one of the weak points. When a customer experiences performance problems, they are mostly caused by a lack of storage I/O or memory.

The reason for this post

Today I’d like to write a bit about the memory management of hypervisors, in this case the memory management of VMware ESXi (the trombone in the flute orchestra) and Microsoft Hyper-V. They are the leading hypervisors on the market (source: Magic Quadrant for x86 Server Virtualization Infrastructure). But there is another reason why I took a closer look at the memory management of Hyper-V: Microsoft’s support policies and recommendations for Exchange servers in hardware virtualization environments. In the run-up to an Exchange migration project I took a quick look into Microsoft’s TechNet, just to verify some questions. And then I stumbled over this statement, valid for Exchange 2013:

Exchange memory requirements and recommendations

Some hypervisors have the ability to oversubscribe or dynamically adjust the amount of memory available to a specific guest machine based on the perceived usage of memory in the guest machine as compared to the needs of other guest machines managed by the same hypervisor. This technology makes sense for workloads in which memory is needed for brief periods of time and then can be surrendered for other uses. However, it doesn’t make sense for workloads that are designed to use memory on an ongoing basis. Exchange, like many server applications with optimizations for performance that involve caching of data in memory, is susceptible to poor system performance and an unacceptable client experience if it doesn’t have full control over the memory allocated to the physical or virtual machine on which it’s running. As a result, using dynamic memory features for Exchange isn’t supported.

There are similar statements for Exchange 2007 and 2010. At first I thought “Okay, looks like the Exchange-on-NFS thing”. Check Josh Odgers’ blog post if you want to know more about this Exchange-on-NFS thing. If you’re running your Exchange on NFS, don’t read it. There is reason to believe that you will go out and shoot a Microsoft engineer after reading it. After a couple of seconds I thought “What does dynamic memory feature mean?” This was the beginning of a journey into the depths of hypervisor memory management.

The derivation

Memory is the only component in a server that can’t simply be oversubscribed. That’s plausible: you can schedule multiple VMs to a single CPU core using a time-slice mechanism, but you can’t share a memory cell if a VM has stored data in it. Now you have a number of options. You can configure a static memory size for each VM. If you have 32 GB memory in your virtualization host, you can run e.g. two VMs with 8 GB and four VMs with 4 GB memory. But what if a VM needs more memory? Either you reduce the amount of memory for the other VMs, or you have to add memory to your host. Not very flexible. Now suppose that the VMs take full advantage of their configured memory only very rarely. In this case we can use the unused memory for the running or new VMs. We can oversubscribe the memory of the physical host. But this only works as long as the amount of actively used memory is less than or equal to the memory size of the host. We only have to react in the case that a VM wants to take full advantage of its configured memory.
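To put numbers on that, a little sketch contrasting static allocation with oversubscription on the 32 GB host from the example above:

```python
# Static allocation: 2x 8 GB + 4x 4 GB fills the 32 GB host exactly.
host_gb = 32
static_vms = [8, 8, 4, 4, 4, 4]
print(sum(static_vms), "GB configured ->", sum(static_vms) <= host_gb)

# Oversubscription: 4x 16 GB VMs (64 GB configured) work fine as long
# as the actively used memory stays at or below the host's 32 GB.
configured = [16, 16, 16, 16]
active = [6, 5, 7, 8]
print(sum(configured), "GB configured,", sum(active), "GB active ->",
      sum(active) <= host_gb)
```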

How does VMware ESXi manage its memory?

VMware ESXi uses four technologies to manage its memory:

  • Transparent Page Sharing (TPS)
  • Ballooning
  • Memory Compression
  • Swapping

Since the introduction of large pages (2 MB memory pages), TPS is only used under memory contention (thanks to Manfred for this hint). With TPS the memory is divided into pages and the hypervisor checks if some of the pages are identical. If this is the case, the hypervisor stores only one copy of the page and sets pointers to the identical ones. If you’re running a lot of similar VMs, TPS can reduce the amount of used memory.

Ballooning uses a special driver inside the VM. The hypervisor can use this driver to allocate memory inside of a VM. The OS inside the VM then frees up memory that isn’t used, and the hypervisor can reclaim that memory.

Memory compression is used shortly before the hypervisor has to swap to disk. If a memory page can be compressed by at least 50%, it’s held in the memory compression cache (10% of the memory is reserved for this). Otherwise it’s swapped to disk. Swapping is the last technology: If there is no more memory left and the other technologies are used up to their maximum, memory pages are swapped out to disk.

Please note that this is a very rough summary. For more information, please check the VMware vSphere Resource Management Guide. With these techniques you can easily create four VMs with 16 GB memory each on a host with 32 GB memory. The important point is that the VMs can only allocate less than 32 GB memory in total, because the hypervisor also needs some memory for itself and for virtualization overhead. A VM needs at least the amount of overhead memory to start on VMware ESXi.

How does Microsoft Hyper-V manage its memory?

Until Windows Server 2008 R2 SP1, Microsoft Hyper-V was unable to do dynamic memory management; only static memory allocation was possible. To stay with the example, it wasn’t possible to start four 16 GB VMs on a 32 GB host with Hyper-V. During the power-on, Hyper-V reserves the configured memory of the VM, which makes unused memory unavailable for other VMs.

With Windows Server 2008 R2 SP1 Microsoft added dynamic memory management to Hyper-V. Since then you can enable dynamic memory on VM level. After enabling it for a VM you can set a so-called “Startup RAM”. This is the amount of memory which is assigned to the VM during startup, because Windows needs more memory during startup than in steady state (source). It should be set to the amount of memory which is needed to run the server and application with the desired performance. You can also configure a “Minimum RAM”: the amount of memory down to which the hypervisor can reclaim memory using a ballooning technique. And you can configure a “Maximum RAM”: the amount of memory up to which the hypervisor can add memory to the VM.

And now comes the interesting part. Any idea how the hypervisor adds memory to the VM? No? It’s using memory hot-add! If the VM needs more memory, it’s simply hot-added to the VM. This explains why the OS inside the VM has to support memory hot-add if you want to use dynamic memory. And it also explains why some applications are not supported with Hyper-V dynamic memory.


IMHO it’s not the dynamic memory management itself which leads to Microsoft’s support statement, it’s the way Microsoft Hyper-V manages the dynamic memory. Exchange checks the configured memory during the start of its services. If the memory size is increased after the start of the services, Exchange simply doesn’t recognize it. On the other hand, Microsoft SQL Server can profit from hot-added memory. Because of this, dynamic memory is supported with Microsoft SQL Server (check question 7 and answer 7 in the linked KB article). VMware ESXi doesn’t hot-add memory to a VM, so you have to configure a suitable memory size. If you hot-add memory, the same restrictions apply as for Hyper-V. Instead of relying on memory hot-add, you can simply configure a suitable memory size when using VMware ESXi. But always remember: Memory oversubscription can lead to performance problems if the VMs try to allocate the configured memory! Best practice is not to oversubscribe memory.


Memory management in ESXi and Hyper-V differs strongly. There’s no better or worse: they are too different to compare, and they were developed for different use cases.

Deploying HP StoreVirtual VSA – Part II

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Part I of this series covered the deployment; part II is dedicated to the configuration of the StoreVirtual VSA cluster. I assume that the Centralized Management Console (CMC) is installed. Start the CMC. If you see no systems under “Available Systems”, click “Find” in the menu and then choose “Find Systems…”. A dialog will appear. Click “Add…” and enter the ip address of one of the earlier deployed VSA nodes. Repeat this until all deployed VSA nodes are added, then click “Close”. Now you should have all available VSA nodes listed under “Available Systems”.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

A management group contains virtual and physical StoreVirtual systems that are managed together. Clusters and volumes are defined per management group, and so are user accounts. Right-click a node and choose “Add to New Management Group…” from the context menu. We will add all three nodes into this new management group.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “Next”. On the next page of the wizard we have to enter a username and password for an administrative user that will be added to all nodes.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

On the next page we have to provide a NTP server. You can set the time manually, but I recommend using a NTP server. In this case it’s the Active Directory domain controller in my lab. Please note that this server has to be reachable by the VSA nodes! In part I we deployed the VSA nodes with two NICs, and via eth0 they can reach the NTP server.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

On the next page of the wizard, you have to provide information about the DNS: DNS domain name, additional DNS suffixes and one or more DNS servers. For the DNS servers the same applies as for the NTP server: They have to be reachable by the VSA nodes!


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

To use the e-mail notification, you have to provide a SMTP server. I don’t have one in my lab, so I left the fields empty. This results in a warning message which can safely be ignored.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now comes a very important question: Standard or Multi-Site Cluster? A Multi-Site cluster is necessary if site fault tolerance is needed. It also ensures that traffic from hosts is only sent to the local site. A Multi-Site cluster can span multiple sites and can have cluster virtual ip addresses (cluster VIPs) in different subnets. A Multi-Site cluster is needed if you want to build a vSphere Metro Storage Cluster (vMSC) with HP StoreVirtual. I chose to create a standard cluster.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

After choosing the cluster type, we have to provide a cluster name and the nodes that should be members of this new cluster.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The next step is to configure the cluster virtual ip address (cluster VIP). This ip address has to be in the same subnet as the VSA nodes and is used to access the cluster. After the initial connection to the cluster VIP, the initiator will contact a VSA node for the data transfer.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The wizard allows us to create a volume. This step can be skipped. I created a 1 TB thin-provisioned volume.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

After clicking “Finish”, the management group and the cluster will be created. This step can take some time.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

At the end you will get a summary screen. You can create further volumes, or you can repeat the whole wizard to create additional management groups or clusters.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Congratulations! You have now a fully functional HP StoreVirtual VSA cluster.

Possible cluster VIP error message

Depending on your deployment, you may get this error message in the CMC:

VIP error: System is not reachable by any VIP in the cluster


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This message occurs if you have deployed your VSA nodes with two NICs and the NIC that is used for iSCSI isn’t selected as the preferred SAN/iQ interface. I mentioned in part I that I would refer to the “Select the preferred SAN/iQ interface” option later. This is now. You can get rid of this message by selecting the right interface as the preferred SAN/iQ interface. Select “Network” on a VSA node, then click the “Communication” tab and choose “Select LeftHandOS Interface…” from the “Communications Tasks” drop-down menu at the bottom of the page.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The message should disappear after changing this on each affected VSA node.

Add hosts

To present volumes to hosts, you have to add hosts. A host consists of a name, an ip address, an iSCSI IQN and, if needed, CHAP credentials. Multiple hosts can be grouped into server clusters. You need at least two hosts to build a server cluster. But first of all, we will add a single host:


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

If you want to work with application-managed snapshots, you have to provide a “Controlling Server IP Address”. When working with VMware vSphere, this is the ip address of the vCenter server.

With at least two hosts, you can create a server group. A server group simplifies volume management, because you can assign and unassign volumes to a group of hosts with a single click. This ensures the consistency of volume presentations for a group of hosts.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Presenting a volume

During the initial configuration we created a 1 TB thin-provisioned nRAID 10 volume. To assign this volume to a host, right-click the volume in the CMC and click “Assign and Unassign Servers…”. A window will pop up and you can check or uncheck the servers to which the volume should be assigned. A volume can be presented read-only or read-write.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

We are nearly at the end. We only have to add the cluster VIP to the iSCSI initiator and create a datastore out of the presented volume.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

After a rescan, a new datastore can be added using the presented volume. Have I mentioned that each VSA node has only 10 GB of data storage? Thin provisioning can be treacherous… ;)


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Final words

The deployment and configuration is really easy. But this short series only scratched the surface. You can now add more volumes, play with SmartClones and remote snapshots. Have fun!

Deploying HP StoreVirtual VSA – Part I

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I would like to thank Calvin Zito for the donation of StoreVirtual NFR licenses to vExperts. This will help to spread the know-how about this awesome product! If you are not a vExpert, you can download the StoreVirtual VSA for free and try it for 60 days. If you are a vExpert, ping Calvin on Twitter for a 1-year NFR license.

This blog post covers the deployment of the current StoreVirtual VSA release (LeftHand OS 11). A second blog post covers the configuration using the CMC. Both posts are focused on LeftHand OS 11 and VMware vSphere. If you are searching for a deployment and configuration guide for LeftHand OS 9.x or 10 on VMware vSphere, take a look at these two blog posts from Craig Kilborn: Part 1 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1 & Part 2 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1. Another blog post that covers LeftHand OS 11 is from Hugo Strydom. Hugo wrote about what he did with his VSA (vExpert: What I did with my HP VSA). I wrote a blog post about the HP StoreVirtual VSA some weeks ago. If you are interested in some basics about the VSA, check that post.


The deployment process has been simplified. The setup wizard did a good job in my lab, but AFAIK there are problems if you use Distributed Switches. If you are affected, please leave a comment or ping me via Twitter. But before we start the setup wizard, we have to think about the goals of our setup. There are some things that we need to consider. The deployment process can be divided into three steps:

  1. Planning
  2. Deployment
  3. Configuration

Planning the installation

Before you start, you should have a plan. There are some things you should consider.

vSwitches: We have to design and configure the virtual switches (vSwitches) and port groups. The vSwitches should be dedicated to the VSA cluster and the accessing hosts. You should configure at least 2x 1 GbE uplinks per vSwitch for performance and redundancy. If the iSCSI initiators and all nodes of the VSA cluster are running on the same host, you can use a vSwitch with no uplinks. If you want to use jumbo frames, you need to configure the vSwitches, port groups and VMkernel ports accordingly. I recommend using a dedicated iSCSI VLAN to separate the traffic.

IP addresses: Each VSA needs an ip address. I recommend using two ip addresses: one for eth0 and one for eth1. eth0 will be used for management and must be attached to a port group that makes it possible to reach the interface: either because your client is attached to the same port group, because the traffic is routed, or because the physical client is in the same VLAN as the VSA. eth1 will be used for iSCSI. You also need an ip address for the cluster virtual ip address (cluster VIP). This address must be in the same subnet as the eth1 ip addresses of the VSA nodes. If you want to use multipathing for your iSCSI initiators, each initiator needs two ip addresses in the same subnet as the VIP and the VSA nodes.

Hostnames: Meaningful hostnames facilitate management. I named my VSA nodes vsa01.lab.local, vsa02.lab.local and vsa03.lab.local. Feel free to name your VSAs in another fashion. :)

Storage: A VSA node has a single disk for the OS. All other disks are attached to a separate controller (when using VMware, the Paravirtual SCSI adapter is used). Storage can be added as VMDK or RDM to a VSA node, beginning with SCSI 1:0 (the first device on the second controller). If you want to use Adaptive Optimization (AO), you should have 10% of the total capacity on SSDs. The VMDKs or RDMs should be RAID-protected, so you should avoid the use of RAID 0. Disks can be hot-added, but not hot-removed. You need at least 5 GB, but a VSA can scale up to 50 TB.

CPU & Memory: CPU and memory resources have to be reserved. You should have at least two 2 GHz cores reserved for each VSA node. The memory requirements depend on the virtualized storage capacity. For 4 TB up to 10 TB you should have 7 GB RAM for each VSA node; if you want to use the same capacity with AO, you should have 8 GB RAM. For 500 MB up to 4 TB, you should have 5 GB RAM, and this also applies when using AO. In a production environment I strongly recommend using CPU and memory reservations and not running more than one VSA on a single host. This does not apply to a lab environment.
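If you want to script these sizing rules, they translate into a small helper. This is just a sketch based on the numbers above (LeftHand OS 11); check HP’s documentation for the authoritative values:

```python
def vsa_memory_gb(capacity_tb, adaptive_optimization=False):
    """RAM recommendation per VSA node, from the sizing rules above."""
    if capacity_tb <= 4:
        return 5                                  # up to 4 TB: 5 GB (with or without AO)
    if capacity_tb <= 10:
        return 8 if adaptive_optimization else 7  # 4-10 TB: 7 GB, or 8 GB with AO
    raise ValueError("above 10 TB: consult the HP sizing documentation")

print(vsa_memory_gb(3))        # 5
print(vsa_memory_gb(8, True))  # 8
```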

The deployment

I took some screenshots during the deployment of a VSA using the setup wizard. I ran the wizard on a Windows 8.1 client.

The setup file (HP_StoreVirtual_VSA_2014_Installer_for_VMware_vSphere_TA688-10518.exe) is self-extracting. After the extraction a CMD window comes up, asking you if you want to use the GUI or the CLI interface. I chose the GUI wizard. Unfortunately, after pressing “2” for the GUI wizard, the wizard didn’t appear. I had to run the setup file as administrator (right-click the file, then choose “Run as administrator”). On the welcome page simply click “Next”.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You have to provide a hostname or ip address, and login credentials for the target ESXi host or the vCenter server. I chose an ESXi host as the target for my VSA deployment.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

On the third page you get a summary of the host you chose one step earlier.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now you can choose between deploying a VSA or a Failover Manager. The latter is a special manager used in clusters as a quorum tie-breaker. But we want to deploy a VSA.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

In the next step we have to choose a datastore in which the VSA should reside. This has no impact on the storage configured later.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The next step covers the NIC setup of the VSA. As I mentioned earlier, I recommend using two NICs for the VSA: one for management and a second one for iSCSI traffic. As you can see in the screenshot, I used eth0 for management.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The second NIC is dedicated to iSCSI traffic. Please notice the drop-down menu at the bottom, “Select the preferred SAN/iQ interface”. I will refer to it later.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now it’s time to give a name to the VM and to select the drive type. Because I had no RDMs in my lab, the option is greyed out.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Now we have to configure the data disks.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The wizard allows you to deploy more than one VSA. In the next step you can choose if you want to deploy another VSA on the same or another host, or if you are done. I only deployed one VSA, so I was done at this point.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Before you click “Deploy”, you should check the settings. If everything is fine, hit the “Deploy” button. The deployment will start immediately.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

After a couple of minutes the deployment is finished. Hit “Finish”. Now it’s time to start the Centralized Management Console (CMC). Usually the CMC is installed automatically by the wizard; if not, you can install it manually.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Part II covers the configuration of the management group, cluster etc. If you have further questions or feedback, feel free to leave a comment!

DataCore announces SANsymphony-V10

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Today DataCore announced their latest SANsymphony-V release. After the merge of SANmelody & SANsymphony, SANsymphony-V10 is the 10th generation of DataCore’s flagship product. Interestingly, DataCore uses the terms “software-defined” and “Virtual SAN”. Whether the product matches the definition of these terms, everyone should decide for themselves. But this is another story.

What is DataCore SANsymphony-V?

What DataCore definitely does is automate and simplify storage management and provisioning. I really like its simplicity. DataCore SANsymphony-V can deliver enterprise-class functionality like synchronous mirroring, replication, snapshots, clones, thin provisioning and tiering. It runs on x86 hardware with Microsoft Windows Server 2008 or 2012. Multiple servers can be grouped together for load balancing and redundancy. A storage pool can be created out of internal or external flash and rotating rust. Single or mirrored virtual disks can be carved out of this storage pool. Hosts can access these virtual disks using iSCSI or Fibre Channel. Because DataCore SANsymphony-V10 can use several different technologies as backend for storage pools, it’s easy to replace backend storage. You can add or remove disks to or from storage pools. If your backend storage is an old EMC CLARiiON and you get a new HP MSA 2040 Storage, you can replace the old storage without disruption.

What’s new in SANsymphony-V10?

I took this information directly from the DataCore SANsymphony-V10 announcement page:

  • Scalability has doubled from 16 to 32 nodes; Enables Metro-wide N+1 grid data protection
  • Supports high-speed 40/56 GigE iSCSI; 16Gbps Fibre Channel; iSCSI Target NIC teaming
  • Performance visualization/Heat Map tools add insight into the behavior of Flash and disks
  • New auto-tiering settings optimize expensive resources (e.g., flash cards) in a pool
  • Intelligent disk rebalancing, dynamically redistributes load across available devices within a tier
  • Automated CPU load leveling and Flash optimizations to increase performance
  • Disk pool optimization and self-healing storage; Disk contents are automatically restored across the remaining storage in the pool; Enhancements to easily select and prioritize order of recovery
  • New self-tuning caching algorithms and optimizations for flash cards and SSDs
  • ‘Click-simple’ configuration wizards to rapidly set up different use cases (Virtual SAN; High-Availability SANs; NAS File Shares; etc.)

Along with the new features, DataCore announced a new licensing model. Besides the traditional server license, there will be a Virtual SAN license which includes tiering, adaptive caching, storage pooling, synchronous mirroring, thin provisioning and snapshots/clones. Both variants, the traditional SANsymphony-V10 and the Virtual SAN, run on Windows Server 2012. So the Virtual SAN will not be a virtual appliance. AFAIK it will only be a special license.

General availability is scheduled for May 30, 2014. So stay tuned. :) I hope to get SANsymphony-V10 into my lab as fast as possible.