Tag Archives: fvp

Consider the Veeam Network transport mode if you use NFS datastores

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I’m using Veeam Backup & Replication (currently 8.0 Update 3) in my lab environment to backup some of my VMs to a HP StoreOnce VSA. The VMs reside in a NFS datastore on a Synology DS414slim NAS, the StoreOnce VSA is located in a local datastore (RAID 5 with SAS disks) on one of my ESXi hosts. The Veeam backup server is a VM and it’s also the Veeam Backup Proxy. The transport mode selection is set to “Automatic selection”.

Veeam Backup & Replication offers three different backup proxy transport modes:

  • Direct SAN Access
  • Virtual Appliance
  • Network

The Direct SAN Access transport mode is the recommended mode if the VMs are located in shared datastores (connected via FC or iSCSI). The Veeam Backup Proxy needs access to the LUNs, so the Veeam Backup Proxy is usually a physical machine. The data is read by the backup proxy directly from the LUNs. The Virtual Appliance mode uses the SCSI hot-add feature, which allows the attachment of disks to a running VM. In this case, the data is read by the backup proxy VM from the directly attached SCSI disk. In contrast to the Direct SAN Access mode, the Virtual Appliance mode can only be used if the backup proxy is a VM. The third transport mode is the Network transport mode. It can be used in any setup, regardless of whether the backup proxy is a VM or a physical machine. In this mode, the data is retrieved via the ESXi management network and travels over the network using the Network Block Device protocol (NBD, or NBDSSL for the encrypted variant). This is a screenshot of the transport mode selection dialog of the backup proxy configuration.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

As you can see, the transport mode is selected automatically if you don’t pick a specific transport mode. The selection occurs in the following order: Direct SAN Access > Virtual Appliance > Network. So if you have a physical backup proxy without direct access to the VMFS datastore LUNs, Veeam Backup & Replication will use the Network transport mode. A virtual backup proxy will use the Virtual Appliance transport mode. This explains why Veeam uses the Virtual Appliance transport mode in my lab environment.
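The selection order described above can be sketched in a few lines of Python. This is purely my own illustration of the documented fallback order (Direct SAN Access > Virtual Appliance > Network), not Veeam’s actual code:

```python
# Illustrative sketch of Veeam's automatic transport mode selection order.
# Not actual Veeam code: just the documented fallback logic.
def select_transport_mode(proxy_is_vm: bool, has_direct_san_access: bool) -> str:
    """Pick the first usable transport mode in the documented order."""
    if has_direct_san_access:
        return "Direct SAN Access"   # proxy can read the LUNs directly
    if proxy_is_vm:
        return "Virtual Appliance"   # SCSI hot-add requires a virtual proxy
    return "Network"                 # NBD/NBDSSL works in any setup

# A virtual proxy without LUN access ends up with Virtual Appliance,
# which is exactly the situation in my lab.
print(select_transport_mode(proxy_is_vm=True, has_direct_san_access=False))
```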

Some days ago, I configured e-mail notifications for some vCenter alarms. During the last nights I got alarm messages: a host had been disconnected from the vCenter server, but it reconnected a few seconds later. Another observation was that a running vSphere Client lost the connection to the vCenter Update Manager during the night. After some troubleshooting, I found indications that some of my VMs became unresponsive. With this information, I quickly found the VMware KB article “Virtual machines residing on NFS storage become unresponsive during a snapshot removal operation (2010953)“. Therefore I switched the transport mode from Virtual Appliance to Network.

I recommend using the Network transport mode instead of the Virtual Appliance transport mode if you have a virtual Veeam Backup Proxy and NFS datastores. I really can’t say that it runs slower than the Virtual Appliance transport mode. It just works.

Important note for PernixData FVP customers

Remember to exclude the Veeam Backup Proxy VM from acceleration if you use the Virtual Appliance or NBD transport mode. If you use datastore policies, blacklist the VM or configure it as a VADP appliance. If you use VM policies, simply don’t configure a policy for the Veeam Backup Proxy VM. If you use Direct SAN Access, you need a pre- and a post-backup script to suspend the cache population during the backup. Check Frank Denneman’s blog post about “PernixData FVP I/O Profiling PowerCLI commands“.

FVP Freedom: Get Pernix’d for free

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

PernixData is one of the presenting sponsors at the Virtualization Field Day 5 (VFD5). One of the four key announcements is FVP Freedom.

FVP Freedom will be available in the fall of 2015 and it’s a completely free version of PernixData FVP. Of course, the functionality is limited. FVP Freedom will only support a single cluster, but with an unlimited number of VMs. Instead of SSDs, FVP Freedom will support up to 128 GB of DFTM (Distributed Fault Tolerant Memory) per cluster. FVP Freedom will be completely community supported.

You can register for FVP Freedom following this link.

Beside the announcement of FVP Freedom, PernixData also announced important enhancements to PernixData FVP. With the upcoming release of FVP, it will support VMware vSphere 6 and VVols. PernixData also added a new “phone home” functionality and a new HTML5 based GUI.

The two other announcements are PernixData Architect, a software product that monitors your infrastructure from the storage perspective and provides recommendations for your infrastructure, and PernixData Cloud. The latter provides a kind of benchmark showing how your infrastructure compares to other infrastructures. The data for PernixData Cloud will be provided by PernixData Architect and FVP Freedom.

You can watch the whole presentation on the VFD5 website. It will be available shortly.

Selected as PernixPro

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Yesterday, at 02:13am (CET), I got an awesome e-mail:

Dear Patrick,

I am pleased to welcome you to the PernixPro program!

I’m very happy to be part of this program!

PernixData | PernixPro

This program is similar to the VMware vExpert or Microsoft MVP program. It’s designed to spread the magic of PernixData FVP. I am totally convinced of PernixData FVP. Because of this, I’m very pleased to be part of the program. Thank you for the recognition!

If you want to know more about the Pernix Pro program, make sure that you take a look at the corresponding website.


Tiering? Caching? Why it’s important to differ between them.

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Some days ago I talked to a colleague from our sales team about different solutions for a customer. I will spare you the details, but we came across PernixData FVP, HP 3PAR Adaptive Optimization, HP 3PAR Adaptive Flash Cache and DataCore SANsymphony-V. And then the question of all questions came up: “What is the difference?”.

Simplify, then add Lightness

Let’s talk about tiering. To make it simple: tiering moves a block from one tier to another, depending on how often the block is accessed in a specific time. A tier is a class of storage with specific characteristics, for example ultra-fast flash, enterprise-grade SAS drives or even nearline drives. Characteristics can be the drive type, the used RAID level or a combination of both. A 3-tier storage design can consist of only one drive type, organized in different RAID levels: tier 1 can be RAID 1 and tier 3 can be RAID 6, but all tiers use enterprise-grade 15k SAS drives. But you can also mix drive types and RAID levels, for example tier 1 with flash, tier 2 with 15k SAS in a RAID 5 and tier 3 with SAS-NL and RAID 6. Each time a block is accessed, the block “heats up”. If it’s hot enough, it is moved one tier up. If it’s accessed less often, the block “cools down” and at a specific point, the block is moved a tier down. If a tier is full, colder blocks have to be moved down and hotter blocks have to be moved up. This is a bit simplified, but products like DataCore SANsymphony-V with Auto-Tiering or HP 3PAR Adaptive Optimization work this way.
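The heat-up/cool-down mechanics can be sketched in a small simulation. This is my own simplified illustration, not any vendor’s algorithm; the thresholds and tier count are arbitrary assumptions (tier 0 is the fastest tier here):

```python
# Minimal tiering simulation: accesses heat blocks up, a periodic rebalance
# pass promotes hot blocks one tier up and demotes cold blocks one tier down.
class TieredStore:
    def __init__(self, tiers=3, promote_at=5, demote_at=1):
        self.tiers = tiers
        self.promote_at = promote_at   # heat needed to move one tier up
        self.demote_at = demote_at     # heat at or below which a block moves down
        self.placement = {}            # block id -> tier number (0 = fastest)
        self.heat = {}                 # block id -> access counter

    def access(self, block):
        self.placement.setdefault(block, self.tiers - 1)  # new blocks land low
        self.heat[block] = self.heat.get(block, 0) + 1

    def rebalance(self):
        """Periodic pass: promote hot blocks, demote cold ones, reset heat."""
        for block, h in self.heat.items():
            tier = self.placement[block]
            if h >= self.promote_at and tier > 0:
                self.placement[block] = tier - 1
            elif h <= self.demote_at and tier < self.tiers - 1:
                self.placement[block] = tier + 1
            self.heat[block] = 0       # cool down for the next interval

store = TieredStore()
for _ in range(6):
    store.access("hot-block")          # heats up past the promote threshold
store.access("cold-block")             # accessed once, stays cold
store.rebalance()
print(store.placement)                 # {'hot-block': 1, 'cold-block': 2}
```

Note that "hot-block" still needs several rebalance intervals to reach tier 0, which is exactly the warm-up delay discussed below.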

Let’s talk about caching. With caching, a block is only copied to a faster region, which can be flash or even DRAM. The original block isn’t moved; only a copy of the accessed block is placed on the faster medium. If this block is accessed again, the data is served from the faster medium. This also works for write I/O: if a block is written, the data is written to the faster medium and moved later to the underlying, slower medium. You can’t store block copies indefinitely, so less-accessed blocks have to be removed from the cache if they are not accessed, or if the cache fills up. Examples of caching solutions are PernixData FVP, HP 3PAR Adaptive Flash Cache or NetApp Flash Pool (and also Flash Cache). I deliberately left the classic storage controller cache off this list. All of the listed caching technologies (except NetApp Flash Cache) can do write-back caching. I wouldn’t recommend read-only caching solutions like VMware vSphere Flash Read Cache, except in two situations: your workload is focused on read I/O, and/or you already own a vSphere Enterprise Plus license and don’t want to spend extra money.
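The write-back behavior described above can also be sketched briefly. This is an illustration only (a tiny LRU-evicting write-back cache), not how FVP or any of the listed products are implemented; in real write-back designs the write is replicated to a second host or cache before it is acknowledged:

```python
from collections import OrderedDict

# Minimal write-back cache sketch: reads copy blocks into the fast medium,
# writes are acknowledged from the cache and destaged to the slow backing
# store only when the block is evicted (LRU order).
class WriteBackCache:
    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing            # dict simulating the slow medium
        self.cache = OrderedDict()        # block -> (data, dirty flag)

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)        # cache hit: refresh LRU order
            return self.cache[block][0]
        data = self.backing[block]               # cache miss: slow read
        self._insert(block, data, dirty=False)   # keep a copy for next time
        return data

    def write(self, block, data):
        self._insert(block, data, dirty=True)    # acknowledged from the cache

    def _insert(self, block, data, dirty):
        self.cache[block] = (data, dirty)
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            victim, (vdata, vdirty) = self.cache.popitem(last=False)
            if vdirty:
                self.backing[victim] = vdata     # destage dirty block first

backing = {"a": 1, "b": 2, "c": 3}
cache = WriteBackCache(capacity=2, backing=backing)
cache.write("x", 99)     # lands in the cache only
cache.read("a")
cache.read("b")          # evicts "x", destaging it to the backing store
print(backing["x"])      # 99
```

Losing the cache here loses nothing that was already destaged, which is why the paragraph above insists that written blocks must sit in a second cache before being acknowledged.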

Tiering or caching? What to choose?

Well… it depends. What is the main goal when using these techniques? Accelerating workloads and making the best use of scarce and expensive storage (commonly flash storage).

Regardless of the workload, tiering needs some time to let the often-accessed blocks heat up. Some vendors may anticipate this partially by always writing data to the fastest tier. But I don’t think that this is what I would call efficient. One benefit of tiering is that you can have more than two tiers. You can have a small flash tier, a bigger SAS tier and a really big SAS-NL tier. Usually you will see a 10% flash / 40% SAS / 50% SAS-NL distribution. But as I also mentioned: you don’t have to use flash in a tiered storage design. That’s a plus. On the downside, tiering can make mirrored storage designs complex. Heat maps aren’t mirrored between storage systems. If you fail over your primary storage, all blocks need to heat up again. I know that vendors are working on that. HP 3PAR and DataCore SANsymphony-V currently have a “performance problem” after a failover. It’s only fair to mention it. Here are two examples from products I know well that both offer tiering: in an HP 3PAR Adaptive Optimization configuration, data is always written to the tier from which the virtual volume was provisioned. This explains the best practice of provisioning new virtual volumes from the middle tier (Tier 1 CPG). DataCore SANsymphony-V uses the performance class in the storage profile of a virtual disk to determine where data should be written. Depending on the performance class, data is written to the highest available tier (tier affinity is taken into account). Don’t get confused by the tier numbering: some vendors use tier 0 as the highest tier, others may start counting at tier 1.

Caching is more “spontaneous”. New blocks are written to the cache (usually flash storage, but it can also be DRAM). If a block is read from disk, it’s placed in the cache. Depending on the cache size, you can hold a lot of data. You can lose the cache, but you can’t lose the data in this case. The cache only holds block copies (okay, okay, written blocks shouldn’t be acknowledged until they are in a second cache/ host/ $WHATEVER). If the cache is gone, it’s relatively quickly filled up again. You usually can’t have more than two “tiers”: you can have flash and you can have rotating rust. Exception: PernixData FVP can also use host memory. I would call this an additional half tier. ;) Nutanix uses a tiered storage design in their hyper-converged platform: flash storage is used as a read/ write cache, cost-effective SATA drives are used to store the data. Caching is great if you have unpredictable workloads. Another interesting point: you can cache at different places in the stack. Take a look at PernixData FVP and HP 3PAR Adaptive Flash Cache. PernixData FVP sits next to the hypervisor kernel. HP 3PAR AFC works at the storage controller level. FVP is awesome for accelerating VM workloads, but what if I have physical database servers? At this point, HP 3PAR AFC can play to its advantages. Because you usually have only two “tiers”, you will need more flash storage compared to a tiered storage design, especially if you mix flash and SAS-NL/ SATA.

Final words

Is there a rule when to use caching and when to use tiering? I don’t think so. You may use the workload as an indicator. If it’s more predictable, you should take a closer look at a tiered storage design, in particular if the customer wants to separate data of different classes. If you have to deal with more unpredictable workloads, take a closer look at caching. There is no law that prevents combining caching and tiering. In the end, the customer requirements are the key. Do the math. Sometimes caching can outperform tiering from the cost perspective, especially if you mix flash and SAS-NL/ SATA in the right proportion.

The beginning of a deep friendship: Me & PernixData FVP 2.0

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I’m a bit late, but better late than never. Some days ago I installed PernixData FVP 2.0 in my lab and I’m impressed! Until this installation, solutions such as PernixData FVP or VMware vSphere Flash Read Cache (vFRC) weren’t interesting for me or most of my customers. Some of my customers played around with vFRC, but most of them decided to add flash devices to their primary storage system and use techniques like tiering or flash cache. Especially SMB customers had no chance to use flash or RAM to accelerate their workloads because of tight budgets. With decreasing costs for flash storage, solutions like PernixData FVP and VMware vSphere Flash Read Cache (vFRC) are getting more interesting for my customers. Another reason was my lab: I simply didn’t have the equipment to play around with that fancy stuff. But things have changed and now I’m ready to give it a try.

The environment

For the moment I don’t have any SSDs in my lab servers, so I have to use RAM for acceleration. I will add some small SSDs later. Fortunately PernixData FVP 2.0 supports NFS and I can use host memory to accelerate my lab workloads.

The installation

I have installed PernixData FVP 2.0 in my lab and deployed the host extension with the vSphere Update Manager to three of my lab hosts.

PernixData FVP consists of three components:

  • Host Extension
  • Management Server running on a Windows Server
  • UI Plugin for the vSphere C# and vSphere Web Client

The management server needs a MS SQL database, and it installs the 64-bit version of Oracle Java SE 7. For a PoC or a small deployment, you can use the Express edition of Microsoft SQL Server 2012. I installed the management server on one of my Windows 2008 R2 servers. This server also hosts my vSphere Update Manager, so I already had a MS SQL database in place. I had some trouble right after the installation, because I had missed enabling the SQL Browser service. This is clearly stated in the installation guide. So RTFM. ;)

NOTE: The Microsoft® SQL Server® instance requires an enabled TCP/IP protocol even if the database is installed locally. Additional details on enabling TCP/IP using the SQL Server Configuration Manager can be found here. If using a SQL Named Instance, as in the example above, ensure that the SQL Browser Service is enabled and running. Additional details on enabling the SQL Browser Service can be found here.

After I had fixed this, the management server service started without problems and I was able to install the vSphere C# client plugin. You need the plugin to manage FVP, but the plugin installation is only necessary if you want to use the vSphere C# client. You don’t have to install a dedicated plugin for the vSphere Web Client.

To install the host extension, you can simply import the host extension into the vSphere Update Manager, build a host extension baseline, attach it to the hosts (or the cluster, datacenter object etc.) and remediate them. The hosts will go into the maintenance mode, install the host extension and then exit maintenance mode. A reboot of the hosts is not necessary!

Right after the installation, I created my first FVP cluster. The trial period starts with the installation of the management server. There is no special trial license to install. Simply install the management server and deploy the host extension. Then you have 30 days to evaluate PernixData FVP 2.0.

Both steps, the installation of the host extension using the vSphere Update Manager as well as the installation of the management server, are really easy. You can’t configure much, and you don’t need to configure much. You can customize the network configuration (which vMotion network or which ports should be used), you can blacklist VMs and select VADP VMs. Oh, and you can re-enable the “Getting started” screen. Good for the customer, bad for the guy who’s paid to install FVP. ;) Nothing much to do. But I like it. It’s simple and you can quickly get started.

First impressions

My FVP cluster consists of three hosts. Because I don’t have any SSDs for the moment, I use host memory to accelerate the workload. During my tests, 15 VMs were covered by FVP and they ran workloads like Microsoft SQL Server, Microsoft Exchange, some Linux VMs, Windows 7 clients, file services and Microsoft SCOM. I also played with Microsoft Exchange Jetstress 2013 in my lab. A mixed bag of different applications and workloads. A picture says more than a thousand words: this is a screenshot of the usage tab after about one week. Quite impressive, and I can confirm that FVP accelerates my lab in a noticeable way.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I enabled FVP on Monday evening. Check the latency diagram that I took from vCenter. See the latencies dropping on Monday evening? The peaks during the week were caused by my tests.


Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Final words

Now it’s time to convince my sales colleagues to sell PernixData FVP. Or some customers read this blog post and ask my sales colleagues for PernixData. ;) I am totally convinced of this solution. You can buy PernixData FVP in different editions:

  • FVP Enterprise: No limit on the number of hosts or VMs
  • FVP Subscription: FVP Enterprise purchased using a subscription model
  • FVP Standard: No limit on the number of hosts or VMs. Perpetual license only. No support for Fault Domains, Adaptive Resource Management and Disaster Recovery integration (only in FVP Enterprise).
  • FVP VDI: Exclusively for VDI (priced on a per VM basis)
  • FVP Essentials Plus: FVP Standard that supports 3 hosts and accelerates up to 100 VMs. This product can only be used with VMware vSphere Essentials (Plus).

If you’re interested in a PoC or demo, don’t hesitate to contact me.

I’d like to thank Patrick Schulz, Systems Engineer DACH at PernixData, for his support! I recommend following him on Twitter, and don’t forget to take a look at his blog.