How to shrink thin-provisioned disks

Disk space is scarce. I only have about 1 TB of SSD storage in my lab and I don’t like to waste too much of it. My hosts use NFS to connect to my Synology NAS, and even though I use the VAAI-NAS plugin, I use thin-provisioned disks only. Thin-provisioned disks tend to grow over time. If you copy a 1 GB file into a VM and delete this file immediately, you will find that the VMDK has grown by 1 GB. This is caused by the guest filesystem: when a file is deleted, only the metadata is removed and the blocks are marked as free; the data itself stays on disk. The old data is only overwritten later, when new data is written to the blocks that were marked as free. VMware ESXi doesn’t know that the guest has marked blocks as free, so ESXi can’t shrink the thin-provisioned VMDK.
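
You can watch this growth with PowerCLI, which reports the provisioned and the actually consumed space of a VM. This is just a quick way to check; the vCenter and VM names are placeholders.

  # Quick check: provisioned vs. actually used space of a VM (names are placeholders)
  Connect-VIServer -Server vcenter.lab.local
  Get-VM -Name 'testvm01' | Select-Object Name,
      @{Name='ProvisionedGB'; Expression={[math]::Round($_.ProvisionedSpaceGB, 1)}},
      @{Name='UsedGB';        Expression={[math]::Round($_.UsedSpaceGB, 1)}}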

You can observe a similar behavior with VMFS and underlying thin-provisioned LUNs: if a VMDK is removed from a VMFS datastore, the underlying thin-provisioned LUN doesn’t show more free space. In this case, the VAAI UNMAP primitive can be used to tell the storage system which blocks are free and can be reclaimed. Some storage systems that don’t support VAAI UNMAP use contiguous regions filled with zeros to identify reclaimable storage space. Before free space can be reclaimed, the free blocks on the VMFS have to be filled with zeros. A similar technique can be used to shrink thin-provisioned guest hard disks. Please note that I don’t want to focus on reclaiming space from underlying LUNs. I’m only talking about shrinking thin-provisioned disks!

To shrink a thin-provisioned VMDK, the free space in the guest filesystem has to be zeroed out. If you use Windows, you can use SDelete. On a Unix-like OS (Linux, FreeBSD, Solaris, …), use dd (a short example of the zeroing step follows the list below). After you have zeroed out the free space, you have to move the VM to another datastore with Storage vMotion. Now it’s getting complicated: you have to make sure that the legacy datamover (fsdm) is used for the Storage vMotion. There are three different datamovers:

  • fsdm
  • fs3dm, and
  • fs3dm – hardware offload
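
As a reference, this is roughly what the zeroing step can look like. The sketch below triggers SDelete through PowerShell remoting, the same way my script does it further down; the computer name and the path to sdelete.exe are placeholders.

  # Zero out the free space of drive C: inside the guest (names and paths are placeholders)
  Invoke-Command -ComputerName 'testvm01' -ScriptBlock {
      & 'C:\Tools\sdelete.exe' -z C:
  }
  # On a Linux guest, the classic dd approach has the same effect:
  # dd if=/dev/zero of=/zerofile bs=1M; rm /zerofile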

The fsdm is the oldest and slowest datamover. The fs3dm and the fs3dm with hardware offload are newer. In case of the latter, the process is offloaded to the hardware using VAAI (the Full Copy primitive). At this point, I’d like to refer to a blog post by Duncan Epping (“Blocksize impact?”), who has described the differences between the datamovers in more detail. The point is that the fsdm doesn’t copy blocks that are filled with zeros. But how can I make sure that the fsdm is used?

  • Move the VM to a datastore with a different block size

This can be difficult, because VMFS5 datastores always have a block size of 1 MB, unless they were upgraded from VMFS3. Simply create a new VMFS3 datastore with a block size other than 1 MB and use it as the destination.

  • Move the VM from VMFS to NFS, from NFS to VMFS or from NFS to NFS

In this case the fsdm will be used. Please note that the fsdm will not be used if you move a VM from a VMFS5 to another VMFS5 datastore! In that case the fs3dm is used, and this wouldn’t shrink the thin-provisioned VMDK. The downside: the fsdm is slow. Really slow. If you have a monster VM, a Storage vMotion can take a looooong time (worth reading: “VMware Storage vMotion, Data Movers, Thin Provisioning, Barriers to Monster VM’s” by Michael Webster).
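
For a single VM, the Storage vMotion itself is a PowerCLI one-liner. The VM and datastore names below are placeholders; the destination is assumed to be an NFS datastore, so the fsdm is used.

  # Storage vMotion to an NFS datastore so that the fsdm datamover is used
  Get-VM -Name 'testvm01' | Move-VM -Datastore (Get-Datastore -Name 'NFS-Datastore01')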

I wrote a PowerShell script that uses PowerShell remoting and VMware’s PowerCLI cmdlets to do the following tasks:

  • get a list of all local disks using Get-WmiObject
  • zero out the free space on those disks
  • move the VM to a destination datastore
  • move the VM back to its source host and source datastore
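
The first step is nothing more than a WMI query for local disks. For reference, this is a stand-alone version of that query, run inside the guest:

  # List the local disks the script would process (DriveType 3 = local disk).
  # Disks attached via in-guest iSCSI also show up here!
  Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType = 3" |
      Select-Object DeviceID, VolumeName, @{Name='SizeGB'; Expression={[math]::Round($_.Size / 1GB, 1)}}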

For the moment, the script only works with Windows VMs. SDelete must be available in the VM. Make sure that you use the latest release of SDelete (currently 1.61). PowerShell remoting has to be enabled on the VMs. Feel free to use and/or edit my script. To get this script working, please change the content of the variables for

  • $PathToSDelete
  • $VIServer
  • $CredFile
  • $Username
  • $DstDS
  • $DstDSHost and
  • $ClusterName

according to your environment. The script skips VMs with active snapshots and VMs that have one or more ZeroedThick or EagerZeroedThick disks attached. Because the script uses all local disks, it will also zero out disks that were attached using in-guest iSCSI. So please test the script in your lab before you run it in production.
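
If you want to check up front which VMs would be skipped, a PowerCLI query like the following should do. It is only a sketch; the cluster name is a placeholder.

  # Find VMs that would be skipped: active snapshots or non-thin disks
  Get-Cluster -Name 'Lab-Cluster' | Get-VM | Where-Object {
      ($_ | Get-Snapshot) -or
      ($_ | Get-HardDisk | Where-Object { $_.StorageFormat -ne 'Thin' })
  } | Select-Object Name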

This is an example of the script’s output:

Patrick Terlisten/ vcloudnine.de/ Creative Commons CC0

In this picture you can see that the script processes one disk after another:

Patrick Terlisten/ vcloudnine.de/ Creative Commons CC0

This script is provided “AS IS” with no warranty expressed or implied. Run at your own risk. Please test the script in your lab.