vSphere Lab Storage: Synology DS414slim Part 4 – VAAI-NAS Plugin

Chris Wahl wrote a good blog post about the VAAI-NAS plugin a few days ago, and I really recommend reading it. Because of his article, I will only describe the installation of the plugin. You can download the plugin from the Synology website for free.

There are two ways to install the plugin: With the vSphere Update Manager (VUM) and a host extension baseline, or with ESXCLI.

Plugin installation using the vSphere Update Manager

First of all, we need to import the plugin (host extension) to the patch repository. Open the vSphere C# client, switch to the “Home” screen and click “Update Manager” under “Solutions and Applications”. Switch to the “Patch Repository” tab and click “Import Patches”.

vaai-nas_plugin_installation_vum_01

Import the SYN-ESX-5.5.0-NasVAAIPlugin-1.0-offline_bundle-2092790.zip file. The next step is to create a new baseline, in this case a “Host Extension” baseline.

vaai-nas_plugin_installation_vum_02

Scroll down and add the plugin to the baseline (click the down arrow button). Click “Next”.

vaai-nas_plugin_installation_vum_03

Check the settings and finish the creation of the baseline.

vaai-nas_plugin_installation_vum_04

Now attach the baseline to your hosts or cluster.

vaai-nas_plugin_installation_vum_05

As you can see, VUM detected that my hosts are non-compliant because the host extension is missing.

vaai-nas_plugin_installation_vum_06

During remediation, the plugin is installed and a host reboot is triggered. After the reboot and a rescan, all hosts should be compliant.

vaai-nas_plugin_installation_vum_07

In addition to the now compliant host status, the NFS datastores should now support hardware acceleration. You can check this with the vSphere C# client or the vSphere Web Client.
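
If you prefer the command line: as far as I remember, the output of "esxcli storage nfs list" also includes a hardware acceleration column for each NFS datastore.

esxcli storage nfs list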

vaai-nas_plugin_installation_vum_08

Another way to install the plugin is to use ESXCLI.

Install via ESXCLI

Upload the esx-nfsplugin.vib to a local or shared datastore. I placed the file in one of my NFS datastores. Then use ESXCLI to install the VIB.
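
Something like this should do the trick (the datastore name below is just an example from my lab):

esxcli software vib install -v /vmfs/volumes/vol1_nfs/esx-nfsplugin.vib
esxcli software vib list | grep -i nfs

The second command simply checks whether the esx-nfsplugin VIB shows up after the installation.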

To enable the plugin, a host reboot is necessary. This way is suitable for standalone hosts. I recommend using VUM whenever possible.

Final words

I strongly recommend installing the plugin. With the vSphere Update Manager, the installation is really easy. If you have a single host, try the installation with ESXCLI.

vSphere Lab Storage: Synology DS414slim Part 3 – Storage

This blog post covers the setup of the volumes and shares. Depending on your disk configuration, various volume configurations are possible. The DS414slim supports all important RAID levels (Synology Hybrid RAID, Basic, JBOD, RAID 0, 1, 5, 6 and 10). I recommend RAID 5 if you use more than two disks. I decided to create a RAID 5 with my three Crucial M550 SSDs and use the Seagate Momentus XT as a single disk.

Volume1: RAID 5

nas_volume_setup_01

Volume2: Single disk

nas_volume_setup_02

Create an NFS share

This disk setup gave me about 880 GB of SSD and 450 GB of SATA storage. To use this storage, we need to create at least one NFS share. Volume1 contains only a single NFS share. Volume2 contains an NFS share and an additional CIFS share that I use for my Veeam backups. Since I use Volume2 only for VM templates, I put both shares, CIFS and NFS, on a single volume and a single disk.

To create a new NFS share, open the Control Panel > Shared Folders and click “Create”. Enter a name, a description and select a volume. Then click “OK”.

nas_setup_share_01

Grant the local admin account “Read/ Write” permissions on the new share and click “NFS Permissions”.

nas_setup_share_02

Enter the subnet or the IP address of your ESXi host to grant the host(s) access to the NFS share. Select “Map root to admin” and ensure that asynchronous transfer mode is enabled. Click “OK”.

nas_setup_share_03

That’s it. Now you can mount the NFS share on your ESXi hosts using ESXCLI, the vSphere C# client or the vSphere Web Client. The latter provides the very handy NFS multimount feature, which allows you to mount an NFS share on multiple hosts at the same time. With ESXCLI, you can mount a datastore with this command (the host IP, export path and datastore name are examples from my lab):
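
esxcli storage nfs add -H 192.168.200.20 -s /volume1/vol1_nfs -v vol1_nfs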

To mount an NFS datastore with the vSphere Web Client, simply right-click a cluster and select “New Datastore”. Provide the needed information; in step 4 you can select one or more hosts to which the NFS share should be mounted. Very handy!

Final words

Depending on your disk configuration, you have multiple options to configure volumes. I decided to go for a RAID 5. I strongly recommend using SSDs, because rotating rust would be too slow. I also recommend using NFS instead of iSCSI in a lab environment: it’s easier to set up and faster.

Part 4 of this series covers the installation of the Synology VAAI-NFS plugin: vSphere Lab Storage: Synology DS414slim Part 4 – VAAI-NAS Plugin

vSphere Lab Storage: Synology DS414slim Part 2 – Networking

The next step is to connect the Synology DS414slim to my lab network. I use two HP 1910 switches in my lab, an 8-port and a 24-port model. The Synology DS414slim has two 1 GbE ports, which can be configured in different ways. I wanted to use both ports actively, so I decided to create a bond.

Create a bond

Browse to the admin website and go to Control Panel > Network > Network Interfaces and select “Create”. Then select “Create Bond”.

nas_networking_settings_01

To utilize both NICs, select the first option: “IEEE 802.3ad Dynamic Link Aggregation”. This option requires switches that are capable of creating a LACP LAG! I will show the configuration of a LACP LAG on one of my HP 1910 switches later.

nas_networking_settings_02

Click “IPv4”. I have a dedicated VLAN and subnet for NFS. This subnet is routed in my lab, so that I can reach the DS414slim for management. Make sure that you enable Jumbo Frames and that every component in the network path can handle Jumbo Frames! Switch to the “IPv6” tab.

nas_networking_settings_03

I don’t want to use IPv6, so I decided to disable it.

nas_networking_settings_04

Click “OK” and wait until the configuration is finished.

Create a LAG

Now it’s time to create the LAG on the switch. As I already mentioned, I use two HP 1910 switches in my lab. Both are great home lab switches! They are cheap and they can do L3 routing. Browse to the web management, log in and select Network > Link Aggregation and click “Create”.

1910-24g_create_lag_01

Enter an interface ID for the LAG. In my case there were no LAGs before, so the ID is 1. Select “Dynamic (LACP Enabled)” and select two ports in the figure of the switch. Check the settings in the “Summary” section and click “Apply”.

1910-24g_create_lag_02

Now we need to place the LAG in the correct VLAN. Select Network > VLAN and select “Modify Ports”. Select “BAGG1” from “Aggregation ports” and place the LAG as an untagged member in the NFS VLAN (in my case this is VLAN 100). Finish this task by clicking “Apply”.

1910-24g_create_lag_03

You can check the success of this operation by switching to the “Details” page and then selecting the NFS VLAN.

1910-24g_create_lag_04

Connect the DS414slim with the two patch cables to the ports that are now configured as a LAG. If everything is configured correctly, the DS414slim should be reachable with its new IP in the NFS VLAN.

VMkernel configuration

Make sure that you have at least one VMkernel port configured that is in the same subnet and VLAN as your DS414slim. You can see that the VMkernel port is placed in VLAN 100 and that it has an IP from my NFS subnet.

nas_esxi_vmk_setup_01

You should also make sure that the VMkernel port and the vSwitch can handle Jumbo Frames. The HP 1910 switch series has Jumbo Frames enabled by default.
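
If you want to check or adjust this with ESXCLI, something like this should work (the vSwitch and VMkernel interface names are examples from my setup):

esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000
esxcli network ip interface list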

Final words

The network setup depends on your needs. I strongly recommend using a dedicated VLAN and IP subnet for NFS. I also recommend the use of Jumbo Frames. Make sure that all components in the network path can handle Jumbo Frames and that the VLAN membership is correctly set. If possible, use a bond on the Synology and a LAG on the switch.

Part 3 of this series covers the creation of NFS shares: vSphere Lab Storage: Synology DS414slim Part 3 – Storage

vSphere Lab Storage: Synology DS414slim Part 1 – Unboxing and initial setup

A VMware vSphere cluster is nothing without shared storage. Most of the functions, like VMware HA or VMware vMotion (okay, vMotion is possible without shared storage), can only be used with shared storage. The servers in my lab have Fibre Channel host bus adapters (HBAs), but buying an old and cheap Fibre Channel storage system wasn’t an option in my case. This left two options when choosing the right storage protocol: iSCSI or NFS. I tried to virtualize the local storage in my ProLiants with the HP StoreVirtual VSA and DataCore SANsymphony-V, but both were too complex for my needs and for a lab environment. Because of this I decided to move the local storage into a small storage system and use iSCSI or NFS. I searched for a while for a suitable system until Chris Wahl started blogging about the Synology DS414slim.

Like Chris, I’m a fan of NFS. His blog posts convinced me that the DS414slim would be a good choice. In addition, the DS414slim is relatively cheap (~250 € incl. taxes in Germany) and Chris showed that the system can achieve good performance when used with SSDs. Fortunately, I already had three Crucial M550 SSDs (each with a capacity of 480 GB) and a single Seagate Momentus XT with a capacity of 500 GB, so I bought the DS414slim without disks.

I got the DS414slim for ~250 € at the end of 2014. The price varies between 230 € and 260 € in Germany for the model without disks.

synology_unboxing_02

The box contains the DS414slim itself, a stand, two patch cables, screws for the disk trays and a power supply. So it contains everything you need to bring the DS414slim to life.

synology_unboxing_01

The system is really small, as you can see in this picture (take the 2.5″ disks as a reference). It goes without saying that you can only use 2.5″ hard disks.

synology_unboxing_03

The disks were quickly mounted into the disk trays; the needed screws are included. The initial setup is really easy. Simply power it on, open a browser and go to http://find.synology.com. My DS414slim was running DSM 4.1, but you can update the DSM during the installation process. Simply download DSM 5.1 from the Synology Download Center and provide the update file to the installer. The rest of the setup process is not very spectacular. I will not explain the installation process here in more detail; it’s too simple. :)

The next part of this series covers the network connectivity: vSphere Lab Storage: Synology DS414slim Part 2 – Networking.

VM deployment fails with “Authenticity of the host’s SSL certificate is not verified”

When you want to go fast, go slow. Otherwise you will get into trouble… Today I tried to quickly deploy a VM from a template and customize this VM with a customization specification. The codeword is “quickly”. The fun started with this error message:

vcsa_deploy_vm_error_01

Fortunately, I asked the VMware Knowledge Base, which led me to VMware KB2086930 (Deploying a template with customization fails with the error: Authenticity of the host’s SSL certificate is not verified). This KB article is all you need to know to fix this error.

1. Make a snapshot of your vCenter Server appliance.

2. Stop the vCenter Server service using the appliance management website (port 5480).

vcsa_ssl_thumbprint_01

3. Connect to your vCenter Server appliance with SSH and run these commands. I tried to deploy the VM to esx2.lab.local. As you can see, there is no expected SSL thumbprint for this host (and it’s also missing for host esx1.lab.local). The solution is to set expected_ssl_thumbprint to the value of host_ssl_thumbprint.
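
Roughly, the queries look like this on the embedded vPostgres database (the psql path, database name and user are taken from my appliance and may differ on yours; the KB article has the exact steps):

/opt/vmware/vpostgres/current/bin/psql -d VCDB -U vc
SELECT dns_name, host_ssl_thumbprint, expected_ssl_thumbprint FROM vpx_host;
UPDATE vpx_host SET expected_ssl_thumbprint = host_ssl_thumbprint WHERE expected_ssl_thumbprint IS NULL;

4. Start the vCenter Server service again using the appliance management website.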

This solved the issue for me. According to VMware KB2086930, only VMware vCenter Server Appliance 5.5.x is affected. If you are running VMware vCenter Server on Windows, you are not affected. If you get this error (or a similar error), it might be another problem.

Power on HP ProLiant servers with iLO, SSH & Plink

Some weeks ago, Frank Denneman wrote a short blog post about accessing his Supermicro IPMI with SSH. He used this access to power on his lab servers. I don’t use Supermicro boards in my lab, but I have four HP ProLiants with iLO, and iLO also has an SSH interface. This way of powering on my servers seemed very practical, especially because the iLO web interface isn’t the fastest. But I wanted it a bit more automated, so I decided to use Plink to send commands via SSH.

Create a new user account

I created a new user account in the iLO user database. This user only has the rights to change the power state of the server. Log in to the iLO web interface. Click on “Administration”, then “User Administration” and “New”.

ilo_create_sshlogin_1

Fill in the required fields. You have to enter a password, even if you later log in with SSH public key authentication. Only allow “Virtual Power and Reset”. All other rights should be disallowed. Click “Save User Information”.

ilo_create_sshlogin_2

Create SSH key pair

I used the PuTTY Key Generator to create the necessary SSH key pair. Click “Generate” and move the mouse over the blank area.

ilo_create_sshlogin_3

Enter the username of the newly created user in the “Key comment” field. Copy the public key into a text file; you need this file for the key import into iLO. Then save the public and private keys.

ilo_create_sshlogin_4

Key import

To import the key, log in to the iLO web interface again. Click “Administration”, then “Security” in the “Settings” area on the left. Click “Browse…” and select the text file with the SSH public key. The key that is shown in the “Key” area of the PuTTY Key Generator differs from the saved public key. Both are public keys, but they have a different format. You have to import the key that is shown in the “Key” area.

ilo_create_sshlogin_5

If you have imported the right key, the key is automatically assigned to the new user.

ilo_create_sshlogin_6

The test

Open a CMD and change to the directory with the Plink executable and the SSH private key. The following command turns the server on.
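
The user name, iLO host name and key file in the following commands are just examples from my lab; adjust them to your environment. Depending on the iLO version, the exact CLI syntax may also differ slightly.

plink -i power_user.ppk power@ilo-esx1.lab.local power on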

To turn off, simply use this command:
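
plink -i power_user.ppk power@ilo-esx1.lab.local power off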

A warm reset can be requested by using this command:
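
plink -i power_user.ppk power@ilo-esx1.lab.local power warm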

A cold reset can be requested by using this command:
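
plink -i power_user.ppk power@ilo-esx1.lab.local power reset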

You can put these commands into a batch file to power on or off a couple of servers with a single click.
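
A minimal batch file for my four hosts could look like this (again, user, host names and key file are just examples):

@echo off
rem Power on all lab servers via their iLO interfaces
plink -i power_user.ppk power@ilo-esx1.lab.local power on
plink -i power_user.ppk power@ilo-esx2.lab.local power on
plink -i power_user.ppk power@ilo-esx3.lab.local power on
plink -i power_user.ppk power@ilo-esx4.lab.local power on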

Homelabs: It’s evolution, baby

A discourse is going on in the community. I can’t say who started it, but the number of blog posts on the subject is an indication of the interest in this topic. But what’s the topic?

Homelabs

A homelab is the poor man’s data center. Some people are lucky enough to be able to use a fully populated data center for test and study purposes. Our job requires us to work with the latest technology and products, so we need an environment for testing and studying. Back in the day it was sufficient to have some VMs on your computer or laptop. But as virtualization moved into the data center, it became necessary to have it in the lab. At this point homelabs began to explode.

Why did homelabs begin to explode?

Let’s assume that you want to play with VMware vSphere. Playing with a single host is lame, so you need at least two hosts to build a cluster. If you want to use the cool features like HA, vMotion, DRS etc., you need shared storage. Virtualization without VMs is also lame, so you need some CPU power and memory. Wow. At least two hosts and shared storage. That escalated quickly… Okay, let’s look at Microsoft Hyper-V. Hmm… at least two hosts and shared storage if you seriously want to work with it. Now you have two options:

  • Physical Equipment
    • real server HW
    • Whitebox HW
  • Nested Environment

Physical HW has some benefits, because nothing is shared or virtualized. If it’s server HW, the chance is high that it’s on the HCL and you will not face issues due to unsupported HW. But there are disadvantages: think about space, power consumption, heat or the WAF (Wife Acceptance Factor; higher is better). Real server HW will violate requirements regarding space, power consumption, heat and WAF. You can go for whitebox HW, which means that you build your own server out of different components that are not necessarily supported. But it’s cheap (if I look at Frank’s and Erik’s homelabs, this is not necessarily true…), and you can focus on power consumption, noise and WAF. But what if you get in trouble because the HW is unsupported? What if the HW currently works, but not with the next release of VMware ESXi or Microsoft Hyper-V? You can skip dedicated HW and go for nested environments. In this case you virtualize virtualization environments. Sounds spooky? Yes, sometimes it is. And it has some disadvantages, especially regarding performance or things that simply don’t work (VMware FT with 64-bit guests). But it’s easy, and that is a big advantage. All you need is VMware Workstation, Fusion or ESXi and AutoLab. An awesome source for nested environments is virtuallyGhetto, William Lam’s blog.

The “scientific” discourse

Some really nice blog posts came up in the last few days. Take a look at the comments sections!

Frank Denneman – vSphere 5.5 Home lab
Erik Bussink – The homelab shift…
Erik Bussink – Homelab 2014 upgrade
Vladan SEGET – vSphere Homelabs in 2014 – scale up or scale out?

It’s evolution, baby…

… and sometimes there are different developments extending each other at the same time. Time will tell which architecture wins the race. I chose server equipment, because due to some circumstances I ended up with four HP ProLiants. But I will not run them at home. ;)

Enable VMware Fault Tolerance in nested enviroments

While playing around in my lab, I wanted to enable VMware Fault Tolerance (FT) for a VM. In the absence of physical HW I use a nested environment, which is running on a HP ProLiant DL160 G6 (2x Intel Xeon E5520, 32 GB RAM, a RAID 0 with 4 SATA drives). FT isn’t available in nested environments, because HW virtualization features are required. This screenshot was taken from the web client.

vmware_ft_nested_host

But “isn’t available” doesn’t mean that you can’t enable it. ;) As always, this isn’t supported by VMware. It’s for lab environments, training etc., but not for production. You have to set three configuration parameters for the VM that you want to use with FT. If you use the web client, you can set the configuration parameters as follows:

Edit Settings… > VM Options > Advanced > Edit Configuration…

vmware_ft_guest_parameter

For easy copy ‘n paste, here are the parameters (reproduced from memory, so double-check the first two; replay.allowBTOnly is the one discussed below):
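
replay.supported = "true"
replay.allowFT = "true"
replay.allowBTOnly = "true"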

After setting the parameters you can enable FT and start the VM.

vmware_ft_running

Please note that you can only run 32-bit guests! This is due to the binary translation (replay.allowBTOnly). An FT-protected 64-bit Windows 2008 R2 won’t be possible. But to show the configuration and use of FT, a 32-bit Windows 2003 should be sufficient. :) I have configured FT in my lab and I took the screenshots from the vSphere 5.5 Web Client. I use VM hardware version 9 for the nested ESXi hosts, and VHV is enabled in the CPU section. If you are looking for more: virtuallyGhetto is an awesome source, especially for nested virtualization and everything around automation using the various APIs, SDKs and CLIs. Kudos to William Lam for the work he puts in there.