Tag Archives: automation

Hey infrastructure guy, you should learn Python!

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I’m not a developer. I’m an infrastructure guy. All I ever needed was to write some scripts. Therefore, I never needed more than DOS batches, BASH/ CSH/ KSH, Visual Basic Script and nowadays PowerShell. So why should I learn another programming language?

One to rule them all?

I don’t think that there is a single programming language that is perfect for all use cases. The spread and acceptance of a language shows a positive correlation with the number of available frameworks, tools and libraries. That’s why I love Microsoft PowerShell. Nearly all vendors offer a PowerShell module for their products (think of VMware PowerCLI, Rubrik, Veeam, DataCore and many more). The downside: the PowerShell code has to run on a Windows box. I think the time of writing DOS batches is over. UNIX shell scripts are still awesome, but focused on UNIX.

Different problems require different tools. I think it’s better to know a few general-purpose tools well than to know every conceivable special tool. Don’t get me wrong: PowerShell is awesomely powerful! It’s quite easy to learn and you will have quick success.

Why Python?

Python is easy to learn (I can confirm this, at least for what I’ve seen so far). Python was developed from scratch by Guido van Rossum in the early 1990s. Python is an interpreted and dynamic programming language, which supports multiple paradigms, such as object-oriented and functional programming. Python features a dynamic type system and automatic memory management. It uses only 35 keywords, which makes it easy to learn. Its underlying philosophy is The Zen of Python.

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.

These rules lead to highly legible code, and it is possible to solve problems with fewer lines of code. Python is highly extensible. It comes with a large standard library, and you can choose from more than 72,000 packages that are available in the official third-party repository.

For me, as an infrastructure guy, the VMware vSphere API Python Bindings, the 3PAR Python client or the module for the Alcatel-Lucent Enterprise OmniSwitch RESTful API are reasons enough to start with Python. It’s the extensibility and platform independence of Python that make it so interesting. Like PowerShell, Python is an awesome language to automate things.

First steps with Python

Currently, the stable releases are 2.7 and 3.5. I recommend starting with the 3.5 release. You can get the latest release from python.org. They offer packages for Windows, Mac OS X and Linux/ UNIX. Python comes with an IDE called IDLE (Integrated Development and Learning Environment). Make sure that you take a look into the official documentation! If you want something more comfortable, try JetBrains PyCharm. JetBrains offers a free community edition for Windows, Mac OS X and Linux. But it’s not the worst idea to start with IDLE. I use both IDEs, IDLE and PyCharm.

Where can you get help? YouTube is full of videos about Python. If you have a Pluralsight subscription, check out the courses on Pluralsight. There are many good books out there, as well as some good howtos. Just use Google. It depends on what type of learner you are.

Learn the basics and try to strengthen them with a small project. Buy a Raspberry Pi. Raspberry Pi and Python are the best of friends. If you are focused on VMware vSphere, take a closer look at the VMware vSphere API Python Bindings. Give yourself a project to learn with.

I just started to learn Python, but I think that this wasn’t the worst idea in my life.

PowerCLI: Get-LunPathState

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Careful preparation is a key element of success. If you restart a storage controller, or even the whole storage system, you should be very sure that all ESXi hosts have enough paths to every datastore. Sure, you can use the VMware vSphere C# client or the Web Client to check every host and every datastore. But if you have a large cluster with a dozen datastores and some Raw Device Mappings (RDMs), this can take a looooong time. Checking the path state of each LUN is a task that can be perfectly automated: get a list of all hosts, loop through every host and every LUN, and output a list of all hosts with all LUNs and all paths for each LUN. Sounds easy, right?

For a long time, I used this PowerCLI script for checking the LUN path state. But now I decided to give something back and I tweaked it a bit for my needs.

Feel free to use and/ or modify it.
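
The tweaked script itself is not reproduced in this archive view. A minimal sketch of the approach described above might look like this (this is not the original script; the cluster name "LAB" is a placeholder and an existing Connect-VIServer session is assumed):

# Minimal sketch: report the path state of every LUN on every connected host in a cluster
$esxHosts = Get-Cluster -Name "LAB" | Get-VMHost | Where-Object { $_.ConnectionState -eq "Connected" } | Sort-Object Name

foreach ($esx in $esxHosts) {
    foreach ($lun in ($esx | Get-ScsiLun -LunType disk)) {
        $paths = $lun | Get-ScsiLunPath
        # One object per LUN: host, canonical name, total paths and path states
        [PSCustomObject]@{
            Host          = $esx.Name
            CanonicalName = $lun.CanonicalName
            Paths         = $paths.Count
            Active        = ($paths | Where-Object { $_.State -eq "Active" }).Count
            Dead          = ($paths | Where-Object { $_.State -eq "Dead" }).Count
        }
    }
}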

Certificate-based authentication of Azure Automation accounts

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Before you can manage Azure services with Azure Automation, you need to authenticate the Automation account against a subscription. This authentication process is part of each runbook. There are two different ways to authenticate against an Azure subscription:

  • Active Directory user
  • Certificate

If you want to use an Active Directory account, you have to create a credential asset in the Automation account and provide username and password for that Active Directory account. Inside a runbook, you can retrieve the credentials using the Get-AutomationPSCredential activity. It returns a System.Management.Automation.PSCredential object, which can be used with Add-AzureAccount to connect to a subscription. If you want to use a certificate, you need four assets in the Automation account: a certificate and variables with the certificate name, the subscription ID and the subscription name. The values of these assets can be retrieved with Get-AutomationVariable and Get-AutomationCertificate.
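
For completeness, a minimal sketch of the credential-based variant might look like this (the asset name "AzureADCredential" and the subscription name are placeholders):

# Sketch: authenticate against a subscription with an Active Directory credential asset
$Cred = Get-AutomationPSCredential -Name "AzureADCredential"
Add-AzureAccount -Credential $Cred
Select-AzureSubscription -SubscriptionName "MySubscription"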

Prerequisites

Before you start, you need a certificate. This certificate can be a self-signed or a CA-signed certificate. Check this blog post from Alice Waddicor if you want to start with a self-signed certificate. I used a certificate that was signed by my lab CA.

At a Glance:

  • self- or CA-signed certificate
  • Base64 encoded DER format (file name extension .cer) to upload it as a management certificate
  • PKCS #12 format with private key (file name extension .pfx or .cer) to use it as an asset inside the Automation account

Upload the management certificate

First, you must upload the certificate to the management certificates of your subscription. Log in to Azure and click “Settings”.

creat_automation_account_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click on “Management Certificates”

creat_automation_account_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

and select “Upload” at the bottom of the website.

creat_automation_account_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Make sure that the certificate has the correct format and file name extension (.cer).

creat_automation_account_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Finish the upload dialog. After a few seconds, the certificate should appear in the listing.

creat_automation_account_06

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Create a new Automation account

Now it’s time to create the Automation account. Select “Automation” from the left panel.

creat_automation_account_07

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click on “Create an Automation account”.

creat_automation_account_08

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Give your Automation account a descriptive name and select a region. Please note that an Automation account can manage Azure services from all regions!

creat_automation_account_09

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click on the newly created account and click on “Assets”.

creat_automation_account_10

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Select “Add setting” from the bottom of the website.

creat_automation_account_11

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Add a credential asset by choosing “Add credential” and select “Certificate” as “Credential type”.

creat_automation_account_12

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Enter a descriptive name for the certificate. You should remember this name. You will need it later. Now you have to upload the certificate. The certificate must have the file name extension .pfx or .cer and it must include the private key!

creat_automation_account_13

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Finish the upload of the certificate. Now add three additional assets (variables).

creat_automation_account_14

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Select the name, the value and the type from the table below. The name of the certificate is the descriptive name you’ve previously entered when uploading the certificate.

Name                      | Value                           | Type
AutomationCertificateName | Name of your certificate        | String
AzureSubscriptionName     | Name of your subscription       | String
AzureSubscriptionID       | 36 digit ID of the subscription | String

Done. You’ve uploaded and created all the required certificates and variables.

How to use it

To use the certificate and the variables to connect to an Azure subscription, you have to use the two cmdlets Get-AutomationCertificate and Get-AutomationVariable. I use this code block in my runbooks:

# Retrieve the variable and certificate assets from the Automation account
$AzureSubscriptionName = Get-AutomationVariable -Name "AzureSubscriptionName"
$AzureSubscriptionID = Get-AutomationVariable -Name "AzureSubscriptionID"
$AutomationCertificateName = Get-AutomationVariable -Name "AutomationCertificateName"
$CertificateName = Get-AutomationCertificate -Name $AutomationCertificateName

# Authenticate against the subscription with the certificate and select the subscription
Set-AzureSubscription -SubscriptionName $AzureSubscriptionName -SubscriptionId $AzureSubscriptionID -Certificate $CertificateName
Select-AzureSubscription $AzureSubscriptionName

Works like a charm.

Summary

Certificate-based authentication is an easy way to authenticate an Automation account against an Azure subscription. It’s easy to implement and you don’t have to maintain users and passwords. You can use different certificates for different Automation accounts. I really recommend this, especially if you have separate accounts for dev, test and production.

All you need is to upload a certificate as a management certificate, and as a credential asset in the Automation account. You can use a self-signed or CA-signed certificate. The subscription ID, the subscription name and the name of the certificate are stored in variables.

At the beginning of each runbook, you have to insert a code block. This code block takes care of authentication.

A brief introduction into Azure Automation

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Automation is essential to reduce friction and to streamline operational processes. It’s indispensable when it comes to the automation of manual, error-prone and frequently repeated tasks in a cloud or enterprise environment. Automation is the key to IT industrialization. Azure Automation is used to automate operational processes within Microsoft Azure.

Automation account

The very first thing you have to create is an Automation account. You can have multiple Automation accounts per subscription. An Automation account allows you to separate automation resources from other Automation accounts. Automation resources are runbooks and assets (credentials, certificates, connection strings, variables, schedules etc.). So each Automation account has its own set of runbooks and assets. This is perfect to separate production from development. An Automation account is associated with an Azure region, but the Automation account can manage Azure services in all regions.
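
If you prefer PowerShell over the portal, an Automation account can also be created with the classic Azure PowerShell module. A minimal sketch (account name and region are placeholders, and an already authenticated session with a selected subscription is assumed):

# Sketch: create an Automation account with the classic Azure PowerShell module
New-AzureAutomationAccount -Name "lab-automation" -Location "West Europe"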

Runbooks

A runbook contains PowerShell script or PowerShell Workflow code. You can automate nearly everything with it. If something provides an API, you can use a runbook and PowerShell to automate it. A runbook can call other runbooks, so you can build really complex automation processes. A runbook can access any service that can be accessed by Microsoft Azure, regardless of whether it’s an internal or external service.

There are three types of runbooks:

  • Graphical runbooks
  • PowerShell Workflow runbooks
  • PowerShell runbooks

Graphical runbooks can be created and maintained with a graphical editor within the Azure portal. Graphical runbooks use PowerShell Workflow code, but you can’t directly view or modify this code. Graphical runbooks are great for customers that don’t have much automation and/ or PowerShell knowledge. Once you have created a graphical runbook in an Automation account, you can export it and import it into another Automation account, but you can only modify the runbook with the account that was used to create it.

PowerShell Workflow runbooks don’t have a graphical representation of the workflow. You can use a text editor to create and modify PowerShell Workflow runbooks. But you need to know how to deal with the logic of PowerShell Workflow code.

PowerShell runbooks are plain PowerShell code. Unlike a PowerShell Workflow, a PowerShell runbook is faster, because it doesn’t have to be compiled before the run. But you have to be familiar with PowerShell. There is no parallel processing and you can’t use checkpoints (if a runbook fails without checkpoints, it has to start from the beginning; with a checkpoint, the workflow can be resumed at the last successful checkpoint).
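
To illustrate the difference, a very small PowerShell Workflow runbook with parallel processing and a checkpoint might look like this (a sketch; the workflow name and the VM names are just examples, and it assumes each VM lives in a cloud service of the same name):

workflow Restart-LabVMs
{
    # Workflow runbooks can process items in parallel
    $vms = @("vm01", "vm02", "vm03")

    foreach -parallel ($vm in $vms)
    {
        Restart-AzureVM -ServiceName $vm -Name $vm
    }

    # Persist the workflow state; if the runbook is suspended,
    # it can resume from this checkpoint instead of starting over
    Checkpoint-Workflow
}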

Schedule

Schedules are used to run runbooks at a specific point in time. Runbooks and schedules have an M:N relationship. A schedule can be associated with one or more runbooks, and a runbook can be linked to one or more schedules.
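
Schedules can be created and linked in the portal or, for example, with the classic Azure PowerShell module. A sketch (account, runbook and schedule names are placeholders; parameter names may differ slightly between module versions, so treat this as an assumption rather than a reference):

# Sketch: create a daily schedule and link it to a runbook
New-AzureAutomationSchedule -AutomationAccountName "lab-automation" -Name "DailySchedule" -StartTime (Get-Date).AddDays(1) -DayInterval 1
Register-AzureAutomationScheduledRunbook -AutomationAccountName "lab-automation" -RunbookName "Start-LabVMs" -ScheduleName "DailySchedule"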

Summary

This is only a brief introduction into Azure Automation. Azure Automation uses Automation accounts to execute runbooks. A runbook consists of PowerShell Workflow or plain PowerShell code. You can use runbooks to automate nearly all operations of Azure services. To execute runbooks at a specific point in time, you can use schedules. Runbooks, schedules and automation assets, like credentials, certificates etc., are associated with a specific Automation account. This helps you to separate environments, e.g. accounts for development and for production.

Starting and stopping Azure VMs with Azure PowerShell

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

To be honest: I’m lazy and I have a wife and two kids. Therefore, I have to minimize the costs of my lab. I have a physical lab at the office and some VMs running on Microsoft Azure. Azure is nice, because I only have to pay for what I really use. And because I only pay for actual use, I start the VMs only when I need them. Inspired by this very handy Azure VM wakeup & shutdown script, I decided to write my own script (yes, I reinvented the wheel again…). Very simple, nothing fancy. Feel free to use and modify the script according to your needs.
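
The script itself is not reproduced in this archive view. The core idea, using the classic Azure service management cmdlets, might look like this (a sketch; the cloud service name is a placeholder and an authenticated, selected subscription is assumed):

# Minimal sketch: start or stop all VMs of a cloud service (classic/ASM cmdlets)
param(
    [ValidateSet("Start","Stop")]
    [string]$Action = "Start",
    [string]$ServiceName = "lab-service"   # placeholder cloud service name
)

foreach ($vm in (Get-AzureVM -ServiceName $ServiceName)) {
    if ($Action -eq "Start") {
        Start-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name
    }
    else {
        # -Force stops the last running VM of the deployment without prompting
        Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
    }
}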

Automating ESXi configuration for DataCore SANsymphony-V

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

In their Host Configuration Guide for VMware ESXi, DataCore describes some settings that must be adjusted before storage from DataCore SANsymphony-V storage servers is assigned to the ESXi hosts. Today, for ESXi 5.x and 6.0, you have to add a custom SATP rule and adjust the advanced setting DiskMaxIOSize. For ESX(i) 4, more parameters had to be adjusted, but I will focus on ESXi 5.x and 6.0. You need to adjust these settings on each host that should get storage mapped from a DataCore storage server. If you have more than one host, you may wish to automate the necessary steps. To check the current value of DiskMaxIOSize, you can use these lines of PowerCLI code:

$esxCluster = 'LAB'
$esxHosts = Get-Cluster $esxCluster | Get-VMHost | Where { $_.PowerState -eq "PoweredOn" -and $_.ConnectionState -eq "Connected" } | Sort Name

foreach ($esx in $esxHosts) {
    # Report the current value of Disk.DiskMaxIOSize for each host
    Get-AdvancedSetting -Entity $esx -Name Disk.DiskMaxIOSize | Select Entity,Name,Value
}

To set DiskMaxIOSize to the recommended value of 512 and to create the custom SATP rule, use these lines:

$esxCluster = 'LAB'
$esxHosts = Get-Cluster $esxCluster | Get-VMHost | Where { $_.PowerState -eq "PoweredOn" -and $_.ConnectionState -eq "Connected" } | Sort Name

foreach ($esx in $esxHosts) {

    # Add the DataCore custom SATP rule: claim devices with vendor "DataCore" and
    # model "Virtual Disk" with VMW_SATP_ALUA and Round Robin path selection
    $esxcli = Get-EsxCli -VMHost $esx
    $esxcli.storage.nmp.satp.rule.add($null,"tpgs_on","DataCore SANsymphony-V Custom ALUA Rule",$null,$null,$null,"Virtual Disk",$null,"VMW_PSP_RR",$null,"VMW_SATP_ALUA",$null,$null,"DataCore")

    # Set Disk.DiskMaxIOSize (specified in KB) to the recommended value of 512
    Get-AdvancedSetting -Entity $esx -Name Disk.DiskMaxIOSize | Set-AdvancedSetting -Value 512 -Confirm:$false | Out-Null

    Write-Host "Finished $esx"

}

Please make sure that you adjust the $esxCluster variable.  Nothing fancy, but it can save some time – even if you only have two hosts. ;)

How to shrink thin-provisioned disks

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Disk space is scarce. I only have about 1 TB of SSD storage in my lab and I don’t like to waste too much of it. My hosts use NFS to connect to my Synology NAS, and even though I use the VAAI-NAS plugin, I use thin-provisioned disks only. Thin-provisioned disks tend to grow over time. If you copy a 1 GB file into a VM and delete this file immediately, you will find that the VMDK has grown by 1 GB. This is caused by the guest filesystem: it marks the blocks of deleted files as free, even though it only deletes metadata and not the data itself. Later, new data is written into those blocks, since they are marked as free. VMware ESXi doesn’t know that the guest has marked blocks as free, so ESXi can’t shrink the thin-provisioned VMDK.

You can observe a similar behavior in the case of VMFS and underlying thin-provisioned LUNs: if a VMDK is removed from a VMFS datastore, the underlying thin-provisioned LUN doesn’t show more free space. In this case, the VAAI UNMAP primitive can be used to tell the storage system which blocks are free and can be reclaimed. Some storage systems that don’t support VAAI UNMAP use contiguous regions filled with zeros to identify reclaimable storage space. Before free space can be reclaimed, the VMFS has to be filled with zeros. A similar technique can be used to shrink thin-provisioned guest hard disks. Please note that I don’t want to focus on reclaiming space from underlying LUNs. I’m only talking about shrinking thin-provisioned disks!

To shrink a thin-provisioned VMDK, the guest filesystem has to be zeroed out. If you use Windows, you can use SDelete. In case of a Unix-like OS (Linux, FreeBSD, Solaris…), use dd. After you have zeroed out the guest file system, you have to move the VM with Storage vMotion to another datastore. Now it’s getting complicated: you have to make sure that the legacy datamover (fsdm) is used for the Storage vMotion. There are three different datamovers:

  • fsdm
  • fs3dm, and
  • fs3dm – hardware offload

The fsdm is the oldest and slowest datamover. The fs3dm and fs3dm with HW offload are newer. In case of the latter, the process is offloaded to the hardware using VAAI (Full Copy primitive). At this point, I’d like to refer to a blog post by Duncan Epping (Blocksize impact?), who has highlighted the differences between the datamovers in more detail. The point is that fsdm doesn’t copy blocks that are filled with zeros. But how can I make sure that fsdm is used?

  • Move the VM to a datastore with another blocksize

This can be difficult, because VMFS5 datastores have a block size of 1 MB, unless they were upgraded from VMFS3. Simply create a new VMFS3 datastore and use it as the destination.

  • Move the VM from VMFS to NFS, from NFS to VMFS or from NFS to NFS

In this case fsdm will be used. Please note that fsdm will not be used if you move a VM from a VMFS5 to a VMFS5 datastore! In this case the fs3dm is used, and this wouldn’t shrink the thin-provisioned VMDK. On the downside, the fsdm is slow. Really slow. If you have a monster VM, a Storage vMotion can take a looooong time (worth reading: “VMware Storage vMotion, Data Movers, Thin Provisioning, Barriers to Monster VM’s” by Michael Webster).

I wrote a PowerShell script that uses PowerShell remoting and VMware’s PowerCLI cmdlets to do the following tasks:

  • get a list of all local disks using Get-WmiObject
  • zero-out the filesystem on those disks
  • move the VM to a destination datastore
  • move the VM back to its source host and source datastore

For the moment, the script only works with Windows VMs. SDelete must be available in the VM. Make sure that you use the latest release of SDelete (currently 1.61). PowerShell remoting has to be enabled on the VMs. Feel free to use and/ or edit my script. To get this script working, please change the content of the variables for

  • $PathToSDelete
  • $VIServer
  • $CredFile
  • $Username
  • $DstDS
  • $DstDSHost and
  • $ClusterName

according to your environment. The script skips VMs with active snapshots and VMs that have one or more ZeroedThick or EagerZeroedThick disks attached. Because the script uses all local disks, it will also zero-out disks that were attached using in-guest iSCSI. So please test the script in your lab before you try it in production.

This is an example for the output of the script:

VMDK-thin-reclaim_script_output

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

In this picture you can see that the script processes one disk after another:

VMDK-thin-reclaim_performance_zero_out

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This script is provided “AS IS” with no warranty expressed or implied. Run at your own risk. Please test the script in your lab.
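
The full script is not reproduced in this archive view. The core zero-out step described above might look like this (a sketch, not the original script; it assumes PowerShell remoting is enabled in the guest, $VMName and $Cred are already set, and SDelete is available at the path stored in $PathToSDelete):

# Sketch of the zero-out part: run SDelete against every local disk of a Windows VM
$disks = Invoke-Command -ComputerName $VMName -Credential $Cred -ScriptBlock {
    Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType = 3"
}

foreach ($disk in $disks) {
    $drive = $disk.DeviceID   # e.g. "C:"
    Invoke-Command -ComputerName $VMName -Credential $Cred -ScriptBlock {
        # -z zeroes the free space of the given volume (SDelete 1.61)
        & $Using:PathToSDelete -z $Using:drive
    }
}

# Afterwards the VM is moved to another datastore (fsdm) and back again, e.g.:
Get-VM -Name $VMName | Move-VM -Datastore (Get-Datastore -Name $DstDS) | Out-Null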

Replacing SSL certificates for vRealize Orchestrator Appliance

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

It’s a common practice to replace the self-signed certificates that are used in several VMware products with CA-signed certificates. I did this in my lab for my vCenter Server Appliance and my VMware Update Manager. While I was working with vRealize Orchestrator, I noticed that it is also using self-signed certificates (what else?). For completeness, I decided to replace the self-signed certificates with CA-signed ones.

My lab environment

  1. VMware vSphere 5.5 environment running a vCenter Server appliance (already using CA signed certificates)
  2. vRealize Orchestrator Appliance 5.5.2 (not version 5.5.2.1,  because I had problems with this release)
  3. Microsoft Windows CA running on a Windows 2012 R2 Standard server

You don’t need a Microsoft Windows CA. You can use any other CA. There is no need to use a specific vendor. I use a Windows-based CA in my lab, so the screenshots reflect this fact. The way certificates are replaced differs between the vRealize Orchestrator appliance and the Windows-based standalone or vCenter Server embedded version. If you use the Orchestrator that is embedded in vCenter Server, or the standalone Windows version, check Derek Seaman’s VMware vSphere 5.5 SSL Toolkit. I used the Orchestrator appliance.

I will only highlight the necessary steps to replace the certificates. I assume that you have a running Orchestrator appliance.

Create the package signing certificate

This certificate is used to sign packages. This certificate is NOT used with HTTPS.

1. Log into the Orchestrator Configuration website using the username “vmware” and click “Server Certificate” in the left navigation pane. The server package signing certificate appears on the right side.

vco_package_sign_certificate_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

2. Create a new certificate. Otherwise, if you directly export the CSR, the CSR would include the organization, common name, OU etc. from the self-signed certificate. Choose the fourth option “Create a certificate database and self-signed server certificate”.

vco_package_sign_certificate_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

3. Enter at least the common name (FQDN of your Orchestrator appliance) and click “Create” (on the right at the end of the page).

vco_package_sign_certificate_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

4. Now the CSR can be exported. The CSR is saved into a file called “vCO_SigningRequest.csr”.

vco_package_sign_certificate_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

5. Take the CSR and submit a certificate request at your CA. In my case, I took the content of the file and copied it into the corresponding text box of my CA. Make sure that you only use the content between “-----BEGIN NEW CERTIFICATE REQUEST-----” and “-----END NEW CERTIFICATE REQUEST-----“. I used a customized certificate template (check Derek Seaman’s blog for more information about VMware and SSL certificates!).

vco_package_sign_certificate_05

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

6. Download the Base 64 encoded certificate and give it a meaningful name (certnew.cer is NOT meaningful…).

vco_package_sign_certificate_06

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Import the CA certificate

1. Now we have to import the CA certificate. Otherwise, we would get an error message when we try to import the CA-signed certificate. If you use a Microsoft CA, you can get the CA certificate from the “Active Directory Certificate Services” website. Simply click “Download a CA certificate, certificate chain, or CRL” from the “Select a task:” list. Then save the Base 64 encoded certificate file by choosing “Download CA certificate”. Give the file a meaningful name.

import_ca_certificate_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

2. Start the Orchestrator Client and log in with an account that has administrator privileges. In my case this is my domain admin account (Administrator@lab.local), which is a member of the Orchestrator administrators group.

import_ca_certificate_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

3. Select “Tools” > “Certificate manager…” from the right top of the Orchestrator client.

import_ca_certificate_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

4. Click “Import certificate…”, choose the certificate file you saved a few seconds ago and import it.

import_ca_certificate_04

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

That’s it. Now we can move forward and replace the package signing certificate.

Replace the package signing certificate

1. Switch back to the Orchestrator Configuration website and choose the third option: “Import a certificate signing request signed by a certificate authority”.

vco_package_sign_certificate_07

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

2. Choose the saved certificate for your Orchestrator appliance and click “Import” (on the right at the end of the page).

vco_package_sign_certificate_08

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

That’s it! The package signing certificate is now replaced by a CA signed one.

vco_package_sign_certificate_09

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

As I already wrote: This certificate is not used to secure HTTPS. To get rid of the certificate warning when using the Orchestrator Client or the Orchestrator Configuration website, we need some additional steps.

Replace the client certificate

This certificate is used for HTTPS. After replacing this certificate, the certificate warning for the Orchestrator configuration page (port 8283), the application page (port 8281) and the appliance management page (port 5480) should disappear.

These steps can’t be done using the Orchestrator Configuration or the appliance management website. Let’s start an SSH session to the Orchestrator appliance.

1. Use SSH, connect to the Orchestrator appliance and login with root credentials. Change to the directory /etc/vco/app-server/security and take a backup of the Java Keystore (JKS).

vco:~ # cd /etc/vco/app-server/security
vco:/etc/vco/app-server/security # cp -a jssecacerts jssecacerts.old

2. Stop the Orchestrator service

vco:/etc/vco/app-server/security # service vco-server stop
Stopping tcServer
Instance is running as PID=3454, shutting down...
Instance is running PID=3454, sleeping for up to 60 seconds waiting for shutdown
Instance shut down gracefully

3. The utility “keytool” is used to manage the Java Keystore. The certificate we want to replace has the alias “dunes”. The password for the Java Keystore is “dunesdunes”. This password is valid for every Orchestrator installation! Before we can create a new keypair and export the CSR, the old key needs to be removed from the Java Keystore.

vco:/etc/vco/app-server/security # keytool -delete -alias dunes -keystore jssecacerts -storepass dunesdunes

4. Now a new keypair must be created.

vco:/etc/vco/app-server/security # keytool -keystore jssecacerts -storepass dunesdunes -genkey -alias dunes -keyalg RSA -sigalg SHA512withRSA
What is your first and last name?
[Unknown]: vco.lab.local
What is the name of your organizational unit?
[Unknown]: Lab
What is the name of your organization?
[Unknown]: vcloudnine.de
What is the name of your City or Locality?
[Unknown]:
What is the name of your State or Province?
[Unknown]:
What is the two-letter country code for this unit?
[Unknown]: DE
Is CN=vco.lab.local, OU=Lab, O=vcloudnine.de, L=Unknown, ST=Unknown, C=DE correct?
[no]: yes

Enter key password for <dunes>
(RETURN if same as keystore password):

Make sure that you just hit RETURN when keytool asks for the key password! This accepts that the same password is used as for the Java Keystore. By the way: “dunes” is a reference to Dunes Technologies, the company that originally developed the Orchestrator. This company was bought by VMware some years ago.

5. Export the CSR to a file.

vco:/etc/vco/app-server/security # keytool -keystore jssecacerts -storepass dunesdunes -certreq -alias dunes -file vco-dunes.csr

You can copy the file to your CA by using SCP. Otherwise use a simple cat and copy the content between “-----BEGIN NEW CERTIFICATE REQUEST-----” and “-----END NEW CERTIFICATE REQUEST-----” directly into the corresponding text box of the CA.

vco:/etc/vco/app-server/security # cat vco-dunes.csr
-----BEGIN NEW CERTIFICATE REQUEST-----
MIICpDCCAmICAQAwbzELMAkGA1UEBhMCREUxEDAOBgNVBAgTB1Vua25vd24xEDAOBgNVBAcTB1Vu
a25vd24xFjAUBgNVBAoTDXZjbG91ZG5pbmUuZGUxDDAKBgNVBAsTA0xhYjEWMBQGA1UEAxMNdmNv
---snip---
WU/mcQcQgYC0SRZxI+hMKBYTt88JMozIpuE8FnqLVHyNKOCjrh4rs6Z1kW6jfwv6ITVi8ftiegEk
O8yk8b6oUZCJqIPf4VrlnwaSi2ZegHtVJWQBTDv+z0kqA4GFAAKBgQCU1o/X0gdOU7AECclXb7FM
WJlxNSNs2mJfvvsXjh+hinY+SNA3k1QnZ2oLdlW/BM81KlQgO3i4tS1R08WC9UJ0VeEDPhNWkD0V
LRPPCfjT1jTdo+UFsWIJz/XAX0ATiVyDVnktToRTXuFaKUYTz7eU80H0Wp67pr1L0oV5mr5Q5aAw
MC4GCSqGSIb3DQEJDjEhMB8wHQYDVR0OBBYEFMWW1WR7ropcYC5jzBxicMi/LL3lMAsGByqGSM44
BAMFAAMvADAsAhRJYkO/JB7xnjYGlC9b5TYJJIQDXAIUCpCX4p/MV85glncQpc2deDlUfyY=
-----END NEW CERTIFICATE REQUEST-----

6. Use the CSR to issue a new certificate.

vco_client_certificate_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

7. Download the Base 64 encoded certificate.

vco_client_certificate_02

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

8. Copy the certificate (using SCP) to the Orchestrator appliance, e.g. to /root or /etc/vco/app-server/security. Depending on the path, you have to change the “-file” parameter! I’ve copied the certificate to /etc/vco/app-server/security.

vco:/etc/vco/app-server/security # keytool -keystore jssecacerts -storepass dunesdunes -importcert -alias dunes -file vco-client-cert.cer
Certificate reply was installed in keystore

Please note that you also have to import the CA certificate into the Java Keystore! In my case, the CA certificate was already imported during the initial certificate import from my vCenter Server Appliance, where I also use CA signed certificates. You can import the CA certificate using the “SSL Tab” on the Orchestrator Configuration website.

9. Start the Orchestrator service.

service vco-server start

10. Navigate to the Orchestrator website and check the success of the certificate import.

vco_client_certificate_03

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I still got a certificate warning when starting the Orchestrator client. But I am sure that this behavior is due to Java, because Java doesn’t know the CA.

Replace the appliance management website certificate

The appliance management website (port 5480) is also secured with HTTPS. By default, the certificate and private key are stored in a PEM file (the file is not protected by a passphrase), which is located at /opt/vmware/etc/lighttpd/server.pem. The PEM file includes the certificate AND the private key. It’s a bit tricky to export a PEM file with the private key from the Java Keystore.

1. First of all: Backup the old PEM file. I assume that you are still logged in on the Orchestrator appliance and still located at /etc/vco/app-server/security.

vco:/etc/vco/app-server/security # cp -a /opt/vmware/etc/lighttpd/server.pem /opt/vmware/etc/lighttpd/server.pem.old

2. Export the dunes key from the Java Keystore to a PKCS#12 store.

vco:/etc/vco/app-server/security # keytool -importkeystore -srckeystore jssecacerts -destkeystore dunes.p12 -deststoretype PKCS12 -srcalias dunes

3. Export a PEM file from the PKCS#12 keystore. Make sure that you add the “-nodes” parameter.

vco:/etc/vco/app-server/security # openssl pkcs12 -in dunes.p12 -passin pass:dunesdunes -out server.pem -nodes
MAC verified OK

4. Copy the PEM file to /opt/vmware/etc/lighttpd/server.pem.

vco:/etc/vco/app-server/security # cp server.pem /opt/vmware/etc/lighttpd/server.pem

5. Restart the lighttpd.

vco:/etc/vco/app-server/security # service vami-lighttp restart
Shutting down vami-lighttpd:done.
Starting vami-lighttpd:2014-11-30 14:57:09: (/build/mts/release/bora-1191928/vadk/src/vami/apps/lighttpd/1.4.29/src/network.c.239) warning: please use server.use-ipv6 only for hostnames, not without server.bind / empty address; your config will break if the kernel default for IPV6_V6ONLY changes
done.

You can safely ignore the warning. Check the state of the daemon using this command:

vco:/etc/vco/app-server/security # service vami-lighttp status
Checking vami-lighttpd status: 5838 ? 00:00:00 vami-lighttpd

Lighttpd is running.

6. Check the status of the appliance management website.

vco_mgmt_cert_01

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Congratulations! The certificate is working.

Final words

As always, working with certificates is challenging. My first attempts cost me an entire Sunday, especially because the documentation didn’t cover all aspects. I hope this blog post helps you to get through the certificate jungle. Feel free to provide feedback!

Install VMware Tools from VMware repository

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Today I stumbled over a nice workaround. While installing a CentOS 6 VM, I needed to install the VMware Tools. I don’t know why, but I got an error message regarding an inaccessible VMware Tools ISO.

vmware_tools_iso_not_accessable

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I remembered a blog post I read a few months ago about a VMware online repository from which the VMware Tools can be installed. You can download the repository information here. The RPM for RHEL can also be used for CentOS. Simply download and install the RPM:

[root@localhost ~]# wget http://packages.vmware.com/tools/esx/latest/repos/vmware-tools-repo-RHEL6-9.4.6-1.el6.x86_64.rpm
--2014-07-09 11:55:58-- http://packages.vmware.com/tools/esx/latest/repos/vmware-tools-repo-RHEL6-9.4.6-1.el6.x86_64.rpm
Resolving packages.vmware.com... 92.197.129.9, 92.197.129.24
Connecting to packages.vmware.com|92.197.129.9|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2548 (2.5K)

Saving to: “vmware-tools-repo-RHEL6-9.4.6-1.el6.x86_64.rpm”

100%[======================================================================================================>] 2,548 --.-K/s in 0.009s

2014-07-09 11:55:58 (265 KB/s) - “vmware-tools-repo-RHEL6-9.4.6-1.el6.x86_64.rpm” saved [2548/2548]

[root@localhost ~]# rpm -Uvh vmware-tools-repo-RHEL6-9.4.6-1.el6.x86_64.rpm
warning: vmware-tools-repo-RHEL6-9.4.6-1.el6.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 66fd4949: NOKEY
Preparing... ########################################### [100%]
package vmware-tools-repo-RHEL6-0:9.4.6-1.el6.x86_64 is already installed

Now you can use the repository information to install the VMware Tools.

[root@localhost ~]# yum install -y vmware-tools-esx-nox

That’s it.

centos_with_vmwaretools

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Automating updates during MDT 2013 Lite-Touch deployments

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I use Microsoft’s Deployment Toolkit (MDT) in my lab to deploy Windows VMs with Windows Server 2008 and Windows Server 2012. I described the installation and configuration of MDT in a small blog post series. Take a look into the intro post if you’re new to MDT. But the OS installation isn’t the time-consuming part of a deployment: it’s the installation of patches. Because of this, I decided to automate the patch installation and make it part of the OS installation.

The requirements

To automate the installation of patches, we need a running WSUS server in addition to MDT.

To save resources, I’ve installed WSUS on the server I also use for MDT. In Windows Server 2008 R2 and Server 2012 (R2), WSUS is an installable role. Because I use a Windows 2008 R2 host for MDT, I could simply add the role to the server. I will not describe the installation of the WSUS role, because it is really easy.

Configuration of MDT 2013

In principle, there are two changes:

  • Enabling Windows Update in the task sequence
  • Adding the WSUS server to the CustomSettings.ini file

First of all, you need to enable the Windows Update step in the task sequence. Start the Deployment Workbench and navigate to the task sequences. Go into the properties, switch to the “Task Sequence” tab and enable the “Windows Update (Post-Application Installation)” task by unchecking the “Disable this step” box on the “Options” tab.

configure_mdt_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Click “OK” and switch to the deployment share. Go into the “Control” directory and open the CustomSettings.ini. Add this line to the end of the [Default] section:

WSUSServer=http://FQDN:8530

Make sure that you change the FQDN to your WSUS host and save the file.
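
For orientation, the relevant part of the CustomSettings.ini might then look like this (a shortened, hypothetical example; your file will contain more properties and a different FQDN):

[Settings]
Priority=Default

[Default]
OSInstall=Y
SkipCapture=YES
WSUSServer=http://mdt01.lab.local:8530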

If everything went fine, you should see this during the deployment process:

configure_mdt_3

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The host that is currently being deployed should also appear in the WSUS console.

configure_mdt_4

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

How it works

During the deployment process, the script ZTIWindowsUpdate.wsf is called. This script connects to the WSUS server and installs all appropriate updates, service packs etc. This includes the latest version of the Windows Update API and the Microsoft Update binaries. Because the script installs ALL appropriate updates, service packs etc., there seems to be no way to exclude updates from being installed. Really no way? Well, there is one: you can use the WUMU_ExcludeKB switch in the CustomSettings.ini to exclude updates. Simply add one line for each KB that you want to suppress.

WUMU_ExcludeKB1=47110214
WUMU_ExcludeKB2=47110215