Category Archives: Server

Using Let’s Encrypt DNS-01 challenge validation with local BIND instance

I have been using Let’s Encrypt certificates for a while now. In the past, I used the standalone plugin (TLS-SNI-01) to get or renew my certificates, but now I have switched to the DNS plugin. I run my own name servers with BIND, so getting this plugin to work was very low-hanging fruit.


To get or renew a certificate, you need to provide some kind of proof that you are requesting the certificate for a domain that is under your control. No certificate authority (CA) wants to be the CA that hands out a certificate for google.com or amazon.com to a stranger…

The DNS-01 challenge uses TXT records in order to validate your ownership over a certain domain. During the challenge, the Automatic Certificate Management Environment (ACME) server of Let’s Encrypt will give you a value that uniquely identifies the challenge. This value has to be added as a TXT record to the zone of the domain for which you are requesting a certificate. The record will look like this:
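
A minimal sketch of such a record; example.com, the TTL and the token value are placeholders:

    _acme-challenge.example.com. 300 IN TXT "gfj9Xq...Rg85nM"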

This record is for a wildcard certificate. If you want to get a certificate for a host, you can add one or more TXT records like this:
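
Again a sketch, with placeholder host names and token values:

    _acme-challenge.www.example.com.  300 IN TXT "kD7rT2...9fLqWx"
    _acme-challenge.mail.example.com. 300 IN TXT "pZ3mVb...c41NyR"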

There is an IETF draft about the ACME protocol. Pretty interesting read!

Configure BIND for DNS-01 challenges

I run my own name servers with BIND on FreeBSD. The plugin for certbot automates the whole DNS-01 challenge process by creating, and subsequently removing, the necessary TXT records from the zone file using RFC 2136 dynamic updates.

First of all, we need a new TSIG (Transaction SIGnature) key. This key is used to authorize the updates.
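
One way to generate such a key, assuming the classic dnssec-keygen tool (newer BIND releases ship tsig-keygen for this); the key name “letsencrypt” and the algorithm are my choices:

    # creates a K<name>.+<alg>+<id>.key / .private file pair
    dnssec-keygen -a HMAC-SHA512 -b 512 -n HOST letsencrypt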

This key has to be added to the named.conf. The key is in the .key file.
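
The corresponding key statement in named.conf could look like this (key name, algorithm and secret are placeholders):

    key "letsencrypt" {
            algorithm hmac-sha512;
            secret "PASTE-THE-BASE64-SECRET-FROM-THE-KEY-FILE-HERE==";
    };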

The key is used to authorize the update of certain records. To allow the update of the TXT records, which are needed for the challenge, add this to the zone part of your named.conf.
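
A sketch of such a zone statement; the domain, the zone file path and the key name are assumptions:

    zone "example.com" {
            type master;
            file "/usr/local/etc/namedb/dynamic/example.com.db";
            update-policy {
                    grant letsencrypt name _acme-challenge.example.com. txt;
            };
    };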

The record names always start with _acme-challenge., followed by the domain name.

Now you need to create a config file for the RFC2136 plugin. This file includes the key as well as the IP address of the name server. If the name server is running on the same server that handles the DNS-01 challenge, you can use 127.0.0.1 as the name server address.
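
A minimal credentials file for the certbot-dns-rfc2136 plugin; the path and the key details are assumptions, and the file should only be readable by root:

    # e.g. /usr/local/etc/letsencrypt/rfc2136.ini
    dns_rfc2136_server = 127.0.0.1
    dns_rfc2136_port = 53
    dns_rfc2136_name = letsencrypt
    dns_rfc2136_secret = PASTE-THE-BASE64-SECRET-HERE==
    dns_rfc2136_algorithm = HMAC-SHA512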

Now we have everything in place. This is a --dry-run from one of my FreeBSD machines.
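
The command looks roughly like this; the domain names and the credentials path are placeholders:

    certbot certonly --dns-rfc2136 \
      --dns-rfc2136-credentials /usr/local/etc/letsencrypt/rfc2136.ini \
      -d example.com -d '*.example.com' --dry-run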

This is a snippet from the name server log file at the time of the challenge.

You might need to modify the permissions for the directory which contains the zone files. Usually the name server is not running as root. In my case, I had to grant write permissions to the “bind” group. Otherwise you might get “permission denied” errors.

 

Powering on a VM with shared VMDK fails after extending an EagerZeroedThick VMDK

I hope that you are not reading this blog post while searching for a solution for a failed cluster. If so, feel free to leave a comment if this blog post saved your evening or weekend. :)

Last Friday, a change at one of my customers went horribly wrong. I was not onsite, but they contacted me during the night from Friday to Saturday, because their most important Windows Server Failover Cluster was unable to start after extending a shared VMDK.


They tried something pretty simple: extending a virtual disk of a VM. That is something most of us do pretty often, and the customer had done it pretty often as well. It was a well-known task… except for the fact that the VM was part of a Windows Server Failover Cluster. With shared VMDKs. And the disks were Eager Zeroed Thick, because this is a requirement for shared VMDKs.

They extended the disk using the vSphere Web Client. And at this point, the change was doomed to fail. They tried to power on the VMs, but all they got was this error:

VMware ESX cannot open the virtual disk, “/vmfs/volumes/4c549ecd-66066010-e610-002354a2261b/VMNAME/VMDKNAME.vmdk” for clustering. Please verify that the virtual disk was created using the ‘thick’ option.

A shared VMDK is a VMDK in multi-writer mode. This VMDK has to be created as Thick Provision Eager Zeroed. And if you wish to extend this VMDK, you must use vmkfstools with the option -d eagerzeroedthick. If you extend the VMDK using the Web Client, the extended portion of the disk will become Lazy Zeroed!
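
A sketch of the extend operation; the new size and the paths are placeholders:

    # grow the shared VMDK to its new size, eager zeroed
    vmkfstools -X 200G -d eagerzeroedthick /vmfs/volumes/DATASTORE/VMNAME/VMNAME.vmdk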

VMware has described this behaviour in the KB1033570 (Powering on the virtual machine fails with the error: Thin/TBZ disks cannot be opened in multiwriter mode). There is also a blog post by Cormac Hogan at VMware, who has described this behaviour.

That’s a screenshot from the failed cluster. Check out the type of the disk (Thick-Provision Lazy-Zeroed).


You must use vmkfstools to extend a shared VMDK – but vmkfstools is also the solution if you have fallen into this pitfall. Clone the VMDK with the option -d eagerzeroedthick.
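
Something along these lines; source and destination paths are placeholders:

    vmkfstools -i /vmfs/volumes/DATASTORE/VMNAME/VMNAME.vmdk \
      -d eagerzeroedthick /vmfs/volumes/DATASTORE/VMNAME/VMNAME_ezt.vmdk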

Another solution, which was new to me, is to use Storage vMotion. You can migrate the “broken” VMDK to another datastore and change the disk format during the Storage vMotion. This solution is described in the “Notes” section of KB1033570.

Both ways will fix the problem. The result will be a Thick Provision Eager Zeroed VMDK, which will allow the VMs to be successfully powered on.

CloudFlare API v4 and Fail2ban: Fixing the unban action

In January 2017, I wrote an article about how to protect your WordPress blog using the WP Fail2Ban plugin, fail2ban on your Linux/ FreeBSD host, and CloudFlare. Back then, fail2ban was using the CloudFlare API v1, which had already been deprecated since November 2016.


Although the actions were later updated to use CloudFlare API v4, I still had problems with the unbanning of IP addresses. IP addresses were banned, but the unban action failed.

This is the unban action, which is included in fail2ban (taken from fail2ban-0.10.3.1 which is shipped with FreeBSD 11.1-RELEASE-p10):

And this is the unban action, which finally solved this issue:
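
I can only sketch the corrected action here; the exact quoting and JSON parsing in the stock cloudflare.conf may differ, and the decisive change is simply the tr -d '\n' inserted before the rule ID is extracted from the API response:

    # sketch of the fixed actionunban (not the verbatim stock file);
    # <ip>, <cfuser> and <cftoken> are fail2ban tags filled in at runtime
    actionunban = curl -s -o /dev/null -X DELETE \
                -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-Key: <cftoken>' \
                "https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules/$( \
                curl -s -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-Key: <cftoken>' \
                'https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules?mode=block&configuration_target=ip&configuration_value=<ip>' \
                | tr -d '\n' | cut -d'"' -f6)"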

I found the solution at serverfault.com. The only difference is an additional tr -d '\n'  in the last line of the statement. Kudos to Jake for fixing this!

To prevent the action file from being overwritten during updates, you should copy the original cloudflare.conf located in the action.d directory, e.g. to mycloudflare.conf, and use the copied action file in your jail definition.
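
Roughly like this; the paths and the jail name are assumptions for a FreeBSD host running the WP Fail2Ban setup:

    cp /usr/local/etc/fail2ban/action.d/cloudflare.conf \
       /usr/local/etc/fail2ban/action.d/mycloudflare.conf

    # jail.local (hypothetical jail for the WP Fail2Ban log entries)
    [wordpress]
    enabled = true
    filter  = wordpress-hard
    logpath = /var/log/messages
    action  = mycloudflare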

Windows Network Policy Server (NPS) won’t log failed login attempts

This is just a short, but interesting blog post. When you have to troubleshoot authentication failures in a network that uses Windows Network Policy Server (NPS), the Windows event log is absolutely indispensable. The event log offers everything you need. The success and failure event log entries include all necessary information to get you back on track. If only failure events were actually logged…


Today, I was playing with Alcatel-Lucent Enterprise OmniSwitches and Access Guardian in my lab. Access Guardian refers to a set of OmniSwitch security functions that work together to provide a dynamic, proactive network security solution:

  • Universal Network Profile (UNP)
  • Authentication, Authorization, and Accounting (AAA)
  • Bring Your Own Device (BYOD)
  • Captive Portal
  • Quarantine Manager and Remediation (QMR)

I plan to publish some blog posts about Access Guardian in the future, because it is a pretty interesting topic. So stay tuned. :)

802.1X was no big deal, but MAC-based authentication failed. Okay, let’s take a look into the event log of the NPS… okay, there are the success events for my 802.1X authentication… but where are the failed login attempts? Not a single one was logged. A short Google search pointed me in the right direction.

Failed logon/ logoff events were not logged

In this case, the NPS role was installed on a Windows Server 2016 domain controller. And it was a German installation, so the output of the commands is also in German. If you have an OS installed in English, you must replace “Netzwerkrichtlinienserver” with “Network Policy Server”.

Right-click the PowerShell Icon and open it as Administrator. Check the current settings:
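
auditpol shows the audit policy for the NPS subcategory (on an English OS, use the English subcategory name as noted above):

    auditpol /get /subcategory:"Netzwerkrichtlinienserver"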

The output showed that only successful logon and logoff events were being logged.

The options /success:enable /failure:enable activate the logging of successful and failed logon and logoff attempts.
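
The full command looks like this (again, use the English subcategory name on an English OS):

    auditpol /set /subcategory:"Netzwerkrichtlinienserver" /success:enable /failure:enable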

Single Sign On (SSO) with RemoteApps on Windows Server 2012 (R2)

A RemoteApp is an application that runs on a Remote Desktop Session Host (RDSH); only the display output is sent to the client. Because the application runs on an RDSH, you can easily deliver applications to end users. Another benefit is that data does not leave the datacenter. Software and data are kept inside the datacenter. RemoteApps can be used and deployed in various ways:

  • Users can start RemoteApps through the Remote Desktop Web Access
  • Users can start RemoteApps using a special RDP file
  • Users can simply start a link on the desktop or from the start menu (RemoteApps and Desktop connections deployed by an MSI or a GPO)
  • or they can click on a file that is associated with a RemoteApp

Even in times of VDI (LOL…), RemoteApps can be quite handy. You can deploy virtual desktops without any installed applications. Applications can then be delivered using RemoteApps. This can be handy if you migrate from RDSH/ Citrix published desktops to VMware Horizon View, or if you are already using RDSH and you want to try VMware Horizon View.

But three things can really spoil the usage of RemoteApps:

  • certificate warnings
  • warnings about an untrusted publisher
  • asking for credentials (no Single Sign On)

Avoid certificate warnings

As part of the RDS deployment, the assistant kindly asks for certificates. Sure, you can deploy self-signed certificates, but that’s not a good idea. You should deploy certificates from your internal certificate authority.

RDS Certificate Settings


This is a screenshot from my tiny single server RDS farm. Make sure that you use the correct names for the certificates! If you are using a RDS farm, make sure that you include the DNS name of the RD Connection Broker HA cluster.

If you want to make the RD Web Access publicly available, make sure that you include the public DNS name into the certificate.

Untrusted Publisher

When you try to open a RemoteApp, you might get this message:

RDS Connection Warning


Annoying, isn’t it? But easy to fix. Remember the certificates you deployed during the RDS deployment? You need the certificate thumbprint of the publisher certificate (check the screenshot from the deployment properties > “RD Connection Broker – Publishing”). This is a screenshot from my lab:

RDS Certificate Thumbprint


Take this thumbprint, open a PowerShell window, and convert the thumbprint into a format that can be used with the GPO we have to build.
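
A quick way to do this; the thumbprint value is of course a placeholder:

    # paste the thumbprint copied from the certificate dialog
    $thumbprint = "aa bb cc dd ee ff 00 11 22 33 44 55 66 77 88 99 aa bb cc dd"
    $thumbprint.Replace(" ", "").ToUpper()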

The result is a string without spaces and with only uppercase letters. Now we need to create a GPO. This GPO has to be linked to the OU in which the computers or users that should use the RemoteApp reside. In my example, I use the user part of a GPO, so this GPO has to be linked to the OU in which the users reside. The necessary GPO setting can be found here:

User Configuration > Policies > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Connection Client > Specify SHA1 thumbprints of certificates representing trusted .rdp publishers

RemoteApp GPO Settings SHA1 Thumbprint


I use the same GPO to publish the default connection URL. With this setting configured, the users automatically get the published RemoteApps in their start menu. You can find the setting here:

User Configuration > Policies > Administrative Templates > Windows Components > Remote Desktop Services > RemoteApp and Desktop Connections > Specify default connection URL

RemoteApp Connection URL


Credentials dialog

At this point, you will still get an “Asking for credentials” dialog. To allow the client to pass the current user’s login information to the RDS host, we need to configure an additional setting. Create a new GPO and link this GPO to the OU in which the computers reside on which the RemoteApps should be used. The setting can be found here:

Computer Configuration > Policies > Administrative Templates > System > Credentials Delegation > Allow delegating default credentials

RemoteApp Credential Delegation


You have to add the FQDN of your RD Connection Broker server or farm. Please make sure that you add the “TERMSRV” prefix! Because I use a single server deployment, my RD Connection Broker is also my RDS host.
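
The list entries look like this; the host names are placeholders, and the wildcard form covers every host in the domain:

    TERMSRV/rdcb.example.com
    TERMSRV/*.example.com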

The final test

Make sure that all group policies were applied. Open the Remote Desktop Connection client and enter the RDS farm name. If everything is configured properly, you should be connected without being asked for credentials.

RDS Connection


The same should happen if you try to start a RemoteApp.

RDS Connection Notepad++


Troubleshooting

If you are still getting asked for credentials, something is wrong with the credentials delegation. Check the GPO and whether it is linked to the correct OU. If you are getting certificate warnings, check the names that you have included in the certificates. Warnings about untrusted publishers may be caused by a wrong SHA1 thumbprint (or a wrong format).

How to monitor ESXi host hardware with SNMP

The Simple Network Management Protocol (SNMP) is a protocol for monitoring and configuring network-attached devices. SNMP exposes data in the form of variables and values. These variables can be queried or set. A query retrieves the value of a variable, a set operation assigns a value to a variable. The variables are organized in a hierarchy and each variable is identified by an object identifier (OID). The management information base (MIB) describes this hierarchy. MIB files (simple text files) contain metadata for each OID. They are necessary for the translation of a numeric OID into a human-readable format. SNMP knows two device types:

  • the managed device which runs the SNMP agent
  • the network management station (NMS) which runs the management software

The NMS queries the SNMP agent with GET requests. Configuration changes are made using SET requests. The SNMP agent can inform the NMS about state changes using an SNMP trap message. The easiest form of authentication is the SNMP community string.

SNMP is pretty handy and it’s still used, especially for monitoring and managing networking components. SNMP has the benefit that it’s very lightweight. Monitoring a system with WBEM or using an API can cause slightly more load compared to SNMP. Furthermore, SNMP is an Internet-protocol standard. Nearly every device supports SNMP.

Monitoring host hardware with SNMP

Why should I monitor my ESXi host hardware with SNMP? The vCenter Server can trigger an alarm and most customers use applications like VMware vRealize Operations, Microsoft System Center Operations Manager, or HPE Systems Insight Manager (SIM). There are better ways to monitor the overall health of an ESXi host. But sometimes you want to get some stats about the network interfaces (throughput), or you have a script that should do something, if a NIC goes down or something else happens. Again, SNMP is very resource-friendly and widely supported.

Configure SNMP on ESXi

I focus on ESXi 5.1 and beyond. The ESXi host is called “the SNMP Agent”. We don’t configure traps or trap destinations. We just want to poll the SNMP agent using SNMP GET requests. The configuration is done using esxcli . First of all, we need to set a community string and enable SNMP.
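
The two calls look like this; the community string “public” is only an example, pick your own:

    esxcli system snmp set --communities public
    esxcli system snmp set --enable true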

That’s it! The necessary firewall ports and services are opened and started automatically.

Querying the SNMP agent

I use a CentOS VM to show you some queries. The Net-SNMP package contains the tools snmpwalk and snmpget. To install the Net-SNMP utilities, simply use yum.
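
On CentOS the tools are in the net-snmp-utils package:

    yum install net-snmp-utils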

Download the VMware SNMP MIB files, extract the ZIP file, and copy the content to /usr/share/snmp/mibs.

Now we can use snmpwalk to “walk down the hierarchy”. Be warned: the complete snmpwalk output has more than 4000 lines, so I will only show small parts of it.
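
A walk over the whole agent looks like this; the IP address and the community string are placeholders:

    snmpwalk -v 2c -c public 192.168.200.20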

Now we can search for interesting parts. If you want to monitor the link status of the NICs, try this:
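
For the interface descriptions, walk only the IF-MIB::ifDescr subtree:

    snmpwalk -v 2c -c public 192.168.200.20 IF-MIB::ifDescr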

As you can see, I used a subtree of the whole hierarchy (IF-MIB::ifDescr). This is the “translated” OID. To get the numeric OID, you have to add the option  -O fn to snmpwalk .

You can use snmptranslate  to translate an OID.
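
For example:

    snmptranslate -On IF-MIB::ifDescr
    # .1.3.6.1.2.1.2.2.1.2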

So far, we have only the description of the interfaces. With a little searching, we find the status of the interfaces (I stripped the output).
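
The operational status lives in the IF-MIB::ifOperStatus subtree:

    snmpwalk -v 2c -c public 192.168.200.20 IF-MIB::ifOperStatus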

ifOperStatus.1  corresponds with ifDescr.1 , ifOperStatus.2  corresponds with ifDescr.2  and so on. The ifOperStatus corresponds  with the status of the NICs in the vSphere Web Client.

nic_status_web_client

If you want to monitor the fans or power supplies, use these OIDs.

Many possibilities

SNMP offers a simple and lightweight way to monitor a managed device. It’s not a replacement for vCenter, vROps or SCOM. But it can be an addition, especially because SNMP is an internet-protocol standard.

How to dramatically improve website load times

Over the last weeks, I’ve tried to improve the performance of my blog. The site was very slow and the page load times varied between 5 and 10 seconds. Much too long! I’ve reduced time-consuming plugins, checked the size of pictures, checked CSS and HTML for misconfigurations/ slow code, and tuned the database. The page load times have not really improved.

Yesterday, I checked the httpd.conf on my webserver and found a little typo (an accidentally commented-out line). After a restart of the Apache webserver, the page load times improved dramatically (down to 2 to 3 seconds). What had happened?

HTTP keep-alive

HTTP keep-alive, sometimes also called “HTTP persistent connection”, was designed to transfer multiple HTTP requests and responses over a single TCP connection. This is much better than opening a new connection for every single request/ response pair. The benefits of HTTP keep-alive are:

  • lower CPU usage
  • lower memory usage
  • reduced latency due to reduced requests/ handshaking

These benefits are even more important if you use HTTPS connections (and vcloudnine.de is HTTPS-only…), because each new HTTPS connection needs much more CPU time and more round-trips than an unencrypted HTTP connection. This little picture clarifies the differences.

HTTP_persistent_connection


If you’re using Apache, you can enable HTTP keep-alive with a single line in the httpd.conf.
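
That line is simply KeepAlive On; the tuning directives below it are optional, and the values shown are just common examples:

    KeepAlive On
    # optional tuning, values are examples
    MaxKeepAliveRequests 100
    KeepAliveTimeout 5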

Further information can be found in the documentation of Apache (Apache webserver 2.2 and 2.4).

HPE Hyper Converged 380 – A look under the hood

In March 2016, HPE CEO Meg Whitman announced a ProLiant-based HCI solution that should be easier to use and cheaper than Nutanix.

This isn’t HPE’s first dance on this floor. In August 2015, HP launched the Hyper Converged 250 System (HC250), which is based on the Apollo server platform. The HW design of the HC250 comes close to a Nutanix block, because the Apollo platform supports up to four nodes in 2U. Let me say this clearly: the Hyper Converged 380 (HC380) is not a replacement for the HC250! And before the HC250, HPE offered the ConvergedSystem 200-HC StoreVirtual and 200-HC EVO:RAIL (different models).

The HC380 is based on the ProLiant DL380 Gen9 platform. The DL380 Gen9 is one of, if not the, best-selling x86 server on the market. Instead of developing everything from scratch, HPE built the new HC380 from different, already available HPE products. With one exception: HPE OneView User Experience (UX). It was developed from scratch and consolidates all management and monitoring tasks into a single console. The use of already available components was the reason for the short time-to-market (TTM) of the HC380.

Currently, the HC380 can only run VMware vSphere (HPE CloudSystem uses VMware vSphere). Support for Microsoft Hyper-V and Citrix XenServer will be added later. If you wish to run Microsoft Hyper-V, check the HC250 or wait until it’s supported with the HC380.

What flavor would you like?

The HC380 is available in three editions (use cases):

  • HC380 (Virtualization)
  • HC380 (HPE CloudSystem)
  • HC380 (VDI)

All three use cases are orderable using a single SKU and include two DL380 Gen9 nodes (2U). You can add up to 14 expansion nodes, so that you can have up to 16 dual-socket DL380 Gen9.

Each node comes with two Intel Xeon E5 CPUs. The exact CPU model has to be selected before ordering. The same applies to the memory (128 GB or 256 GB per node, up to 1.5 TB) and the disk groups (up to three disk groups, each with 4.5 to 8 TB usable capacity per block, 8 drives either SSD/ HDD or all HDD, with a maximum of 25 TB usable per node). The memory and disk group configuration depends on the specific use case (virtualization, CloudSystem, VDI). The same applies to the number of network ports (something between 8x 1 GbE and 6x 10 GbE plus 4x 1 GbE). For VDI, customers can add NVIDIA GRID K1, GRID K2 or Tesla M60 cards.

VMware vSphere 6 Enterprise or Enterprise Plus are pre-installed and licences can be bought from HPE. Interesting note from the QuickSpecs:

NOTE: HPE Hyper Converged 380 for VMware vSphere requires valid VMware vSphere Enterprise or higher, and vCenter licenses. VMware licenses can only be removed from the order if it is confirmed that the end-customer has a valid licenses in place (Enterprise License Agreement (ELA), vCloud Air Partner or unused Enterprise Purchasing Program tokens).

Hewlett Packard Enterprise supports VMware vSphere Enterprise, vSphere Enterprise Plus and Horizon on the HPE Hyper Converged 380.

No support for vSphere Standard or Essentials (Plus)! Let’s see how HPE will react to the fact that VMware will phase out vSphere Enterprise licenses.

The server includes 3y/ 3y/ 3y onsite support with next business day response. Nevertheless, at least 3 years of HPE Hyper Converged 380 solution support are required according to the latest QuickSpecs.

What’s under the hood?

As I already mentioned, the HC380 was built from well-known HPE products. Only HPE OneView User Experience (UX) was developed from scratch. OneView User Experience (UX) consolidates the following tasks into a single console (source: QuickSpecs):

  • Virtual machine (VM) vending (create, edit, delete)
  • Hardware/driver and appliance UI frictionless updates
  • Advanced capacity and performance analytics (optional)
  • Backup and restore of appliance configuration details
  • Role-based access
  • Integration with existing LDAP or Active Directory
  • Physical and virtual hardware monitoring

Pretty cool fact: HPE OneView User Experience (UX) will be available for the HC250 later this year. A 2-node cluster consists not only of the two DL380 Gen9 servers, but also of three VMs:

  • HC380 Management VM
  • HC380 OneView VM
  • HC380 Management UI VM

The Management VM is used for VMware vCenter (local install) and HPE OneView for vCenter. You can use a remote vCenter (or a vCenter Server Appliance), but you have to make sure that the remote vCenter has HPE OneView for vCenter integrated. The OneView VM runs HPE OneView for HW/ SW management. The Management UI VM runs HPE OneView User Experience.

The shared storage is provided by HPE StoreVirtual VSA. A VSA is running on each node. As you might know, StoreVirtual VSA comes with an all-inclusive license. No need to buy additional licenses. You can have it all: Snapshots, Remote Copy, Clustering, Thin Provisioning, Tiering etc. The StoreVirtual VSA delivers sustainable performance, a good VMware vSphere integration and added value, for example support for Veeam Storage Snapshots.

When dealing with a 2-node cluster, the 25 TB usable capacity per node in fact means 25 TB usable for the whole 2-node cluster. This is because of the Network RAID 1 between the two StoreVirtual VSAs: the data is mirrored between the VSAs. When adding more nodes, the data is striped across the nodes in the cluster (Network RAID 10+2).

Also important in the case of a 2-node cluster: the quorum. At least two StoreVirtual VSAs form a cluster. As in every cluster, you need some kind of quorum. StoreVirtual 12.5 added support for an NFSv3-based quorum witness. This is in fact an NFS file share which has to be available to both nodes. It is only supported in 2-node clusters and I highly recommend using it. I have a customer that uses a Raspberry Pi for this…

Start the engine

You have to meet some requirements before you can start.

  • 1 GbE connections for each node’s iLO and 1 GbE ports
  • 1 GbE or 10 GbE connections for each node’s FlexLOM ports
  • Windows-based computer directly connected to a node (MacOS X or Linux should also work)
  • VMware vSphere Enterprise or Enterprise Plus licenses
  • enough IP addresses and VLANs (depending on the use case)

For general purpose server virtualization, you need at least three subnets and three VLANs:

  • Management
  • vMotion
  • Storage (iSCSI)

Although you have the choice between a flat (untagged) and a VLAN-tagged network design, I would always recommend the VLAN-tagged approach. It’s highly recommended to use multiple VLANs to get the traffic separated. The installation guide includes worksheets and examples to help you plan the deployment. For a 2-node cluster you need at least:

  • 5 IP addresses for the management network
  • 2 IP addresses for the vMotion network
  • 8 IP addresses for the iSCSI storage network

You should leave space for expansion nodes. Proper planning saves you trouble later.

HP OneView InstantOn is used for the automated deployment. It guides you through the necessary configuration steps. HPE says that the deployment requires less than 60 minutes and all you need to enter are

  • IP addresses
  • credentials
  • VMware licenses

After the deployment, you have to install the StoreVirtual VSA licenses. Then you can create datastores and, finally, VMs.

hpehc380_ux


Summary

Hyper-converged has nothing to do with the form factor. Despite the fact that a 2-node cluster comes in 4U, the HC380 has everything you would expect from an HCIA. The customers will decide whether HPE keeps its promise. The argument for the HC380 shouldn’t be the lower price compared to Nutanix or other HCI players. Above all, HPE should not repeat the mistake of the 200-HC EVO:RAIL: too buggy and too expensive. The HC380 combines known and mature products (ProLiant DL380 Gen9, StoreVirtual VSA, OneView). It’s now up to HPE.

I have several small and mid-sized customers that are running VMware vSphere environments with two to six nodes. The HC380 for VDI can also be very interesting.

HP ProLiant BL460c Gen9: MicroSD card missing during ESXi 5.5 setup

Today, I was at a customer to prepare a two-node vSphere cluster for some MS SQL Server tests. Nothing fancy, just two HP ProLiant BL460c Gen9 blades and two virtual volumes from an HP 3PAR. Each blade had two 400 GB SSDs, two 64 GB M.2 SSDs and a 1 GB MicroSD card. Usually, I install ESXi on an SD card, in this case a MicroSD card. The SSDs were dedicated to PernixData FVP. Although I saw the MicroSD card in the boot menu, ESXi didn’t show it as an installation target.

bl460_no_sdcard


I’ve read a lot about similar observations from other users, but no solution seemed to be plausible. It’s not a solution to switch from UEFI to legacy BIOS, or to play with power management settings. BIOS and iLO firmware were up to date. But disabling USB3 support seemed to be a possible solution.

You can disable USB3 support in the RBSU.

bl460_rbsu


After a reboot, the MicroSD card appeared as installation target during the ESXi setup.

bl460_with_sdcard


I never noticed this behaviour with the ProLiant DL360 Gen9, DL380 Gen9, DL560 Gen9 or DL580 Gen9. I only saw this with the BL460c Gen9. The affected blade servers had System ROM I36 05/06/2015 and I36 09/24/2015, as well as iLO4 2.30 installed.

Exchange Management Shell (EMS) and new PowerShell releases

Some days ago, I installed a new Exchange 2013 CU11 for some tests in my lab. Nothing fancy, just a single server deployment on a Windows Server 2012 R2 VM. I deployed this Windows Server from a template, which had been updated with the latest Windows patches and WMF some days before. The Exchange setup went smoothly. I updated the SSL certificates and the internal and external URLs for the virtual directories. Then I started the Exchange Management Shell (EMS) to update the Autodiscover URL in the service connection point (SCP) of the Active Directory.

Well… that doesn’t look successful. I quickly switched to a PowerShell window and imported the Exchange snap-in manually.
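
For Exchange 2013, the manual import looks like this (run in an elevated PowerShell on the Exchange server):

    Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn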

Looks better, doesn’t it?

I compared my lab setup to a running Exchange 2013 single server deployment and I stumbled over the PowerShell version. In addition, I found the Windows Management Framework 5 Production Preview (KB3066437) on my freshly deployed Windows Server 2012 R2 VM.
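
The installed PowerShell/ WMF version can be checked quickly:

    $PSVersionTable.PSVersion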

After checking the Exchange Server Supportability Matrix, it was clear what had happened: WMF 5 is not supported (Source). Not supported with Exchange 2013, and also not supported with Exchange 2016.

exchange_supported_wmf

After I had removed KB3066437 from my Exchange server, the EMS loaded successfully.
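
One way to remove the update, using the KB number from above (a reboot will typically be required afterwards):

    wusa.exe /uninstall /kb:3066437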

You should ALWAYS check whether installed applications are supported with newer versions of PowerShell/ WMF! Currently, no Exchange version is supported with PowerShell 5/ WMF 5.