Tag Archives: proliant

VMware ESXi 6.7 memory health warnings after ProLiant SPP

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

During the deployment of a vSAN cluster consisting of multiple HPE ProLiant DL380 Gen10 hosts, I noticed a memory health warning after updating the firmware using the Service Pack for ProLiant (SPP). The error was definitely not shown before the update, so it was clear that this was not a real hardware issue. Furthermore: all hosts showed this error.

Memory health status after SPP

The same day, a customer called me and asked about a strange memory health error after he had updated all of his hosts with the latest SPP…

My first guess, that this was not caused by a hardware malfunction, was correct. HPE published an advisory about this issue:

The Memory Sensor Status Reported in the vSphere Web Client Is Not Accurate For HPE ProLiant Gen10 and Gen10 Plus Servers Running VMware ESXi 6.5/6.7/7.0 With HPE Integrated Lights-Out 5 (iLO 5) Firmware Version 2.30

To fix this issue, you have to update the iLO 5 firmware to version 2.31. You can do this manually using the iLO 5 web interface, or you can add the file to the SPP. I’ve added the BIN file to the USB stick with the latest SPP.

If you want to update the firmware manually, simply upload the BIN file using the built-in firmware update function.

  1. Navigate to Firmware & OS Software in the navigation tree, and then click Update Firmware
  2. Select the Local file option and browse to the BIN file
  3. To save a copy of the component to the iLO Repository, select the Also store in iLO Repository check box
  4. To start the update process, click Flash

You can download the latest iLO 5 firmware (version 2.31) from HPE using this link. After the firmware update, the error will resolve itself.
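If you want to check the installed iLO 5 firmware version without logging in to the web interface, you can query the Redfish API. This is a minimal sketch: the hostname and credentials are placeholders, and the get_fw_version shell helper is my own illustration, not an HPE tool.

```shell
# Helper that extracts the FirmwareVersion field from a Redfish
# Manager resource (JSON on stdin)
get_fw_version() {
  python3 -c 'import json,sys; print(json.load(sys.stdin).get("FirmwareVersion", "unknown"))'
}

# Query the iLO 5 Redfish API (hostname and credentials are placeholders):
# curl -sk -u Administrator:password https://ilo-hostname/redfish/v1/Managers/1/ | get_fw_version
```

If the reported version is below 2.31, the memory health warning described above is to be expected.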

Only ESXi 6.7 is affected, and only ESXi 6.7 running on HPE ProLiant hosts, regardless of whether they are ML, DL, or BL series.

Fan health sensors report false alarms on HPE Gen10 Servers with ESXi 6.7

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I’ve got several mails and comments about this topic. It looks like the latest ESXi 6.7 updates are causing some trouble on HPE ProLiant Gen10 servers.

I’ve blogged about recurring host hardware sensor state alarm messages some weeks ago. A customer noticed them after an update. Last week, I got the first comments under this blog post about fan failure messages after applying the latest ESXi 6.7 updates. Then more and more customers asked me about this, because they got these messages in their environments too after applying the latest updates.

Last Saturday I tweeted my blog post to give a hint to my followers who may be experiencing the same problem.

Fortunately one of my followers (Thanks Markus!) pointed me to a VMware KB article with a workaround: Fan health sensors report false alarms on HPE Gen10 Servers with ESXi 6.7 (78989).

This is NOT a solution, but a workaround. Keep that in mind.

Thanks again to Markus. Make sure to visit his awesome blog (MY CLOUD-(R)EVOLUTION), especially if you are interested in vSphere, Veeam and automation!

VMware ESXi 6.7: Recurring host hardware sensor state alarm

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

If you found this blog post because you are searching for a solution for a FAN FAILURE on your ProLiant Gen10 hardware after applying the latest ESXi 6.7 patches, then use this shortcut for the workaround: Fan health sensors report false alarms on HPE Gen10 Servers with ESXi 6.7


I had a really annoying problem at one of my customers. After deploying new VMware ESXi hosts (HPE ProLiant DL380 Gen10) along with an upgrade of the vCenter Server Appliance to 6.7 U2, the customer reported recurring host hardware sensor state alarm messages in the vCenter for all hosts.

After acknowledging the alarm, it recurred after a couple of minutes or hours. The hardware was fine; no errors or warnings were noticed in the iLO Management Log. But vCenter periodically reported a Sensor -1 type error in the Events window. The /var/log/syslog.log contained messages like this:

2019-11-29T04:39:48Z sfcb-vmw_ipmi[4263212]: IpmiIfcSelGetInfo: IPMI_CMD_GET_SEL_INFO cc=0xc1
2019-11-29T04:39:49Z sfcb-vmw_ipmi[4263212]: IpmiIfcSelGetInfo: IPMI_CMD_GET_SEL_INFO cc=0xc1
2019-11-29T04:39:50Z sfcb-vmw_ipmi[4263212]: IpmiIfcSelGetInfo: IPMI_CMD_GET_SEL_INFO cc=0xc1
2019-11-29T04:39:51Z sfcb-vmw_ipmi[4263212]: IpmiIfcSelGetInfo: IPMI_CMD_GET_SEL_INFO cc=0xc1
2019-11-29T04:39:52Z sfcb-vmw_ipmi[4263212]: IpmiIfcSelGetInfo: IPMI_CMD_GET_SEL_INFO cc=0xc1

Sure, you can ignore this. But you shouldn’t, because these events can cause the vCenter database to grow in size. vCenter can crash once the SEAT partition usage goes above the 95% threshold. So you better fix this!

Long story short: this bug is fixed with the latest November updates for ESXi 6.7 U3. A workaround is to disable the WBEM service. The WBEM service might be re-enabled after a reboot; in this case you have to disable the sfcbd-watchdog service.
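The workaround can be applied via SSH on the affected host. A sketch, assuming an unmodified ESXi 6.7 installation:

```shell
# Check whether the WBEM service is currently enabled
esxcli system wbem get

# Disable the WBEM service
esxcli system wbem set --enable false

# If WBEM comes back after a reboot, also stop the sfcbd watchdog
/etc/init.d/sfcbd-watchdog stop
```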

But the best way to solve this is to install the latest patches (VMware ESXi 6.7, Patch Release ESXi670-201911001).

The Meltdown/ Spectre shortcut blogpost for Windows, VMware and HPE

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

TL;DR


I will try to update this blog post regularly!

Change History

01-13-2018: Added information regarding VMSA-2018-0004
01-13-2018: HPE has pulled Gen8 and Gen9 system ROMs
01-13-2018: VMware has updated KB52345 due to issues with Intel microcode updates
01-18-2018: Updated VMware section
01-24-2018: Updated HPE section
01-28-2018: Updated Windows Client and Server section
02-08-2018: Updated VMware and HPE section
02-20-2018: Updated HPE section
04-17-2018: Updated HPE section


Many blog posts have been written about the two biggest security vulnerabilities discovered so far. In fact, we are talking about three different vulnerabilities:

  • CVE-2017-5715 (branch target injection)
  • CVE-2017-5753 (bounds check bypass)
  • CVE-2017-5754 (rogue data cache load)

CVE-2017-5715 and CVE-2017-5753 are known as “Spectre”, CVE-2017-5754 is known as “Meltdown”. If you want to read more about these vulnerabilities, please visit meltdownattack.com.

Multiple steps are necessary to be protected, and the necessary information is often repeated, but distributed over several websites: vendor sites, articles, blog posts and security announcements.

Two simple steps

Two (simple) steps are necessary to be protected against these vulnerabilities:

  1. Apply operating system updates
  2. Update the microcode (BIOS) of your server/ workstation/ laptop

If you use a hypervisor to virtualize guest operating systems, then you have to update your hypervisor as well. Just treat it like an ordinary operating system.

Sounds pretty simple, but it’s not. I will focus on three vendors in this blog post:

  • Microsoft
  • VMware
  • HPE

Let’s start with Microsoft. Microsoft published the security advisory ADV180002 on 01/03/2018.

Microsoft Windows (Client)

The necessary security updates are available for Windows 7 (SP1), Windows 8.1, and Windows 10. The January 2018 security updates are ONLY offered in one of these cases (Source: Microsoft):

  • A supported anti-virus application is installed
  • Windows Defender Antivirus, System Center Endpoint Protection, or Microsoft Security Essentials is installed
  • A registry key was added manually

To add this registry key, execute the following in an elevated CMD. Do not add this registry key if you are running an unsupported antivirus application! Please contact your antivirus vendor! The key only has to be added manually if NO antivirus application is installed; otherwise your antivirus application will add it!

reg ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat /f /v cadca5fe-87d3-4b96-b7fb-a231484277cc /t REG_DWORD /d 0
  • Windows 10 (1709): KB4056892
  • Windows 10 (1703): KB4056891
  • Windows 10 (1607): KB4056890
  • Windows 10 (1511): KB4056888
  • Windows 10 (initial release): KB4056893
  • Windows 8.1: KB4056898
  • Windows 7 SP1: KB4056897

Please note that you also need a microcode update! Reach out to your vendor. I was automatically offered a microcode update for my Lenovo ThinkPad X250.

Update 01-28-2018

Microsoft has published an update to disable the mitigation against Spectre (variant 2) (Source: Microsoft). KB4078130 is available for Windows 7 SP1, Windows 8.1 and Windows 10, and it disables the mitigation against Spectre Variant 2 (CVE-2017-5715) independently via registry setting changes. The registry changes are described in KB4073119.

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 1 /f

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 1 /f

A reboot is required to disable the mitigation.

Windows Server

The necessary security updates are available for Windows Server 2008 R2, Windows Server 2012 R2, Windows Server 2016 and Windows Server Core (1709). The security updates are NOT available for Windows Server 2008 and Windows Server 2012! The January 2018 security updates are ONLY offered in one of these cases (Source: Microsoft):

  • A supported anti-virus application is installed
  • Windows Defender Antivirus, System Center Endpoint Protection, or Microsoft Security Essentials is installed
  • A registry key was added manually

To add this registry key, execute the following in an elevated CMD. Do not add this registry key if you are running an unsupported antivirus application! Please contact your antivirus vendor! The key only has to be added manually if NO antivirus application is installed; otherwise your antivirus application will add it!

reg ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat /f /v cadca5fe-87d3-4b96-b7fb-a231484277cc /t REG_DWORD /d 0
  • Windows Server, version 1709 (Server Core installation): KB4056892
  • Windows Server 2016: KB4056890
  • Windows Server 2012 R2: KB4056898
  • Windows Server 2008 R2: KB4056897

After applying the security update, you have to enable the protection mechanism. This is different from Windows 7, 8.1 or 10! To enable the protection mechanism, you have to add three registry keys:

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 0 /f

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" /v MinVmVersionForCpuBasedMitigations /t REG_SZ /d "1.0" /f

The easiest way to distribute these registry keys is via a Group Policy. In addition, you need a microcode update from your server vendor.
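To verify that the keys arrived on a server, you can query them from an elevated CMD. A sketch; Microsoft's SpeculationControl PowerShell module (Get-SpeculationControlSettings) offers a more thorough check of the effective mitigation state:

```shell
REM Verify the mitigation-related registry values (elevated CMD)
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" /v MinVmVersionForCpuBasedMitigations
```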

Update 01-28-2018

The published update for Windows 7 SP1, 8.1 and 10 (KB4073119) is not available for Windows Server. But the same registry keys apply to Windows Server, so it is sufficient to change the already-set registry keys to disable the mitigation against Spectre Variant 2 (CVE-2017-5715).

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 1 /f

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 1 /f

A reboot is required to disable the mitigation.

VMware vSphere

VMware has published three important VMware Security Advisories (VMSA): VMSA-2018-0002, VMSA-2018-0004, and VMSA-2018-0007.

VMware Workstation Pro, Player, Fusion, Fusion Pro, and ESXi are affected by CVE-2017-5753 and CVE-2017-5715. VMware products seem not to be affected by CVE-2017-5754. On 01/09/2018, VMware published VMSA-2018-0004, which also addresses CVE-2017-5715. Just to make this clear:

  • Hypervisor-Specific Remediation (documented in VMSA-2018-0002.2)
  • Hypervisor-Assisted Guest Remediation (documented in VMSA-2018-0004)

I will focus on vCenter and ESXi. In case of VMSA-2018-0002, security updates are available for ESXi 5.5, 6.0 and 6.5. In case of VMSA-2018-0004, security updates are available for ESXi 5.5, 6.0, 6.5, and vCenter 5.5, 6.0 and 6.5. VMSA-2018-0007 covers VMware virtual appliance updates against side-channel analysis due to speculative execution.

Before you apply any security updates, please make sure that you read this:

  • Deploy the updated version of vCenter listed in the table (only if vCenter is used).
  • Deploy the ESXi security updates listed in the table.
  • Ensure that your VMs are using Hardware Version 9 or higher. For best performance, Hardware Version 11 or higher is recommended.

For more information about Hardware versions, read VMware KB article 1010675.

VMSA-2018-0002.2

  • ESXi 6.5: ESXi650-201712101-SG
  • ESXi 6.0: ESXi600-201711101-SG
  • ESXi 5.5: ESXi550-201709101-SG

In case of ESXi550-201709101-SG it is important to know that this patch mitigates CVE-2017-5715, but not CVE-2017-5753! Please see KB52345 for important information on ESXi microcode patches.

VMSA-2018-0004

  • ESXi 6.5: ESXi650-201801401-BG and ESXi650-201801402-BG
  • ESXi 6.0: ESXi600-201801401-BG and ESXi600-201801402-BG
  • ESXi 5.5: ESXi550-201801401-BG
  • vCenter 6.5: 6.5 U1e
  • vCenter 6.0: 6.0 U3d
  • vCenter 5.5: 5.5 U3g

The patches ESXi650-201801402-BG, ESXi600-201801402-BG, and ESXi550-201801401-BG patch the microcode for supported CPUs. And this is pretty interesting! To enable hardware support for branch target mitigation (CVE-2017-5715 aka Spectre) in vSphere, three steps are necessary (Source: VMware):

  • Update to one of the above listed vCenter releases
  • Update the ESXi 5.5, 6.0 or 6.5 with
    • ESXi650-201801401-BG
    • ESXi600-201801401-BG
    • ESXi550-201801401-BG
  • Apply microcode updates from your server vendor, OR apply these patches for ESXi
    • ESXi650-201801402-BG
    • ESXi600-201801402-BG
    • ESXi550-201801401-BG

In case of ESXi 5.5, the hypervisor and microcode updates are delivered in a single update (ESXi550-201801401-BG).
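If you don't use vSphere Update Manager, the ESXi patch bundles can also be applied with esxcli. A sketch; the datastore path and bundle file name are placeholders for the offline bundle you downloaded:

```shell
# Enter maintenance mode (evacuate running VMs first)
esxcli system maintenanceMode set --enable true

# Install all updated VIBs from the downloaded offline bundle
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi650-201801401.zip

# Reboot to activate the new hypervisor and microcode
reboot
```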

Update 01-13-2018

Please take a look into KB52345 if you are using Intel Haswell and Broadwell CPUs! The KB article includes a table with affected CPUs.

All you have to do is:

  • Update your vCenter to the latest update release, then
  • Update your ESXi hosts with all available security updates
  • Apply the necessary guest OS security updates and enable the protection (Windows Server)

In addition to the required security updates, make sure that you also apply microcode updates from your server vendor!

VMSA-2018-0007

This VMSA, published on 02/08/2018, covers several VMware virtual appliances. Relevant appliances are:

  • vCloud Usage Meter (UM)
  • Identity Manager (vIDM)
  • vSphere Data Protection (VDP)
  • vSphere Integrated Containers (VIC), and
  • vRealize Automation (vRA)
  • UM 3.x: patch pending; workaround: KB52467
  • vIDM 2.x and 3.x: patch pending; workaround: KB52284
  • VDP 6.x: patch pending; no workaround
  • VIC 1.x: update to 1.3.1
  • vRA 6.x: patch pending; workaround: KB52497
  • vRA 7.x: patch pending; workaround: KB52377

HPE ProLiant

HPE has published a customer bulletin (document ID a00039267en_us) with all necessary information:

HPE ProLiant, Moonshot and Synergy Servers – Side Channel Analysis Method Allows Improper Information Disclosure in Microprocessors (CVE-2017-5715, CVE-2017-5753, CVE-2017-5754)

CVE-2017-5715 requires that the System ROM be updated and a vendor-supplied operating system update be applied as well. CVE-2017-5753 and CVE-2017-5754 require only a vendor-supplied operating system update.

Update 01-13-2018

The following System ROMs were previously available but have since been removed from the HPE Support Site due to the issues Intel reported with the microcode updates included in them. Updated revisions of the System ROMs for these platforms will be made available after Intel provides updated microcodes with a resolution for these issues.

Update 01-24-2018

HPE will be releasing updated System ROMs for ProLiant and Synergy Gen10, Gen9, and Gen8 servers including updated microcodes that, along with an OS update, mitigate Variant 2 (Spectre) of this issue. Note that processor vendors have NOT released updated microcodes for numerous processors which gates HPE’s ability to release updated System ROMs.

I will update this blog post as soon as HPE releases new system ROMs.

For most Gen9 and Gen10 models, updated system ROMs are already available. Check the bulletin for the current list of servers for which updated system ROMs are available. Please note that you don’t need a valid support contract to download these updates!

Under Software Type, select “BIOS - (Entitlement Required)”. Note that entitlement is NOT required to download these firmware versions.

Update 02-09-2018

Nothing new. HPE has updated the bulletin on 01-31-2018 with an updated timeline for new system ROMs.

Update 02-25-2018

HPE has published Gen10 system ROMs. Check the advisory: HPE ProLiant, Moonshot and Synergy Servers – Side Channel Analysis Method Allows Improper Information Disclosure in Microprocessors (CVE-2017-5715, CVE-2017-5753, CVE-2017-5754).

Update 04-17-2018

HPE finally published updated System ROMs for several Gen10, Gen9, Gen8, G7 and even G6 models, including bread-and-butter servers like the ProLiant DL360 (G6 to Gen10) and DL380 (G6 to Gen10).

If you are running Windows on your ProLiant, you can use the online ROM flash component for Windows x64. If you are running VMware ESXi, you can use the ROMPaq firmware upgrade for USB key media.

Wrong iovDisableIR setting on ProLiant Gen8 might cause a PSOD

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

TL;DR: There’s a script at the bottom of the page that fixes the issue.

Some days ago, this HPE customer advisory caught my attention:

Advisory: (Revision) VMware – HPE ProLiant Gen8 Servers running VMware ESXi 5.5 Patch 10, VMware ESXi 6.0 Patch 4, Or VMware ESXi 6.5 May Experience Purple Screen Of Death (PSOD): LINT1 Motherboard Interrupt

And there is also a corresponding VMware KB article:

ESXi host fails with intermittent NMI PSOD on HP ProLiant Gen8 servers

It isn’t clear WHY this setting was changed, but in VMware ESXi 5.5 Patch 10, 6.0 Patch 4, 6.0 U3 and 6.5, the Intel IOMMU’s interrupt remapper functionality was disabled. So if you are running these ESXi versions on an HPE ProLiant Gen8, you might want to check if you are affected.

To make it clear again, only HPE ProLiant Gen8 models are affected. No newer (Gen9) or older (G6, G7) models.

Currently there is no resolution, only a workaround. The iovDisableIR setting must be set to FALSE. If it is set to TRUE, the Intel IOMMU’s interrupt remapper functionality is disabled.

To check this setting, you have to SSH into each host and use esxcli to check the current setting:

[root@esxi:~] esxcli system settings kernel list -o iovDisableIR

Name          Type  Description                                 Configured  Runtime  Default
------------  ----  ---------------------------------------     ----------  -------  -------
iovDisableIR  Bool  Disable Interrupt Routing in the IOMMU...   FALSE       FALSE    TRUE

I have written a small PowerCLI script that uses the Get-EsxCli cmdlet to check all hosts in a cluster. The script only checks the setting; it doesn’t change the iovDisableIR setting.
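If a host turns out to be affected, the setting can be corrected with esxcli. A reboot is required before the new value becomes the runtime value:

```shell
# Set iovDisableIR to FALSE (re-enables the IOMMU interrupt remapper)
esxcli system settings kernel set --setting=iovDisableIR --value=FALSE

# Verify the configured value (Runtime changes after the next reboot)
esxcli system settings kernel list -o iovDisableIR
```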

Here’s another script that analyzes and fixes the issue.

HPE ProLiant PowerShell SDK

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Some days ago, my colleague Claudia and I started to work on a new project: a greenfield deployment consisting of some well-known building blocks: HPE ProLiant, HPE MSA, HPE Networking (now Aruba) and VMware vSphere. Nothing new for us, because we have done this a couple of times together. But this led us to the idea of automating some tasks, especially the configuration of the HPE ProLiants: changing BIOS settings and configuring the iLO.

Do not automate what you have not fully understood

Some of the wisest words I have ever said to a customer. Modifying the BIOS and iLO settings is a well-understood task. But if you have to deploy a bunch of ProLiants, this is a monotonous, and therefore error-prone, process. Perfect for automation!

Scripting Tools for Windows PowerShell

To support the automation of HPE ProLiant deployments, HPE offers the Scripting Tools for Windows PowerShell. HPE offers these PowerShell modules free of charge. There are three different downloads:

  • iLO cmdlets
  • BIOS cmdlets
  • Onboard Administrator (OA) cmdlets

The iLO cmdlets include PowerShell cmdlets to configure and manage iLO on HPE ProLiant G7, Gen8 or Gen9 servers. The BIOS cmdlets do not support G7 servers, so you can only configure and manage legacy and UEFI BIOS for Gen8 (except DL580) and all Gen9 models. The OA cmdlets support the configuration and management of the HPE Onboard Administrator, which is used with HPE’s well-known ProLiant BL blade servers. The OA cmdlets need at least OA v3.11, while v4.60 is the latest version available. All you need to get started are:

  • Microsoft .NET Framework 4.5, and
  • Windows Management Framework 3.0 or later

If you are using Windows 8 or 10, you already have PowerShell 4 or PowerShell 5, respectively.

Support for HPE ProLiant Gen9 iLO RESTful API

If you have ever seen an HPE ProLiant Gen9 booting up, you might have noticed the iLO RESTful API icon at the bottom right. Depending on the server model, the BIOS cmdlets utilize the iLO 4 RESTful API. The iLO RESTful API ecosystem is worth its own blog post. Stay tuned.

Documentation and examples

HPE offers simple documentation for the BIOS, iLO and OA cmdlets. You can find it in HPE’s Information Library. Documentation is important, but sometimes example code is necessary to quickly ramp up code. Check HPE’s PowerShell SDK GitHub repository for examples.

Time to code

I’m keen and curious to automate some of my regular deployment tasks with these PowerShell modules. Some of these tasks are always the same:

  • change the power management and other BIOS settings
  • change the network settings of the iLO
  • change the initial password of the iLO administrator account and create additional iLO user accounts

Further automation tasks are not necessarily related to the HPE ProLiant PowerShell SDK, but to PowerShell and VMware PowerCLI in general. PowerShell is great for automating the different aspects and modules of an infrastructure deployment. You can use it to build your own toolbox.

HPE Hyper Converged 380 – A look under the hood

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

In March 2016, HPE CEO Meg Whitman announced a ProLiant-based HCI solution that should be easier to use and cheaper than Nutanix.

This isn’t HPE’s first dance on this floor. In August 2015, HP launched the Hyper Converged 250 System (HC250), which is based on the Apollo server platform. The hardware design of the HC250 comes close to a Nutanix block, because the Apollo platform supports up to four nodes in 2U. Let me make this clear: the Hyper Converged 380 (HC380) is not a replacement for the HC250! And before the HC250, HPE offered the Converged System 200-HC StoreVirtual and 200-HC EVO:RAIL (different models).

The HC380 is based on the ProLiant DL380 Gen9 platform. The DL380 Gen9 is one of the best-selling, if not the best-selling, x86 servers on the market. Instead of developing everything from scratch, HPE built their new HC380 from different, already available HPE products. With one exception: HPE OneView User Experience (UX). It was developed from scratch and consolidates all management and monitoring tasks into a single console. The use of already available components was the reason for the low time-to-market (TTM) of the HC380.

Currently, the HC380 can only run VMware vSphere (HPE CloudSystem uses VMware vSphere). Support for Microsoft Hyper-V and Citrix XenServer will be added later. If you wish to run Microsoft Hyper-V, check the HC250 or wait until it’s supported with the HC380.

What flavor would you like?

The HC380 is available in three editions (use cases):

  • HC380 (Virtualization)
  • HC380 (HPE CloudSystem)
  • HC380 (VDI)

All three use cases are orderable using a single SKU and include two DL380 Gen9 nodes (2U). You can add up to 14 expansion nodes, so you can have up to 16 dual-socket DL380 Gen9 nodes.

Each node comes with two Intel Xeon E5 CPUs. The exact CPU model has to be selected before ordering. The same applies to the memory (128 GB or 256 GB per node, up to 1.5 TB) and disk groups (up to three disk groups, each with 4.5 to 8 TB usable capacity per block, 8 drives either SSD/HDD or all HDD, with a maximum of 25 TB usable per node). The memory and disk group configuration depends on the specific use case (virtualization, CloudSystem, VDI). The same applies to the number of network ports (something between 8x 1 GbE and 6x 10 GbE plus 4x 1 GbE). For VDI, customers can add NVIDIA GRID K1, GRID K2 or Tesla M60 cards.

VMware vSphere 6 Enterprise or Enterprise Plus is pre-installed, and licenses can be bought from HPE. Interesting note from the QuickSpecs:

NOTE: HPE Hyper Converged 380 for VMware vSphere requires valid VMware vSphere Enterprise or higher, and vCenter licenses. VMware licenses can only be removed from the order if it is confirmed that the end-customer has a valid licenses in place (Enterprise License Agreement (ELA), vCloud Air Partner or unused Enterprise Purchasing Program tokens).

Hewlett Packard Enterprise supports VMware vSphere Enterprise, vSphere Enterprise Plus and Horizon on the HPE Hyper Converged 380.

No support for vSphere Standard or Essentials (Plus)! Let’s see how HPE will react to the fact that VMware will phase out vSphere Enterprise licenses.

The server includes 3y/3y/3y onsite support with next-business-day response. Nevertheless, at least 3-year HPE Hyper Converged 380 solution support is required according to the latest QuickSpecs.

What’s under the hood?

As I already mentioned, the HC380 was built from well-known HPE products. Only HPE OneView User Experience (UX) was developed from scratch. OneView User Experience (UX) consolidates the following tasks into a single console (source: QuickSpecs):

  • Virtual machine (VM) vending (create, edit, delete)
  • Hardware/driver and appliance UI frictionless updates
  • Advanced capacity and performance analytics (optional)
  • Backup and restore of appliance configuration details
  • Role-based access
  • Integration with existing LDAP or Active Directory
  • Physical and virtual hardware monitoring

Pretty cool fact: HPE OneView User Experience (UX) will be available for the HC250 later this year. A 2-node cluster consists not only of the two DL380 Gen9 servers, but also of three VMs:

  • HC380 Management VM
  • HC380 OneView VM
  • HC380 Management UI VM

The Management VM is used for VMware vCenter (local install) and HPE OneView for vCenter. You can use a remote vCenter (or a vCenter Server Appliance), but you have to make sure that the remote vCenter has HPE OneView for vCenter integrated. The OneView VM runs HPE OneView for hardware/software management. The Management UI VM runs HPE OneView User Experience.

The shared storage is provided by HPE StoreVirtual VSA. A VSA is running on each node. As you might know, StoreVirtual VSA comes with an all-inclusive license. No need to buy additional licenses. You can have it all: Snapshots, Remote Copy, Clustering, Thin Provisioning, Tiering etc. The StoreVirtual VSA delivers sustainable performance, a good VMware vSphere integration and added value, for example support for Veeam Storage Snapshots.

When dealing with a 2-node cluster, the 25 TB usable capacity per node means in fact 25 TB usable for the whole 2-node cluster. This is because of the Network RAID 1 between the two StoreVirtual VSAs. The data is mirrored between the VSAs. When adding more nodes, the data is striped across the nodes in the cluster (Network RAID 10+2).

Also important in case of the 2-node cluster: the quorum. At least two StoreVirtual VSAs form a cluster. As in every cluster, you need some kind of quorum. StoreVirtual 12.5 added support for an NFSv3-based quorum witness. This is in fact an NFS file share which has to be available to both nodes. This is only supported in 2-node clusters, and I highly recommend using it. I have a customer that uses a Raspberry Pi for this…
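Such a quorum witness is nothing more than a plain NFSv3 export. A minimal sketch of an /etc/exports entry on a Linux NFS server; the export path and the client subnet are placeholders:

```
# /etc/exports on the NFS server (e.g. the mentioned Raspberry Pi)
# Both StoreVirtual VSAs must be able to reach this export
/srv/quorum 192.168.1.0/24(rw,sync,no_root_squash)
```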

Start the engine

You have to meet some requirements before you can start.

  • 1 GbE connections for each node’s iLO and 1 GbE ports
  • 1 GbE or 10 GbE connections for each node’s FlexLOM ports
  • Windows-based computer directly connected to a node (MacOS X or Linux should also work)
  • VMware vSphere Enterprise or Enterprise Plus licenses
  • enough IP addresses and VLANs (depending on the use case)

For general purpose server virtualization, you need at least three subnets and three VLANs:

  • Management
  • vMotion
  • Storage (iSCSI)

Although you have the choice between a flat (untagged) and a VLAN-tagged network design, I would always recommend the VLAN-tagged approach. It’s highly recommended to use multiple VLANs to get the traffic separated. The installation guide includes worksheets and examples to help you plan the deployment. For a 2-node cluster you need at least:

  • 5 IP addresses for the management network
  • 2 IP addresses for the vMotion network
  • 8 IP addresses for the iSCSI storage network

You should leave space for expansion nodes. Proper planning saves you trouble later.

HP OneView InstantOn is used for the automated deployment. It guides you through the necessary configuration steps. HPE says that the deployment requires less than 60 minutes, and all you need to enter are:

  • IP addresses
  • credentials
  • VMware licenses

After the deployment, you have to install the StoreVirtual VSA licenses. Then you can create datastores and, finally, VMs.

[Image: hpehc380_ux. Credit: HPE/ hpw.com]

Summary

Hyper-converged has nothing to do with the form factor. Despite the fact that a 2-node cluster comes in 4U, the HC380 has everything you would expect from an HCIA. Customers will decide whether HPE delivers on its promise. The argument for the HC380 shouldn’t be the lower price compared to Nutanix or other HCI players. Especially, HPE should not repeat the mistake of the HC200 EVO:RAIL: too buggy and too expensive. The HC380 combines known and mature products (ProLiant DL380 Gen9, StoreVirtual VSA, OneView). It’s now up to HPE.

I have several small and mid-sized customers that are running two- to six-node VMware vSphere environments. The HC380 for VDI can also be very interesting.

HP ProLiant BL460c Gen9: MicroSD card missing during ESXi 5.5 setup

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Today, I was at a customer to prepare a two-node vSphere cluster for some MS SQL Server tests. Nothing fancy, just two HP ProLiant BL460c Gen9 blades and two virtual volumes from an HP 3PAR. Each blade had two 400 GB SSDs, two 64 GB M.2 SSDs and a 1 GB MicroSD card. Usually, I install ESXi to an SD card, in this case a MicroSD card. The SSDs were dedicated to PernixData FVP. Although I saw the MicroSD card in the boot menu, ESXi didn’t show it as an installation target.

bl460_no_sdcard

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I’ve read a lot about similar observations from other users, but none of the proposed solutions seemed plausible. Switching from UEFI to legacy BIOS or playing with power management settings is not a solution. BIOS and iLO firmware were up to date. But disabling USB3 support seemed to be a possible fix.

You can disable USB3 support in the RBSU.

bl460_rbsu

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

After a reboot, the MicroSD card appeared as installation target during the ESXi setup.

bl460_with_sdcard

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I never noticed this behaviour with the ProLiant DL360 Gen9, DL380 Gen9, DL560 Gen9 or DL580 Gen9; I only saw it with the BL460c Gen9. The affected blade servers had System ROM I36 05/06/2015 and I36 09/24/2015, as well as iLO4 2.30 installed.
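If you want to double-check from the ESXi shell (instead of the installer) whether the card is visible, you can filter the device list for USB-attached devices. This is a minimal sketch, assuming SSH or the ESXi Shell is enabled on the host; the sample device names in the comments are illustrative, not verified output from these blades.

```shell
#!/bin/sh
# Print device lines that look USB-attached. On the host you would pipe
# the live listing into the filter:
#   esxcli storage core device list | show_usb_devices
# The SD/MicroSD card reader enumerates as a USB device (its display name
# typically contains "USB", e.g. "Local USB Direct-Access"), so it should
# show up here once ESXi can see it.
show_usb_devices() {
  # -i: case-insensitive match, -B 1: include the preceding line,
  # which in esxcli output is the device identifier (e.g. mpx.vmhba32...)
  grep -i -B 1 'usb'
}
```

The filter itself is plain POSIX shell, so it works with the BusyBox grep shipped with ESXi as well.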

Reset the HP iLO Administrator password with hponcfg on ESXi

This posting is ~8 years old. You should keep this in mind. IT is a short living business. This information might be outdated.

Sometimes you need to reset the iLO Administrator password. Sure, you can reboot the server, press F8 and reset the Administrator password there. But if you have installed an HP customized ESXi image, there is a much better way to reset the password: HPONCFG.

Check the /opt/hp/tools directory. You will find a binary called hponcfg.

~ # ls -l /opt/hp/tools/
total 5432
-r-xr-xr-x 1 root root 5129574 Oct 28 2014 conrep
-r--r--r-- 1 root root 108802 Oct 28 2014 conrep.xml
-r-xr-xr-x 1 root root 59849 Jan 16 2015 hpbootcfg
-r-xr-xr-x 1 root root 251 Jan 16 2015 hpbootcfg_esxcli
-r-xr-xr-x 1 root root 232418 Jul 14 2014 hponcfg
-r-xr-xr-x 1 root root 12529 Oct 31 2013 hptestevent
-r-xr-xr-x 1 root root 250 Oct 31 2013 hptestevent_esxcli

All you need is a simple XML file. You can use the vi editor, or you can copy the necessary file with WinSCP to the root home directory on your ESXi host. I prefer vi. Change to the /opt/hp/tools directory, then open pwreset.xml.

~ # vi pwreset.xml

Press i to switch to insert mode, then paste this content into the file. You don’t have to know the current password!

<RIBCL VERSION="2.0">
<LOGIN USER_LOGIN="Administrator" PASSWORD="unknown">
<USER_INFO MODE="write">
<MOD_USER USER_LOGIN="Administrator">
<PASSWORD value="password"/>
</MOD_USER>
</USER_INFO>
</LOGIN>
</RIBCL>

Press ESC and then :wq<ENTER> to save the file and leave vi. Now run HPONCFG with the XML file to reset the password.

~ # /opt/hp/tools/hponcfg -f pwreset.xml
HP Lights-Out Online Configuration utility

Version 4.4-0 (c) Hewlett-Packard Company, 2014
Firmware Revision = 1.85 Device type = iLO 3 Driver name = hpilo
iLO IP Address: 172.16.1.52
Script succeeded

That’s it! You can now log in with “Administrator” and “password”.
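If you prefer to skip the editor entirely, the same reset can be scripted with a here-document. This is a sketch under a few assumptions: the RIBCL payload is exactly the one shown above, /tmp/pwreset.xml is an arbitrary path I chose, and hponcfg lives in /opt/hp/tools as on the HP customized image. Since hponcfg runs locally against the iLO, the PASSWORD value in the LOGIN tag can be any placeholder — as noted above, you don’t need the current password.

```shell
#!/bin/sh
# Write the RIBCL reset script without an interactive editor.
# The quoted 'EOF' delimiter prevents any shell expansion inside the XML.
cat > /tmp/pwreset.xml <<'EOF'
<RIBCL VERSION="2.0">
<LOGIN USER_LOGIN="Administrator" PASSWORD="unknown">
<USER_INFO MODE="write">
<MOD_USER USER_LOGIN="Administrator">
<PASSWORD value="password"/>
</MOD_USER>
</USER_INFO>
</LOGIN>
</RIBCL>
EOF

# Then, on the ESXi host (HP customized image), apply it:
#   /opt/hp/tools/hponcfg -f /tmp/pwreset.xml
```

Replace “password” with the password you actually want to set before running it.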

HP Service Pack for ProLiant 2015.04

This posting is ~8 years old. You should keep this in mind. IT is a short living business. This information might be outdated.

Some weeks ago, HP published an updated version of their HP Service Pack for ProLiant (SPP). SPP 2015.04.0 adds

  • support for new HP ProLiant servers and options,
  • support for Red Hat Enterprise Linux 6.6, SUSE Linux Enterprise Server 12, VMware vSphere 5.5 U2 and (of course) VMware vSphere 6.0,
  • HP Smart Update Manager v7.2.0,
  • an HP USB Key Utility for Windows v2.0.0.0 that can handle downloads greater than 4 GB (important, because this release may not fit on standard DVD media…), and
  • select Linux firmware components in RPM format.

In addition, the SPP covers two important customer advisories:

  • ProLiant Gen9 Servers – SYSTEM ROM UPDATE REQUIRED to Prevent Memory Subsystem Anomalies on Servers With DDR4 Memory Installed Due to Intel Processor BIOS Upgrades (c04542689)
  • HP Virtual Connect (VC) – Some VC Flex-10/10D Modules for c-Class BladeSystem May Shut Down When Running VC Firmware Version 4.20 or 4.30 Due to an Erroneous High Temperature Reading (c04459474)

Two CAs fixed, but another CA arose (and it’s an ugly one…):

  • HP OneView 1.20 – Upgrading Virtual Connect Version 4.40 with Service Pack for ProLiant (SPP) 2015.04.0 Will Result in a Configuration Error and an Inability to Manage the HP Virtual Connect 8Gb 24-Port Fibre Channel Module (c04638459)

If you are using HP OneView >1.10 and 1.20.04, you will be unable to manage HP Virtual Connect 8Gb 24-port Fibre Channel Modules after updating the module to firmware version 3.00 or later. This is also the case if you use the smart components from Virtual Connect firmware version 4.40! After the update, the VC module will enter a “Configuration Error” state. Currently, there is no fix. The only workaround is not to update the HP Virtual Connect 8Gb 24-port Fibre Channel Module to firmware version 3.00. This will be fixed in a future HP OneView 1.20 patch.

Important to know: with this release, the SPP may no longer fit on standard DVD media! But to be honest: I’ve never burned the SPP to DVD; I’ve always used USB media.

Check the release notes for more information about this SPP release. You can download the latest SPP version from the HP website. You need an active warranty or HP support agreement to download the SPP.