Tag Archives: hpe

Upgrade to ESXi 7.0: Missing dependencies VIBs Error

This posting is ~1 year old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

This error gets me from time to time, regardless of the server vendor, mostly on hosts that have been upgraded a couple of times. In this case it was an ESXi host running a pretty old build of ESXi 6.7 U3, and my job was to upgrade it to 7.0 Update 3c.

If you add an upgrade baseline to the cluster or host and try to remediate the host, the task fails with a dependency error. Taking a closer look at the task details, you are told that the task failed because of a missing dependency, but not which VIB caused it.

You can find the name of the causing VIB on the Update Manager tab of the host you tried to update. The status of the baseline is “incompatible”, not “non-compliant”.

To resolve this issue you have to remove the causing VIB. This is no big deal and can be done with esxcli. Enable SSH, open an SSH connection to the host, and remove the VIB.

[root@esxi:~] esxcli software vib list | grep -i ssacli
ssacli                         4.17.6.0-6.7.0.7535516.hpe          HPE        PartnerSupported  2020-06-18
[root@esxi:~] esxcli software vib remove -n ssacli
Removal Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true
   VIBs Installed:
   VIBs Removed: HPE_bootbank_ssacli_4.17.6.0-6.7.0.7535516.hpe
   VIBs Skipped:
[root@esxi:~]

You need to reboot the host after the removal of the VIB. Then you can proceed with the update. The status of the upgrade baseline should now be “non-compliant”.
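
If you want to check proactively for vendor-specific VIBs that might block an upgrade, a quick listing helps. This is only a minimal sketch; the grep pattern is just an example, and the exact VIB names depend on the vendor image:

esxcli software vib list | grep -i hpe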

VMware ESXi 6.7 memory health warnings after ProLiant SPP

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

During the deployment of a vSAN cluster consisting of multiple HPE ProLiant DL380 Gen10 hosts, I noticed a memory health warning after updating the firmware using the Service Pack for ProLiant (SPP). The error was definitely not shown before the update, so it was clear that this was not a real issue with the hardware. Furthermore: all hosts showed this error.

Memory health status after SPP

The same day, a customer called me and asked about a strange memory health error after he had updated all of his hosts with the latest SPP…

My first guess, that this was not caused by a hardware malfunction, was correct. HPE published an advisory about this issue:

The Memory Sensor Status Reported in the vSphere Web Client Is Not Accurate For HPE ProLiant Gen10 and Gen10 Plus Servers Running VMware ESXi 6.5/6.7/7.0 With HPE Integrated Lights-Out 5 (iLO 5) Firmware Version 2.30

To fix this issue, you have to update the iLO 5 firmware to version 2.31. You can do this manually using the iLO 5 web interface, or you can add the file to the SPP. I added the BIN file to the USB stick with the latest SPP.

If you want to update the firmware manually, simply upload the BIN file using the built-in firmware update function.

  1. Navigate to Firmware & OS Software in the navigation tree, and then click Update Firmware
  2. Select the Local file option and browse to the BIN file
  3. To save a copy of the component to the iLO Repository, select the Also store in iLO Repository check box
  4. To start the update process, click Flash

You can download the latest iLO 5 firmware 2.31 from HPE using this link. After the firmware update, the error will resolve itself.

Only ESXi 6.7 is affected, and only ESXi 6.7 running on HPE ProLiant hosts, regardless of whether it is the ML, DL or BL series.

Virtually reseated: Reset blade in a HPE C7000 enclosure

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

After a reboot, a VMware ESXi 6.7 U3 host told me that it had no compatible NICs. Fun fact: right before the reboot everything was fine.

The iLO also showed no NICs. Unfortunately, I wasn't onsite to pull the blade server and put it back in. But there is a way to do this “virtually”.

You have to connect to the IP address of the Onboard Administrator (OA) via SSH, and then issue the reset server command with the number of the device bay you want to reset.

OA1-C7000> reset server 13

WARNING: Resetting the server trips its E-Fuse. This causes all power to be momentarily removed from the server. This command should only be used when physical access to the server is unavailable, and the server must be removed and
reinserted.

Any disk operations on direct attached storage devices will be affected. I/O
will be interrupted on any direct attached I/O devices.

Entering anything other than 'YES' will result in the command not executing.

Do you want to continue ? yes

Successfully reset the E-Fuse for device bay 13.

The server will power up automatically. Please note that the OA is unable to display certain information right after this operation. It will take a couple of minutes until all information, like serial number or device bay name, is visible again.

VMware ESXi 6.7: Recurring host hardware sensor state alarm

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

If you found this blog post because you are searching for a solution to a FAN FAILURE on your ProLiant Gen10 hardware after applying the latest ESXi 6.7 patches, then use this shortcut to the workaround: Fan health sensors report false alarms on HPE Gen10 Servers with ESXi 6.7


I had a really annoying problem at one of my customers. After deploying new VMware ESXi hosts (HPE ProLiant DL380 Gen10) along with an upgrade of the vCenter Server Appliance to 6.7 U2, the customer reported recurring host hardware sensor state alarm messages in the vCenter for all hosts.

After acknowledging the alarm, it recurred after a couple of minutes or hours. The hardware was fine, no errors or warnings were logged in the iLO Management Log. But vCenter periodically reported a Sensor -1 type error in the Events window. The /var/log/syslog.log contained messages like this:

2019-11-29T04:39:48Z sfcb-vmw_ipmi[4263212]: IpmiIfcSelGetInfo: IPMI_CMD_GET_SEL_INFO cc=0xc1
2019-11-29T04:39:49Z sfcb-vmw_ipmi[4263212]: IpmiIfcSelGetInfo: IPMI_CMD_GET_SEL_INFO cc=0xc1
2019-11-29T04:39:50Z sfcb-vmw_ipmi[4263212]: IpmiIfcSelGetInfo: IPMI_CMD_GET_SEL_INFO cc=0xc1
2019-11-29T04:39:51Z sfcb-vmw_ipmi[4263212]: IpmiIfcSelGetInfo: IPMI_CMD_GET_SEL_INFO cc=0xc1
2019-11-29T04:39:52Z sfcb-vmw_ipmi[4263212]: IpmiIfcSelGetInfo: IPMI_CMD_GET_SEL_INFO cc=0xc1

Sure, you can ignore this. But you shouldn't, because these events can cause the vCenter database to grow in size. vCenter can crash once the SEAT partition usage goes above the 95% threshold. So you better fix this!
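
If you want to check how much headroom is left, take a look at the SEAT partition of the vCenter Server Appliance. A minimal sketch, assuming a VCSA 6.x, where the SEAT data lives under /storage/seat:

df -h /storage/seat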

Long story short: this bug is fixed with the November 2019 updates for ESXi 6.7 U3. A workaround is to disable the WBEM service. Note that the WBEM service might be enabled again after a reboot; in this case, you have to disable the sfcbd-watchdog service as well.
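
For reference, the workaround could look like this on the ESXi shell. A minimal sketch; verify the current state with esxcli system wbem get before and after:

esxcli system wbem set --enable false
/etc/init.d/sfcbd-watchdog stop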

But the best way to solve this is to install the latest patches (VMware ESXi 6.7, Patch Release ESXi670-201911001).

Veeam and StoreOnce: Wrong FC-HBA driver/ firmware causes Windows BSoD

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

One of my customers bought a very nice new backup solution, which consists of a

  • HPE StoreOnce 5100 with ~ 144 TB usable capacity,
  • and a new HPE ProLiant DL380 Gen10 with Windows Server 2016

as the new backup server. StoreOnce and backup server will be connected with 8 Gb Fibre-Channel and 10 GbE to the existing network and SAN. Veeam Backup & Replication 9.5 U3a is already in use, as well as VMware vSphere 6.5 Enterprise Plus. The backend storage is an HPE 3PAR 8200.

This setup allows the usage of Catalyst over Fibre-Channel together with Veeam storage snapshots, and this is exactly what we intended to use.

I wrote about a similar setup some months ago: Backup from a secondary HPE 3PAR StoreServ array with Veeam Backup & Replication.

The OS on the StoreOnce was up-to-date (3.16.7), and Windows Server 2016 was installed using HPE Intelligent Provisioning. Afterwards, drivers and firmware were updated using the latest SPP 2018.11. So all drivers and firmware were also up-to-date.

After doing the zoning and some other configuration tasks, I installed Veeam Backup & Replication 9.5 U3 and configured my Catalyst over Fibre-Channel repository. I configured a test backup… and the server failed with a Blue Screen of Death… which is pretty rare since Server 2008 R2.


I did some tests:

  • backup from 3PAR Storage Snapshots to Catalyst over FC repository – BSoD
  • backup without 3PAR Storage Snapshots to Catalyst over FC repository – BSoD
  • backup from 3PAR Storage Snapshots to Catalyst over LAN repository – works fine
  • backup without 3PAR Storage Snapshots to Catalyst over LAN repository – works fine
  • backup from 3PAR Storage Snapshots to default repository – works fine
  • backup without 3PAR Storage Snapshots to default repository – works fine

So the error had to be caused by the usage of Catalyst over Fibre-Channel. I filed a case with HPE, uploaded gigabytes of memory dumps, and heard pretty little during the next week.

HPE StoreOnce Support Matrix FTW!

After a week, I got an email from HPE support with a question about the installed HBA driver and firmware. I told them the version numbers, and a day later I was asked to downgrade (!) drivers and firmware.

The customer has an SN1100Q (P9D93A & P9D94A) HBA in his backup server, and I was asked to downgrade the firmware to version 8.05.61, as well as the driver to 9.2.5.20. And with this firmware and driver version, the backup was running fine (~ 750 MB/s throughput).

I found the HPE StoreOnce Support Matrix on HPE's SPOCK website. The matrix confirmed the firmware and driver version requirement.

Fun fact: None of the listed HBAs (except the Synergy HBAs) is supported with the latest StoreOnce G2 products.

Lessons learned

You should take a look at those support matrices – always! HPE confirmed that the first-level recommendation “Have you tried updating to the latest firmware?” can cause problems like this. The fact that the factory ships the server with the latest firmware does not make this easier.

Backup from a secondary HPE 3PAR StoreServ array with Veeam Backup & Replication

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

When taking a backup with Veeam Backup & Replication, a VM snapshot is created to get a consistent state of the VM. The snapshot is taken prior to the backup, and it is removed after the successful backup of the VM. The snapshot grows during its lifetime, and you should keep in mind that you need some free space in the datastore for snapshots. This can be a problem, especially in the case of multiple VM backups at a time, and if the VMs share the same datastore.

Benefit of storage snapshots

If your underlying storage supports the creation of storage snapshots, Veeam offers an additional way to create a consistent state of the VMs. In this case, a storage snapshot is taken, which is presented to the backup proxy, and is then used to backup the data. As you can see: No VM snapshot is taken.

Now one more thing: if you have a replication or synchronous mirror between two storage systems, Veeam can do this operation on the secondary array. This is pretty cool, because it takes load off your primary storage!

Backup from a secondary HPE 3PAR StoreServ array

Last week I was able to try something new: backup from a secondary HPE 3PAR StoreServ array. A customer has two HPE 3PAR StoreServ 8200 in a Peer Persistence setup, an HPE StoreOnce, and a physical Veeam backup server, which also acts as Veeam proxy. Everything is attached to a pretty nice 16 Gb dual-fabric SAN. The customer uses Veeam Backup & Replication 9.5 U3a. The data was taken from the secondary 3PAR StoreServ and pushed via FC into a Catalyst store on the StoreOnce. Using the Catalyst API allows my customer to use synthetic full backups, because their creation is offloaded to the StoreOnce. This setup is dramatically faster and better than the prior solution based on Micro Focus Data Protector. Okay, that backup solution was designed at another time, with other priorities and requirements; it was a perfect fit at the time it was designed.

This blog post from Veeam pointed me to this new feature: Backup from a secondary HPE 3PAR StoreServ array. Until I found this post, the plan was to use “traditional” storage snapshots, taken from the primary 3PAR StoreServ.

With this feature enabled, Veeam takes the snapshot on the 3PAR StoreServ that is hosting the synchronously mirrored virtual volume. The following graphic was created by Veeam and shows the backup workflow.

Veeam/ Backup process/ Copyright by Veeam

My tests showed that it's blazing fast, pretty easy to set up, and that it takes unnecessary load off the primary storage.

In essence, there are only three steps to do:

  • add both 3PARs to Veeam
  • add the registry value and restart the Veeam Backup Server Service
  • enable the usage of storage snapshots in the backup job

How to enable this feature?

To enable this feature, you have to add a single registry value on the Veeam backup server, and afterwards restart the Veeam Backup Server service.

  • Location: HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\
  • Name: Hp3PARPeerPersistentUseSecondary
  • Type: REG_DWORD (0 False, 1 True)
  • Default value: 0 (disabled)
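
If you prefer the command line over regedit, the value can be set like this. A minimal sketch from an elevated command prompt; VeeamBackupSvc is the usual name of the Veeam Backup Service, but verify it on your backup server (e.g. with sc query) before restarting:

reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v Hp3PARPeerPersistentUseSecondary /t REG_DWORD /d 1 /f
net stop VeeamBackupSvc && net start VeeamBackupSvc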

Thanks to Pierre-Francois from Veeam for sharing his knowledge with the community. Read his blog post Backup from a secondary HPE 3PAR StoreServ array for additional information.

DOT1X authentication failed on HPE OfficeConnect 1920 switches

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

The last two days, I supported a customer during the implementation of 802.1x. His network consisted of HPE/ Aruba ProVision and some HPE Comware switches. Two RADIUS servers with appropriate policies were already in place. The configuration and testing with the ProVision-based switches was pretty simple. The Comware-based switches, in this case OfficeConnect 1920, gave me more of a headache.


The customer already had MAC authentication running, so all I had to do was enable 802.1x on the desired ports of the OfficeConnect 1920. The laptop which I used to test the connection was already configured and worked flawlessly when plugged into an 802.1x-enabled port on a ProVision-based switch. The OfficeConnect 1920 simply wrote a failure to its log and the authentication failed. The RADIUS server did not log any failure, so I was quite sure that the switch caused the problem.

DOT1X/6/DOT1X_AUTH_FAILURE: -IfName=GigabitEthernet1/0/1-UserName=DOM\USERNAME; DOT1X authentication failed

After double-checking all settings using the web interface of the switch, I used the CLI to check some more settings. Unfortunately, the OfficeConnect 1920 is a smart-managed switch and provides only a very, very limited CLI. Fortunately, there is a developer access that enables the full Comware CLI. You can enable the full CLI by entering

_cmdline-mode on

after logging into the limited CLI. You can find the password using your favorite internet search engine. ;)

Solution

While poking around in the CLI, I stumbled over this option, which is entered in the interface context:

[1920-GigabitEthernet1/0/1] dot1x mandatory-domain RADIUS

RADIUS is the authentication domain used on this switch. The command specifies that the authentication domain RADIUS has to be used for 802.1x authentication requests. Otherwise, the switch would use the default authentication domain SYSTEM, which causes the switch to try to authenticate users against the local user database.
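
For context, the whole port configuration could look like this. A minimal sketch, assuming that 802.1x is already enabled globally and that the RADIUS scheme and the authentication domain RADIUS are already configured:

[1920] interface GigabitEthernet1/0/1
[1920-GigabitEthernet1/0/1] dot1x
[1920-GigabitEthernet1/0/1] dot1x mandatory-domain RADIUS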

I have not found any way to specify this setting using the web GUI! If you know how, or if you can provide additional information about this “issue”, please leave a comment.

HPE Networking expert level certifications

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

A couple of days ago, I took the HP0-Y47 exam “Deploying HP FlexNetwork Core Technologies”. It was one of two required exams to achieve the HPE ASE – Data Center Network Integrator V1 and the HP ASE – FlexNetwork Integrator V1 certifications. It was a long-planned upgrade of my HP ATP certification, and it is a necessary certification for the HPE partner status of my employer.

You might find it confusing that I'm talking about an HP ASE and an HPE ASE. That is not a typo. The HP ASE was released prior to the HP/ HPE split, the HPE ASE after it.

The HP/ HPE ATP is a professional-level certification, comparable to the Cisco Certified Network Associate (CCNA). The HP/ HPE ASE is an expert-level certification, so the typical candidate for an HP/ HPE ASE certification is a professional with three to five years of experience in designing and architecting complex enterprise-level networks.

Requirements

There are different ways to achieve this certification. Regardless of the way you choose, you need a certification from which you can upgrade. This does not have to be an HP/ HPE certification! If you hold a valid CCNA/ CCNP or JNCIP-ENT, you can upgrade from this certification without the need for a valid HP/ HPE ATP Networking certification.

If you want to earn the HPE ASE – Data Center Network Integrator V1 and the HP ASE – FlexNetwork Integrator V1 certifications in a single step, you need at least one of these certifications:

  • HP ATP – FlexNetwork Solutions V3
  • HPE ATP – Data Center Solutions V1

Or if you want to upgrade from a non-HP/ HPE certification:

  • Cisco – CCNP (any CCNP regardless of technology)
  • Cisco – Certified Design Professional (CCDP)
  • Juniper – JNCIP-ENT

Now you need to pass two exams:

HP2-Z34 (Building HP FlexFabric Data Centers)

The HP2-Z34 exam focuses on the deployment and implementation of HPE FlexFabric Data Center solutions. Therefore, the exam covers topics like

  • Multitenant Device Context (MDC)
  • Data Center Bridging (DCB)
  • Multiprotocol Label Switching (MPLS)
  • Fibre Channel over Ethernet (FCoE)
  • Ethernet Virtual Interconnect (EVI)
  • Multi-Customer Edge (MCE)
  • Transparent Interconnection of Lots of Links (TRILL)
  • Shortest Path Bridging Mac-in-Mac mode (SPBM)

HPE offers a study guide to prepare for this exam: Building HP FlexFabric Data Centers (HP2-Z34 and HP0-Y51). I used this guide (as an eBook) to prepare for the exam. The guide was of average quality: sufficient to prepare for the exam, but I used other materials to get a better understanding of some topics.

HP2 exams are web-based exams. To pass the HP2-Z34 exam, I had to answer 60 questions in 105 minutes, with a passing score of 70%. The exam was quite demanding, especially if you don’t have much real-world experience with some of the covered topics.

HP0-Y47 (Deploying HP FlexNetwork Core Technologies)

The HP0-Y47 exam covers the configuration, implementation, and troubleshooting of enterprise-level HPE FlexNetwork solutions. The exam includes different topics, e.g.

  • Quality of Service (QoS)
  • redundancy (VRRP, Stacking)
  • multicast routing (IGMP, PIM)
  • dynamic routing (OSPF, BGP)
  • ACLs, and
  • port authentication/ port security (Mac-auth, Web-auth, 802.1x)

I used the HP ASE FlexNetwork Solutions Integrator (HP0-Y47) study guide to prepare for the exam. Unfortunately, it had the same average quality as the HP2-Z34 guide: good enough to pass the exam, but don't expect too much.

HP0-Y47 is a proctored exam. I had to answer 55 questions in 150 minutes, with a passing score of 65%. The exam is not very hard if you are familiar with the covered topics. Experience with ProVision and Comware is absolutely necessary, because both platforms have their peculiarities, e.g. the processing of ACLs, differences in stacking technologies, commands, STP support etc.

It took me some time to prepare for both exams, despite the fact that I work with ProVision and Comware switches every day. So I'm pretty happy that I passed both exams on the first try.

vSphere Distributed Switch health check fails on HPE Comware switches

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

During the replacement of some VMware ESXi hosts at a customer, I discovered a recurring failure of the vSphere Distributed Switch health checks. A VLAN and MTU mismatch was reported. On the physical side, the ESXi hosts were connected to two HPE 5820 switches, which were configured as an IRF stack. Inside the VMware bubble, the hosts were sharing a vSphere Distributed Switch.


The switch ports of the old ESXi hosts were configured as Hybrid ports. The switch ports of the new hosts were configured as Trunk ports, to streamline the switch and port configuration.

Some words about port types

Comware knows three different port types:

  • Access
  • Hybrid
  • Trunk

If you are familiar with Cisco, you will know Access and Trunk ports. If you are familiar with HPE ProCurve or Alcatel-Lucent Enterprise, these two port types refer to untagged and tagged ports.

So what is a Hybrid port? A Hybrid port can belong to multiple VLANs, in which it can be untagged or tagged. Yes, multiple untagged VLANs on a port are possible, but the switch needs additional information to bridge the traffic into the correct untagged VLANs. This additional information can be MAC addresses, IP addresses, LLDP-MED etc. Typically, hybrid ports are used in VoIP deployments.

The benefit of a Hybrid port is that I can put the native VLAN of a specific port, which is often referred to as the Port VLAN identifier (PVID), as a tagged VLAN on that port. This configuration allows all dvPortGroups to have a VLAN tag assigned, even if the VLAN tag represents the native VLAN of a switch port.
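
A matching Hybrid port configuration could look like this. A minimal sketch with example VLAN IDs: an unused VLAN 999 becomes the PVID, so that VLAN 1 can be carried tagged to the host:

[ToR-Ten-GigabitEthernet1/0/9] port link-type hybrid
[ToR-Ten-GigabitEthernet1/0/9] port hybrid pvid vlan 999
[ToR-Ten-GigabitEthernet1/0/9] port hybrid vlan 1 to 3 5 to 7 100 to 109 tagged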

Failing health checks

A failed health check raises a vCenter alarm. In my case, VLAN and MTU alarms were reported. In both cases, VLAN 1 was causing the error. According to VMware, the three main causes for failed health checks are:

  • Mismatched VLAN trunks between a vSphere distributed switch and physical switch
  • Mismatched MTU settings between physical network adapters, distributed switches, and physical switch ports
  • Mismatched virtual switch teaming policies for the physical switch port-channel settings.

Let’s take a look at the port configuration on the Comware switch:

#
interface Ten-GigabitEthernet1/0/9
 port link-mode bridge
 description "ESX-05 NIC1"
 port link-type trunk
 port trunk permit vlan all
 stp edged-port enable
#

As you can see, this is a normal trunk port. All VLANs will be passed to the host. This is an excerpt from the display interface Ten-GigabitEthernet1/0/9 output:

 PVID: 1
 Mdi type: auto
 Port link-type: trunk
  VLAN passing  : 1(default vlan), 2-3, 5-7, 100-109
  VLAN permitted: 1(default vlan), 2-4094
  Trunk port encapsulation: IEEE 802.1q

The native VLAN is 1, which is the default configuration. Traffic that is received and sent on a trunk port is always tagged with the VLAN ID of the originating VLAN – except traffic from the default (native) VLAN! This traffic is sent without a VLAN tag, and if frames are received with a VLAN tag, these frames will be dropped!

If you have a dvPortGroup for the default (native) VLAN, and this dvPortGroup sends tagged frames, the frames will be dropped if you use a “standard” trunk port. And this is why the health check fails!

Ways to resolve this issue

In my case, the dvPortGroup was configured for VLAN 1, which is the default (native) VLAN on the switch ports.

There are two ways to solve this issue:

  • Remove the VLAN tag from the dvPortGroup configuration
  • Change the PVID for the trunk port

To change the PVID for a trunk port, you have to enter the following command in the interface context:

[ToR-Ten-GigabitEthernet1/0/9] port trunk pvid vlan 999

You have to change the PVID on all ESXi-facing switch ports. You can use a non-existent VLAN ID for this.

The vSphere Distributed Switch health checks for VLAN and MTU will switch to green immediately.

Please note that this is not the solution for all VLAN-related problems. You should make sure that you don't get any side effects.

Meltdown & Spectre: What about HPE Storage and Citrix NetScaler?

This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

In addition to my shortcut blog post about Meltdown and Spectre with regard to Microsoft Windows, VMware ESXi and vCenter, and HPE ProLiant, I would like to add some additional information about HPE Storage and Citrix NetScaler.

When we talk about Meltdown and Spectre, we are talking about three different vulnerabilities:

  • CVE-2017-5715 (branch target injection)
  • CVE-2017-5753 (bounds check bypass)
  • CVE-2017-5754 (rogue data cache load)

CVE-2017-5715 and CVE-2017-5753 are known as “Spectre”, CVE-2017-5754 is known as “Meltdown”. If you want to read more about these vulnerabilities, please visit meltdownattack.com.

Due to the fact that different CPU platforms are affected, one might guess that other devices, like storage systems or load balancers, are affected as well. Because of my focus, this blog post will concentrate on HPE Storage and Citrix NetScaler.

HPE Storage

HPE has published a searchable and continuously updated list of products that might be affected (Side Channel Analysis Method allows information disclosure in Microprocessors). Interestingly, a product can be impacted, but not vulnerable.

Product                        Impacted   Comment
Nimble Storage                 Yes        Fix under investigation
StoreOnce                      Yes        Not vulnerable – product doesn’t allow arbitrary code execution
3PAR StoreServ                 Yes        Not vulnerable – product doesn’t allow arbitrary code execution
3PAR Service Processor         Yes        Not vulnerable – product doesn’t allow arbitrary code execution
3PAR File Controller           Yes        Vulnerable – further information forthcoming
MSA                            Yes        Not vulnerable – product doesn’t allow arbitrary code execution
StoreVirtual                   Yes        Not vulnerable – product doesn’t allow arbitrary code execution
StoreVirtual File Controller   Yes        Vulnerable – further information forthcoming

The File Controllers are vulnerable because they are based on Windows Server.

So if you are running 3PAR StoreServ, MSA, StoreOnce or StoreVirtual: Relax! If you are running Nimble Storage, wait for a fix.

Citrix NetScaler

Citrix has also published an article with information about their products (Citrix Security Updates for CVE-2017-5715, CVE-2017-5753, CVE-2017-5754).

The article is a bit vague in its statements:

Citrix NetScaler (MPX/VPX): Citrix believes that currently supported versions of Citrix NetScaler MPX and VPX are not impacted by the presently known variants of these issues.

Citrix believes… So there is nothing to do yet if you are running MPX or VPX appliances. But future updates might come.

The case is a bit different when it comes to the NetScaler SDX appliances.

Citrix NetScaler SDX: Citrix believes that currently supported versions of Citrix NetScaler SDX are not at risk from malicious network traffic. However, in light of these issues, Citrix strongly recommends that customers only deploy NetScaler instances on Citrix NetScaler SDX where the NetScaler admins are trusted.

No fix so far, only a recommendation to check your processes and admins.