Category Archives: Networking

Citrix Certified Professional – Networking (CCP-N) exam experience


Last Friday I passed the 1Y0-351 (Citrix NetScaler 10.5 Essentials and Networking) exam with a pretty good score. The exam was necessary, not only because I will be doing many more NetScaler projects in the future, but also because Citrix has made it mandatory to have a CCP-N in your company to sell Citrix NetScaler.

Preparation

My employer booked me a 5-day course (CNS-220 Citrix NetScaler Essentials and Traffic Management). Very nice, although I already had experience with NetScaler deployments. This training was designed for NetScaler 12.0, not for 10.5.

A training course might be recommended to prepare for an exam, but usually it is not sufficient to pass it. I wanted to pass the exam on the first try, so I took a closer look into the Citrix NetScaler 10.5 Essentials and Networking Preparation Guide.

In addition to the student and lab material, I deployed three NetScaler VPX (10.5, 11.1 and 12.0) in my lab. I really recommend this, especially to learn the CLI and how to read the log files.

The exam


The exam 1Y0-351 is focused on NetScaler 10.5 and will not be available after January 19, 2018. The successor of this exam is 1Y0-340, which is based on NetScaler 12.0 and has been available since October 20, 2017. You might have noticed that my course was designed for 12.0, but I took the 10.5 exam. Well, I could not identify a single question that would have had to be answered differently for NetScaler 12.0. But I really recommend taking the exam that matches your course.

You have to answer 72 questions in 120 minutes. I got 30 minutes extra because I'm a non-native English speaker. I had to answer two surveys before the exam; one of them was a self-assessment of my NetScaler skills.

The questions were pretty fair: no trick questions, and no questions where multiple answers seemed to be correct. The exam met the exam objectives from the prep guide. And because I already wrote it: you really should work with the CLI, and you really should know the important logs.

In sum: a challenging, but pretty fair exam. No marketing, no factual knowledge from spec sheets etc. If you are quite familiar with NetScalers, there is a good chance of passing the exam on the first attempt.

Notes about 802.1x and MAC authentication


Open network ports in offices, waiting rooms and entrance halls make me curious. Sometimes I want to plug in a network cable, just to see if I get an IP address. I know many companies that do not care about network access control: anybody can plug any device into the network. When talking with customers about network access control, or port security, I often hear complaints about complexity. It's too complex to implement, too hard to administer. But it is not that complex. In the simplest setup (with MAC authentication), you need a switch that can act as an authenticator, and an authentication server. And IEEE 802.1x is not much more complicated.

A brief overview over IEEE 802.1x

IEEE 802.1X offers authentication and authorization in wired or wireless networks. The supplicant (client) requests access to the network by providing a username/password, or a digital certificate, to the authenticator (switch). The authenticator forwards the provided credentials to the authentication server (mostly RADIUS or DIAMETER). The authentication server verifies the credentials and decides whether the supplicant is allowed to access the network.

802.1x uses the Extensible Authentication Protocol (EAP, RFC 5247) for authentication. Because EAP is a framework, there are different implementations, like EAP Transport Layer Security (EAP-TLS) or EAP with a pre-shared key (EAP-PSK). And because it is only a framework, each protocol that uses EAP has to encapsulate it. A typical encapsulation is EAP over LAN (EAPOL), which is what 802.1x uses; RADIUS and DIAMETER can also carry EAP. Protected EAP (PEAP) encapsulates the EAP traffic in a TLS tunnel. PEAP is typically used as a replacement for EAP in EAPOL, or with RADIUS or DIAMETER.

Figure: IEEE 802.1x EAP (Wikipedia, public domain)

So far nothing special. It's more a security thing, but an important one, if you ask me. But many customers avoid 802.1x because of its complexity. It's perfect for keeping you out of your own network if something fails. And not all devices can act as a supplicant.

But there is another benefit of 802.1x: RADIUS Access-Accept messages can be used to dynamically assign VLAN memberships (RFC 2868, RFC 3580). To assign a VLAN membership to the port to which a supplicant is connected, the RADIUS server adds three attributes to the Access-Accept message:

  • Tunnel-Type (VLAN)
  • Tunnel-Medium-Type (802)
  • Tunnel-Private-Group-Id (VLAN ID)

The authenticator uses these attributes to dynamically assign a VLAN to the port to which the supplicant is connected.
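
As an illustration, this is roughly how these three attributes could look in a FreeRADIUS users file (user name, password and VLAN ID are placeholders; on a Windows NPS the same attributes can be set in the network policy):

    alice   Cleartext-Password := "SecretPassword"
            Tunnel-Type = VLAN,
            Tunnel-Medium-Type = IEEE-802,
            Tunnel-Private-Group-Id = 20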

MAC authentication

How does MAC authentication fit into this? If a client does not support 802.1x, the authenticator can use the MAC address of the connected device as username and password. The RADIUS server can use these credentials to authenticate the connected device. If you use a Windows-based RADIUS server (the Windows Server NPS role), you have to create a user object in your Active Directory or local user database that uses the MAC address as username and password. Depending on the switch configuration, the format of the username differs (xx:xx:xx:xx:xx:xx or xxxxxx-xxxxxx etc.). It's a security fail, right? Yes, it is. So please:

  • Use MAC authentication only when needed, and
  • make sure that your authenticator uses PEAP

PEAP uses a TLS tunnel to protect the CHAP messages.

Another important part is your authentication server, mostly a RADIUS or DIAMETER server. Make sure that it is highly available. You should have at least two authentication servers. I would not load balance them through a load balancer (Citrix NetScaler etc.); simply add two authentication servers to your switch configuration. If your authentication server uses a user database, like Microsoft Active Directory, make sure that this database is also highly available. As I said: it is perfect for keeping you out of your own network.

Sample config for ArubaOS (HPE ProVision based switches)

Here's a sample config for an Aruba 2920 switch, running ArubaOS WB.16.04. 802.1x and MAC authentication are configured for ports 1 to 5. If the authentication fails, VLAN 999 will be assigned to the port; VLAN 999 serves as the unauth VLAN for unauthenticated clients.
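
A sketch of such a configuration in ArubaOS-Switch/ProVision syntax (the RADIUS server address and key are placeholders, and the exact command set can differ between firmware releases):

    radius-server host 192.168.1.10 key "SecretKey"
    aaa authentication port-access eap-radius
    aaa authentication mac-based peap-mschapv2
    aaa port-access authenticator 1-5
    aaa port-access authenticator 1-5 unauth-vid 999
    aaa port-access mac-based 1-5
    aaa port-access mac-based 1-5 unauth-vid 999
    aaa port-access authenticator active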

If 802.1x fails, the authenticator will try MAC authentication. If this fails too, VLAN 999 is assigned to the switch port.

In this case, the client was authenticated by 802.1x.

This is the output for MAC authentication.

In both cases, VLAN 1 was dynamically assigned by RADIUS-Access-Accept messages.
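
On ProVision-based switches, the authentication state and the dynamically assigned VLAN can be checked with a command such as the following:

    show port-access clients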

NetScaler ADC – Hidden vServer for HTTPS redirect


Starting with release 11.1, NetScaler ADC offers an easy way to redirect traffic from HTTP to HTTPS within the configuration of a load-balanced vServer. With 11.1, Citrix introduced the parameters -redirectFromPort and -redirectURL.
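
A minimal sketch of such a vServer (name, IP address and redirect URL are placeholders):

    add lb vserver vs-web-ssl SSL 192.0.2.10 443 -redirectFromPort 80 -redirectURL "https://www.example.com"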

While playing with a NetScaler ADC in my lab, I discovered a strange error message when I tried to configure the redirect.

Figure: NetScaler HTTP redirect error message (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

Internal vserver couldn't be set?! Okay, there was already a vServer listening on port 80. After removing that vServer, I was able to set up the redirection, and it worked as expected.

A hidden vServer

Later, I was really surprised to find a hidden vServer in the output of the "stat lb vserver" command.

The name of the vServer is always built the same way (the name of the original vServer plus the suffix _httpredir_##). Sometimes the vServer has a different trailing number after a reboot. There is no hint of this vServer in the config of the NetScaler. The behaviour is the same for NetScaler ADC 11.1 and 12.0.

I don't think that this is some kind of hack or an issue. But it is something you should know when working with HTTPS redirection, or for troubleshooting purposes.

Enable IPv6 SLAAC on HPE OfficeConnect 1920 switches


The HPE OfficeConnect 1920 switch series is designed for SMBs. The switch is perfect for small environments that require features like VLANs, routing or 802.1x. This switch is smart-managed, so it "only" has a web interface and a limited CLI.

I have two switches in my lab: a 1910-8G and its successor, a 1920-24G. Although the device supports IPv6, it doesn't support SLAAC (Stateless Address Autoconfiguration) by default: the switch does not send router advertisements (RA). I'm using IPv6 in my lab (stateless DHCPv6 + SLAAC), so the missing RAs were a problem for me, or at least annoying. Fortunately, you can change the default behaviour.

Enable router advertisements (RA)

To change the default behaviour of the HPE 1920, you have to use the CLI. The CLI is very limited, but there's a hidden CLI command which enables access to nearly all available features. If you are familiar with HPE's Comware-based switches, you will notice that the 1920 is a Comware-based device.
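
The hidden command is entered at the user view; it asks for a confirmation and a password, which depends on the model and firmware (for the 1920, the password widely documented in HPE community threads is Jinhua1920unauthorized). A sketch:

    _cmdline-mode on
    system-view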

After switching to the system-view, we can change the default behaviour for each VLAN interface. I have multiple VLAN interfaces, and each VLAN interface has an IPv4 address and an IPv6 unique local address (ULA).
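
A sketch for one VLAN interface (the ULA prefix and the lifetimes are just examples):

    interface Vlan-interface 20
     undo ipv6 nd ra halt
     ipv6 nd ra prefix fd00:20::/64 2592000 604800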

The first command enables router advertisements. The second command adds the prefix that should be announced. That's it. Don't forget to save the changed configuration with "save force". If you have more than one VLAN interface, enter these commands in each VLAN interface context you wish to change.

I’m routing on the edge…


In my last post (Routed Port vs. Switch Virtual Interface (SVI)), I mentioned a consequence of using routed ports to interconnect access and core switches:

You have to route the traffic on the access switches.

Routing at the network access, the edge of the network, is not a question of performance. It is more of a management issue. Depending on the size of your network and the number of subnets, you have to deal with lots of routes. And think about the effort if you add, change or remove subnets from your network. This is not something you want to do with static routes. You need a routing protocol.

The experiment of the week

We have a core switch C1, consisting of two independent switches (C1-1 and C1-2) forming a virtual chassis. S1 and S2 are two switches at the network access. This is a core-edge design. There is no distribution layer. Each switch at the network access has two uplinks: One uplink to C1-1 and one uplink to C1-2. The ports on each end of the links are configured as routed ports.

Please ignore the 40 GbE ports (FGE) between C1-1 and C1-2. These ports are used for the Intelligent Resilient Framework (IRF), which is used to create a virtual chassis.

Figure: routed_links_1 (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

These are the interfaces on the core switch that are working in route mode. GE1/0/1 and GE2/0/1 are the uplinks to S1, and GE1/0/2 and GE2/0/2 are the uplinks to S2.
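
A sketch of how these four interfaces could be configured in Comware syntax (the /30 addresses out of 10.0.0.0/8 are assumptions):

    interface GigabitEthernet 1/0/1
     port link-mode route
     ip address 10.0.0.1 30
    interface GigabitEthernet 2/0/1
     port link-mode route
     ip address 10.0.0.5 30
    interface GigabitEthernet 1/0/2
     port link-mode route
     ip address 10.0.0.9 30
    interface GigabitEthernet 2/0/2
     port link-mode route
     ip address 10.0.0.13 30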

These are the interfaces on the access switch S1 that are working in route mode. GE1/0/1 and GE1/0/2 are the uplinks to C1. As you can see, GE1/0/1 on C1 and GE1/0/1 on S1 belong to the same /30 network. The same applies to GE2/0/1 on C1 and GE1/0/2 on S1. There are also two SVIs, one on VLAN 1 (192.168.1.0/24) and another on VLAN 2 (192.168.2.0/24). These VLANs are used for client connectivity.

These are the interfaces on S2 that are working in route mode. GE1/0/1 and GE1/0/2 are the uplinks to C1. The interfaces GE1/0/2 on C1 and GE1/0/1 on S2 belong to the same /30 network. The same applies to GE2/0/2 on C1 and GE1/0/2 on S2. There are also two SVIs, one on VLAN 1 (192.168.10.0/24) and another on VLAN 2 (192.168.20.0/24).

You might wonder because the same VLAN IDs are used on both access switches. That doesn't matter, because there is no layer 2 connectivity between these two switches. The only way from S1 to S2 is over the routed links to the core switch.

Now let’s have a look at the Open Shortest Path First (OSPF) routing protocol.

Single Area OSPF

The Open Shortest Path First (OSPF) routing protocol is an interior gateway protocol (IGP) and a link-state routing protocol. The calculation of the shortest path for each route is based on Dijkstra's algorithm. I don't want to annoy you with details; take a look at the Wikipedia article on OSPF.

The simplest OSPF setup is a "Single Area OSPF". This is an OSPF configuration with only a single area: area 0, the backbone area.

The configuration on the core switch looks like this:
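
A minimal sketch in Comware syntax (the router ID is an assumption):

    ospf 1 router-id 1.1.1.1
     area 0.0.0.0
      network 10.0.0.0 0.255.255.255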

The networks that should be associated with this area are specified with a wildcard mask. The wildcard mask is the inverse of the subnet mask: the wildcard mask 0.255.255.255 corresponds to the subnet mask 255.0.0.0. Because I have used multiple /30 subnets at the core switch, I can summarize them with a single entry for 10.0.0.0.

The same configuration applies to the access switches S1 and S2.

With this simple configuration, the switches will exchange their routing information. They will synchronize their link-state databases, and they will be fully adjacent. If a link-state change occurs, OSPF will handle this.

The core switch has two links to each access switch. Each access switch is represented by its router ID: 1.1.1.2 is a loopback interface IP address on S1, and 1.1.1.3 is a loopback interface IP address on S2.

The same applies to the access switches, in this case S1. The access switches also have two active links to the core switch.

If one of the links fails, the access switch has another working link to the core switch, and OSPF will recalculate the shortest paths, taking the link-state change (link down between the core and an access switch) into account.

This is the OSPF routing table of the core switch, based on the example above.

What if I add a new subnet on S1? Let's create a new VLAN and add an SVI to it (VLAN 3 and 192.168.3.1).
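
For example, in Comware syntax:

    vlan 3
     quit
    interface Vlan-interface 3
     ip address 192.168.3.1 24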

Without touching the OSPF configuration, the core switch C1 and the other access switch S2 added routes to this new subnet.

Pretty cool, isn’t it?

Any downsides?

This is only an example with a single core switch and two access switches. OSPF can get pretty complex as the size of the network increases. Dijkstra's algorithm can be really CPU intensive, and the size of the link-state database (LSDB) increases as more routers and networks are added. For this reason, larger networks have to be divided into separate areas. It depends on the network size and the CPU/memory performance of your switches/routers, but a common practice is a maximum of up to 50 switches/routers per area. If you have unstable links, the areas should be smaller, because each link-state change is flooded to all neighbors and consumes CPU time.

You need a good subnet design, otherwise you have to touch your OSPF configuration too often. You should be able to summarize subnets.

Conclusion

Routing at the network access is not something for small networks; there are better designs for those. But if your network has a decent size, routing at the edge of the network can offer some benefits. Instead of working with SVIs and small transfer VLANs, a routed port is simpler to implement. Routed links can also have a shorter convergence delay, and you can reduce the usage of the Spanning Tree Protocol to a minimum.

Routed Port vs. Switch Virtual Interface (SVI)


Many years ago, networks consisted of repeaters, bridges and routers. Switches are the successors of bridges: a switch is nothing else than a multiport bridge, and a traditional switch doesn't know how to pass traffic between different broadcast domains (VLANs). Passing traffic between different broadcast domains is a job for a router. A router has an IP interface in each broadcast domain, and this IP interface is used by the clients in the broadcast domain as a gateway.

Switch Virtual Interface

A Switch Virtual Interface, or SVI, is exactly this: a virtual IP interface in a broadcast domain (or VLAN). It's used by the connected clients in the broadcast domain to send traffic to other broadcast domains.

This is how an SVI is created on HPE Comware 7. It's similar on switches from other vendors.
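
A sketch, assuming VLAN 10 and an arbitrary IP address:

    vlan 10
     quit
    interface Vlan-interface 10
     ip address 192.168.10.1 24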

At least one port is assigned to this VLAN, and as soon as one of these ports is online, the SVI is reachable.

What happens if you connect two switches with a cable? The broadcast domain spans both switches, and layer 2 traffic is transmitted between them. And what would happen if you connect a second cable between the same two switches? As long as you are running the Spanning Tree Protocol (STP), or another loop detection mechanism, nothing would happen. But one of the two connections would be blocked; no traffic would be able to pass over it. If you want to use multiple active connections between switches, you have to use Link Aggregation Groups (LAG), or things like Multiple Spanning Tree Protocol (MSTP) and Per VLAN Spanning Tree (PVST).

Routers don't know this problem. Multiple connections between the same two routers can't form a loop. Loops and STP (and some other crappy layer 2 stuff) are legacies of the bridges, still alive in modern switches. Loops are a typical "bridge problem".

Routed Ports

Some switches offer a way to change the operation mode of a switch port. After changing this operation mode, a switch port doesn't act like a bridge port anymore. It acts like the port of a router, which only handles layer 3 traffic.

This is again an HPE Comware 7 example. I know that Cisco and Alcatel-Lucent Enterprise also offer routed ports.

This is a normal switch port. Please note the “port link-mode bridge”.
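
In the configuration, such a port looks roughly like this (the port number is just an example):

    interface GigabitEthernet 1/0/3
     port link-mode bridge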

To "convert" a switch port into a routed port, simply change the link-mode of the port.
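
A sketch of the conversion, with a hypothetical /30 address:

    interface GigabitEthernet 1/0/1
     port link-mode route
     ip address 10.0.0.1 30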

As you can see, you can now assign an IP address directly to the port.

Example

Let's try to make this clear with an example. C1-1 and C1-2 are two HPE Comware-based switches, configured as an IRF stack (virtual chassis). These two switches form the core switch C1. S1 and S2 are two access switches, also HPE Comware-based. Each access switch has two uplinks: one uplink to C1-1 and another uplink to C1-2, the two chassis that form C1. The 40 GbE ports between C1-1 and C1-2 are used for IRF; please ignore them.

The uplinks between the switches (all of them Gigabit Ethernet (GE) ports) are configured as routed ports.

Without routed ports, the uplinks would have to be configured as a LAG, or STP would block one of the two uplinks between the core switch and the access switch. But because routed ports are used, no loop is formed. Most layer 2 traffic (broadcasts, multicasts etc.) can't pass the routed ports.

Figure: routed_links_1 (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

The Link Layer Discovery Protocol (LLDP) traffic, however, can pass the routed port. This is what the core switch (C1) "sees" over LLDP.

Each routed port has an IP address assigned. The same applies to the routed ports on the access switches. Each uplink pair (core to access) uses a /30 subnet.

As you can see, the interfaces working in bridge mode start counting at GE1/0/3.

The same applies to STP. The ports that were configured as routed ports are not listed in the output; STP is not active on these ports.

What are the implications?

The example shows redundant links between access and core switches. There are no loops, but there’s also no layer 2 connectivity. VLANs are only located on the access switches. There are no VLANs spanning multiple switches. What does this mean? How can a client on S1 reach a server on S2? The answer is simple: You have to route the traffic on the access switches. But that’s a topic for another blog post.

Redundancy on the first hop – VRRP


The Virtual Router Redundancy Protocol (VRRP) was developed in 1998 as an open standard protocol. VRRP is the result of Internet Engineering Task Force (IETF) work and is described in RFC 5798 (VRRPv3). VRRP was designed as an open standard protocol, but it uses some patents from Cisco. Its function is comparable to Cisco's Hot Standby Router Protocol (HSRP), or to the Common Address Redundancy Protocol (CARP). VRRP solves a very specific problem at the network edge: it offers highly available virtual router interfaces, or in simple words, a highly available default gateway. Its home is the network edge, and because of this, VRRP is a so-called first hop redundancy protocol. Moving from the network edge towards the core, VRRP loses importance; there, redundancy is primarily offered by dynamic routing protocols and redundant links.

Fun fact: its home is the network edge, but most edge switches don't support VRRP…

As already mentioned, VRRP is comparable to HSRP, CARP, Cisco Gateway Load Balancing Protocol (GLBP), or the Extreme Standby Router Protocol (ESRP).

VRRPv3 supports IPv6 and IPv4.

How does it work?

Pretty easy. You need:

  • at least two routers or switches that support VRRP
  • a virtual IP address
  • a virtual MAC address

Okay, maybe it’s not that easy.

The key point is the virtual router. A virtual router is defined on each physical router or switch that should offer high availability for a virtual IP address. A virtual router is defined on a per-VLAN basis, and it consists of a virtual router identifier (VRID), one or more virtual IP addresses, and a statement that declares a router or switch as a master or backup virtual router.

The virtual MAC address is built from the VRID. The MAC address is always 00-00-5E-00-01-xx, in which xx is the VRID in hexadecimal format. A VRID of 10, for example, results in the virtual MAC address 00-00-5E-00-01-0A.

The interface IP address, or switch virtual interface (SVI), that is configured for a specific VLAN, and the virtual IP address of a virtual router configured for the same VLAN, must belong to the same subnet.

Master, Backup, Owner

A router or switch can have one of two roles:

  • master virtual router
  • backup virtual router

You can have one master, but multiple backup virtual routers. The master virtual router answers ARP requests and forwards packets for the virtual IP address. The backup virtual routers come into play in case of a failure of the master virtual router. If a backup virtual router doesn't receive packets from the master virtual router for a period longer than three times the advertisement interval, the backup virtual routers assume that the master virtual router is dead. An election process is then initiated to select a new master virtual router.

Master and backup virtual routers communicate via multicast using the multicast IP address 224.0.0.18.

The virtual IP address must also be a real interface IP address on one of the routers or switches. This router or switch is called the IP address owner. The IP address owner always has the priority 255. Because of this, the IP address owner will always become the master virtual router, regardless of what the configuration says.

Figure: vrrp_owner_master_backup (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

As you can see, R1 has the IP 10.0.0.1/24 and the virtual IP address (VIP) is also 10.0.0.1. In this case, R1 is the master virtual router and the IP address owner.

Some vendors allow a no owner design.

Figure: vrrp_no_owner_backup_backup (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

As you can see, R1 and R2 are both configured as backup virtual routers, but R1 has a higher priority. In this case, R1 will answer ARP requests and forward packets for 10.0.0.254. Another interesting fact: the VIP is a true VIP; it's not a real interface IP address of any of the participating routers or switches.
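
A minimal sketch of such a no-owner setup in HPE Comware syntax (the VLAN interface, VRID, priority and the real interface addresses are assumptions; only the VIP 10.0.0.254 is taken from the figure):

    interface Vlan-interface 10
     ip address 10.0.0.1 255.255.255.0
     vrrp vrid 1 virtual-ip 10.0.0.254
     vrrp vrid 1 priority 120

R2 would get the same virtual router, but with its own interface address (e.g. 10.0.0.2) and the default priority, so R1 wins the election.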

Not all vendors seem to support such a design, and RFC 5798 has no references to it. According to some other vendor docs and RFC 5798, VRRP requires that the master virtual router has the virtual IP address configured as a physical IP address, which means that the master virtual router must also be the IP address owner (as mentioned above).

VRRP-E – extended VRRP

Brocade and HPE offer VRRP-E, an extended and proprietary version of VRRP. Extended means that it overcomes limitations of VRRP (according to Brocade and HPE).

VRRP-E doesn't know the concept of master and backup virtual routers. All routers act as backup virtual routers, and a priority value is used to determine which router will act as the master virtual router. Furthermore, VRRP-E doesn't know the concept of the IP address owner.

Brocade states in one of their docs:

The most important difference is that all VRRP-E routers are Backups. There is no Owner router. VRRP-E overcomes the limitations in standard VRRP by removing the Owner.

VRRP and dynamic routing protocols

If VRRP is used together with dynamic routing protocols, like OSPF, there's a fact worth mentioning: no dynamic routing protocol likes it if the IP address that is used to build adjacencies moves to another router. It's not the IP address itself that is the problem, but rather a routing protocol configuration that no longer matches, a changed router ID or similar. Because of this, the VRRP VIP must not be used in the configuration of dynamic routing protocols. A no-owner design can have some benefits if you have to use VRRP and dynamic routing protocols on the same router or switch. In this case, the real interface IP addresses can be used for the dynamic routing protocol configuration, and not the floating VIP.

Changing DHCP server config on AOS 7


The embedded DHCP server in AOS 7 and AOS 8 is a lesser known feature. But it's pretty handy in some cases, e.g. if you have no servers on premises, or if you don't want a non-redundant firewall or router to act as the DHCP server. Because you can run two or more switches as a virtual chassis, you can easily make the DHCP server role highly available.

Configuring the DHCP server

The configuration is pretty easy.

Edit the dhcpd.conf as necessary. Then start the DHCP server.
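
A sketch of the relevant commands (on AOS 7 the dhcpd.conf is typically located under /flash/switch/):

    -> dhcp-server enable
    -> dhcp-server restart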

The "enable" keyword enables the DHCP server, but does not start it. "restart" is used to start or restart the DHCP server.

Change the dhcpd.conf

But how do you change the dhcpd.conf? Sure, simply use vi and edit it. Not quite… After starting the DHCP server, the owner of the dhcpd.conf changes from "admin" to "root". So with your normal "admin" user, you don't have permission to write the file.

In order to change the dhcpd.conf, we need an account with more privileges. In this case, the maintenance shell can help.

Maintenance Shell commands should only be used by Alcatel-Lucent Enterprise personnel or under the direction of Alcatel-Lucent Enterprise.

With higher privileges it’s no problem to edit the dhcpd.conf. Make sure to leave the maintenance shell after the change, and don’t forget to restart the DHCP server.

I assume that this behaviour is caused by a bug. I don't know if AOS 8 shows the same behaviour. I will update this blog post with further information as soon as I get it.

Dynamic VLAN assignment with AOS 6


Manually assigning ports to VLANs can be a time consuming and error prone process. Depending on the size of the network, there is a point where it doesn't make sense to do this manually anymore. Especially in SMB networks, VLANs are assigned manually, because the effort of automating the VLAN assignment exceeds the effort of assigning VLANs manually. Those environments are often very static. I know many SMB networks where VLANs have not been touched for a long time. With declining costs for layer 3 switches, the separation of workloads into VLANs became affordable for SMB customers. Server virtualization was another mainspring for VLANs and inter-VLAN routing. To be honest: I'm talking about SMB customers, not enterprise customers or enterprise-grade SMB customers (the latter is my special term for SMB customers with enormous IT budgets…). But the main driver for VLANs was Voice over IP (VoIP). With the increasing proliferation of VoIP, even the smallest SMB customers were forced to use VLANs. This led to situations where customers had to change the switch config every time a new client or IP phone was added to the network. Common workarounds:

  • pre-configuring switches, e.g. ports 1 to 12 for clients and 13 to 24 for IP phones
  • connecting clients behind IP phones and pre-configuring all switch ports (untagged client and tagged VoIP VLAN)

Suitable for small environments, but difficult to handle if environments grow over time. And I'm not a friend of connecting clients behind IP phones… Enterprise, or enterprise-grade SMB customers, tend to implement 802.1x to manage access to their network. With 802.1x it's possible to assign ports to VLANs depending on the user identity. But 802.1x is complex. If you have the know-how, the time and the budget, please do 802.1x! But you should take the complexity into account. Today I want to show options, outside of 802.1x, to dynamically assign ports to VLANs with Alcatel-Lucent Enterprise OmniSwitches.

First of all: we have to distinguish between AOS 6, AOS 7 and AOS 8. Alcatel-Lucent Enterprise (ALE) currently uses three different software releases, depending on the switch platform.

Switch model          AOS release
OmniSwitch 6250       AOS 6
OmniSwitch 6350       AOS 6
OmniSwitch 6450       AOS 6
OmniSwitch 6850E      AOS 6
OmniSwitch 6855       AOS 6
OmniSwitch 6860(E)    AOS 8
OmniSwitch 6900       AOS 7
OmniSwitch 9000(E)    AOS 6
OmniSwitch 9900       AOS 8
OmniSwitch 10K        AOS 7

Depending on the specific AOS release, there are various ways to enable dynamic VLAN assignment. The main reason for the different AOS releases is that ALE shifted its networking core platform from Wind River VxWorks (AOS 6) to Linux (AOS 7 and AOS 8) (source #1, source #2).

This blog post will focus on

  • OmniSwitch 6250/ 6350/ 6450 running AOS 6.7.1

I plan to publish similar blog posts for

  • OmniSwitch 6900/ 10k running AOS 7.3.4
  • OmniSwitch 6860/ 6860E running AOS 8.2.1

Dynamic VLAN assignment with AOS 6

In general, there are three different ways to dynamically assign ports to VLANs with AOS 6:

  • VLAN mobility
  • User Network Profiles (UNP)
  • LLDP Media Endpoint Detection (LLDP-MED)

Let’s take a look at VLAN mobility. VLAN mobility is used to dynamically assign one or more VLANs to a port, based on traffic characteristics that were received on that port. The following information can be used to classify traffic:

  • 802.1Q VLAN ID tag
  • DHCP MAC address
  • DHCP MAC range
  • DHCP port
  • DHCP generic
  • MAC address
  • MAC address range
  • Network address
  • Protocol
  • Port

You can't use VLAN mobility on a port that

  • is an 802.1Q tagged port
  • belongs to a Link Aggregation Group (LAG)
  • has Spanning Tree enabled and the BPDU ignore status is disabled
  • is used to mirror traffic

To allow the switch to dynamically assign ports to VLANs, VLAN mobility has to be enabled. By default, all ports are non-mobile ports. A non-mobile port is statically assigned to a specific VLAN.

To enable VLAN mobility for a single port or for a port range, use the "vlan port mobile" command, as shown in the sketch below. To disable VLAN mobility again, use the "no" form of the command.
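
A sketch (AOS 6 syntax; port numbers are examples):

    -> vlan port mobile 1/1
    -> vlan port mobile 1/2-24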

If a device sends ethernet frames with an 802.1Q VLAN ID tag, you can use the VLAN ID tag to dynamically assign a port to a VLAN. With VLAN mobility enabled, you only have to enable the "mobile-tag" option for the desired VLAN.
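
For example, for VLAN 199:

    -> vlan 199 mobile-tag enable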

As soon as the switch receives a frame with an 802.1Q VLAN ID tag for VLAN 199, the port that received this frame is dynamically assigned to VLAN 199. That's VLAN mobility based on 802.1Q VLAN ID tags. But you can also use VLAN rules. VLAN rules are created per VLAN, and you can have one or more rules per VLAN. You can use the

  • Source MAC address
  • Source MAC address ranges
  • Switch ports, or
  • the DHCP request itself

to dynamically assign a port to a VLAN. The following rule matches DHCP requests from a single MAC address.
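
A sketch with a hypothetical MAC address (this and the following rule examples assume standard AOS 6 syntax):

    -> vlan 199 dhcp mac 00:11:22:aa:bb:cc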

If a DHCP request with the specified MAC address is received, the port is dynamically assigned to VLAN 199. Because managing MAC addresses is not very handy, you can use MAC address ranges:
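
For example:

    -> vlan 199 dhcp mac range 00:11:22:aa:bb:00 00:11:22:aa:bb:ff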

To use all DHCP requests on a specific port, use the DHCP port rule:
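
Assuming port 1/2:

    -> vlan 199 dhcp port 1/2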

To use all received DHCP requests, use the DHCP generic rule:
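
Like this:

    -> vlan 199 dhcp generic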

To remove a rule, use the “no” form of the command.

Once the device has received an IP address from the DHCP server, the VLAN port assignment is dropped! Because of this, you can combine DHCP and network address rules. A network address rule dynamically assigns the VLAN depending on the IP subnet.
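
For example:

    -> vlan 199 ip 192.168.20.0 255.255.255.0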

This rule assigns VLAN 199 to a port that receives traffic from a client in the subnet 192.168.20.0/24. If the DHCP server in VLAN 199 assigns IP addresses from this subnet, you can easily combine the DHCP and network address rules.

A MAC address rule assigns the VLAN depending on a single MAC address or on a range of MAC addresses.
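
A sketch (the MAC addresses are examples):

    -> vlan 199 mac 00:11:22:aa:bb:cc
    -> vlan 199 mac range 00:11:22:aa:bb:00 00:11:22:aa:bb:ff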

Less frequently used are port and protocol rules. A port rule doesn’t require incoming traffic to trigger dynamic VLAN assignment. The specified mobile port is immediately assigned to the specified VLAN. Port rules only apply to outgoing broadcast traffic. You still need rules for the incoming traffic. To create a VLAN port rule:
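
For example, for port 1/3:

    -> vlan 199 port 1/3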

A protocol rule uses the protocol type in an ethernet frame to assign VLANs to ports. Valid values for the protocol type are:

  • IP Ethernet-II
  • IP SNAP
  • Ethernet II
  • DECNet
  • AppleTalk
  • Ethertype
  • DSAP/SSAP
  • SNAP

A protocol rule is created by issuing:
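
A sketch, for IP over Ethernet II:

    -> vlan 199 protocol ip-e2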

As always, the “no” form of the command removes the rule.


User Network Profiles (UNP) is a feature of Access Guardian. Access Guardian refers to security functions, like

  • Authentication and Classification
  • Host Integrity Check (HIC)
  • User Network Profiles (UNP), and
  • Virtual Network Profile (VNP)

UNP are available in AOS 6, AOS 7 and AOS 8. In AOS 6 we need a

  • policy condition
  • policy action
  • policy rule, and a
  • policy list

These four characteristics belong to the QoS feature of AOS. But a UNP needs a policy list, more specifically the policy rules that are part of the policy list, to classify traffic and devices. The policy condition is necessary to identify the devices on which this policy should match.
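
A sketch with a hypothetical condition name and source MAC address:

    -> policy condition cond-mac source mac 00:11:22:aa:bb:cc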

Besides the MAC address, you can use source and destination IP addresses, switch ports, source and destination TCP/UDP ports, VLANs and many more. To use one or more IP addresses, simply use a network group.
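
A sketch (group name, subnets and condition name are examples):

    -> policy network group sales 192.168.30.0 mask 255.255.255.0 192.168.31.0 mask 255.255.255.0
    -> policy condition cond-sales source network group sales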

The group “sales” consists of two subnets. To remove a subnet, use the “no” form of the command.

The policy action is used to determine what should happen with the traffic. In this case: prioritize the traffic.
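
A sketch with a hypothetical action name:

    -> policy action act-prio priority 7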

The rule binds condition and action.
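
For example:

    -> policy rule rule-sales condition cond-sales action act-prio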

A policy list is used to group one or more policy rules.
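
A sketch (the pending QoS/policy configuration has to be applied afterwards with qos apply):

    -> policy list list-sales type unp rules rule-sales
    -> qos apply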

A UNP binds a name, a VLAN and a policy list together.
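
A sketch, assuming VLAN 30 and the policy list created above (parameter names may differ slightly between AOS 6 releases):

    -> aaa user-network-profile name unp-sales vlan 30 policy-list-name list-sales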

A third way to dynamically assign ports to VLANs is LLDP Media Endpoint Detection (LLDP-MED). LLDP Media Endpoint Detection was developed to increase the interoperability of VoIP devices with other devices on the network (e.g. PCs, switches etc.). AOS uses LLDP-MED network policies to advertise information to devices. A network policy contains information about the VLAN ID and L2/L3 priorities. First, we have to enable network policy support, either for a port or for the chassis.
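
A sketch for a single port (the exact keyword order may differ between AOS 6 releases):

    -> lldp 1/1 tlv med network-policy enable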

To enable network policies for the chassis use the keyword “chassis” instead of a port.

To create a network policy enter:
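
A sketch; the exact syntax may vary by AOS 6 release:

    -> lldp network-policy 1 application voice vlan 100 l2-priority 5 dscp 46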

The created policy (ID 1) will advertise VLAN 100, L2 priority 5 and DSCP 46 to voice devices. The next step is to bind the policy to the chassis, or to a specific port.
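
Sketches for both variants (again, the keyword order may vary):

    -> lldp chassis med network-policy 1
    -> lldp 1/1 med network-policy 1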

Furthermore, you need to enable VLAN mobility on the ports. If the IP phone sends tagged VLAN frames, you also have to enable the "mobile-tag" feature for the VLAN.

The IP phone receives the configuration information via the network policy. VLAN mobility and "mobile-tag" will make sure that the VoIP phone is pushed into the correct VLAN.

Summary

Manually assigning VLANs can be a time consuming and error prone process. AOS 6 offers

  • VLAN mobility
  • User Network Profiles, and
  • LLDP-MED

to dynamically assign ports to VLANs. Each of the options has its pros and cons. Especially the combination of VLAN mobility and LLDP-MED is really easy to implement. I will publish more blog posts about the same topic, but with AOS 7 and AOS 8.

Microsoft Windows: Avoiding COM port proliferation


This is not a specific problem of Alcatel-Lucent Enterprise (ALE) OmniSwitches, but I’m affected by this behaviour and it’s really, really annoying. It’s not a problem with the switch, but with the device handling of Windows.

ALE delivers a micro USB-to-USB cable with each OmniSwitch 6860E. This cable is used to connect to the console port of the switch. Each time you connect the cable, Windows will discover a new USB-to-UART bridge and create a new COM port. This happens every time you connect to a new switch or choose another USB port. Over time, you will see the number of COM ports increasing (COM 2, COM 3, COM 4, COM 5…).

Figure: usb_com_ports_1 (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

Furthermore, you have to reconfigure your client software (PuTTY etc.) each time. This is annoying! But there’s a workaround.

Workaround

All you need is the product ID and the vendor ID. You can find these values in the properties of the device.

Figure: usb_com_ports_2 (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

The VID for this device is 10C4, the PID is EA60. Now create a reg file, which is used to import registry values.
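
A sketch of such a .reg file for the VID/PID above (the UsbFlags key with an IgnoreHWSerNum<VID><PID> binary value is the commonly documented way to make Windows ignore the device serial number):

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\UsbFlags]
    "IgnoreHWSerNum10C4EA60"=hex:01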

You can see that the VID and PID were appended to the string "IgnoreHWSerNum". Import the reg file (double-click it or import it using RegEdit) and remove all unnecessary COM ports from the device manager.

Next time, Windows will not create a new COM port, as long as you use the same USB port. If you change the USB port, Windows will create an additional COM port.