Tag Archives: networking

NetScaler ADC – Hidden vServer for HTTPS redirect

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Starting with release 11.1, NetScaler ADC offers an easy way to redirect traffic from HTTP to HTTPS within the configuration of a load-balanced vServer. With 11.1, Citrix introduced the parameters -redirectFromPort and -redirectURL.
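A sketch of the new parameters in action; the vServer name, IP address and URL are examples, not taken from the original post:

add lb vserver lb-vsrv-web SSL 192.168.20.12 443 -redirectFromPort 80 -redirectURL "https://www.example.com"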

While playing with a NetScaler ADC in my lab, I discovered a strange error message as I tried to configure the redirect.

NetScaler HTTP Redirect Error Message

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Internal vserver couldn't be set?! Okay, there was already a vServer listening on port 80. After removing that vServer, I was able to set up the redirect, and it worked as expected.

A hidden vServer

Later, I was really surprised to find a hidden vServer in the output of the "stat lb vserver" command.

The name of the hidden vServer always follows the same pattern: the name of the original vServer plus the suffix _httpredir_##. Sometimes the vServer has a different ending number after a reboot. There is no hint to this vServer in the config of the NetScaler. The behaviour is the same for NetScaler ADC 11.1 and 12.0.

I don't think that this is some kind of hack or an issue. But it is something you should know when working with HTTPS redirection, or for troubleshooting purposes.

Stunnel and Squid on FreeBSD 11

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I don't like to use untrusted networks. When I have to use such a network, e.g. an open WiFi network, I use a TLS-encrypted tunnel connection to encrypt all web traffic that travels through the untrusted network. I'm using a simple stunnel/Squid setup for this. My setup consists of three components:

  • Stunnel (server mode)
  • Squid proxy
  • Stunnel (client mode)

What is stunnel?

Stunnel is an OSS project that uses OpenSSL to encrypt traffic. The website describes Stunnel as follows:

Stunnel is a proxy designed to add TLS encryption functionality to existing clients and servers without any changes in the programs’ code. Its architecture is optimized for security, portability, and scalability (including load-balancing), making it suitable for large deployments.

How it works

The traffic flow looks like this:

Stunnel Secure Tunnel Connection Diagram

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The browser connects to the Stunnel client on 127.0.0.1:8080. This is done by configuring 127.0.0.1:8080 as proxy server in the browser. The traffic enters the tunnel on the client-side, and Stunnel opens a connection to the server-side. You can use any port, as long as it is unused on the server-side. I use 443/tcp. The connection is encrypted using TLS, and the connection is authenticated by a pre-shared key (PSK). On the server, the traffic leaves the tunnel, and the connection attempt of the client is directed to the Squid proxy, which listens on 127.0.0.1:3128 for connections. Summarized: my browser connects to the Squid proxy on my FreeBSD host over a TLS-encrypted connection.

Installation and configuration on FreeBSD

Stunnel and Squid can be installed using pkg install.
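Both are available as packages:

pkg install stunnel squid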

The configuration files are located under /usr/local/etc/stunnel and /usr/local/etc/squid. After the installation of stunnel, an additional directory for the PID file must be created. Stunnel is not running with root privileges, so it can't create its PID file in /var/run.
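A minimal sketch, assuming the port runs stunnel under a dedicated stunnel user and group:

mkdir /var/run/stunnel
chown stunnel:stunnel /var/run/stunnel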

The stunnel.conf is pretty simple. I’m using a Let’s Encrypt certificate on the server-side. If you like, you can create your own certificate using OpenSSL. But I prefer Let’s Encrypt.
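A minimal sketch of a server-side stunnel.conf; the certificate paths and the service name are assumptions:

pid = /var/run/stunnel/stunnel.pid
setuid = stunnel
setgid = stunnel

[squid]
accept = 443
connect = 127.0.0.1:3128
; Let's Encrypt certificate and key (paths are examples)
cert = /usr/local/etc/stunnel/fullchain.pem
key = /usr/local/etc/stunnel/privkey.pem
; pre-shared keys for authentication
PSKsecrets = /usr/local/etc/stunnel/psk.txt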

The psk.txt contains the pre-shared key. The same file must be located on the client-side. The file itself is pretty simple – username:passphrase. Make sure that the PSK file is not group- and world-readable!
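For example (identity and passphrase are placeholders):

user01:SomeLongAndRandomPassphrase

Set the permissions accordingly, e.g. with chmod 600 /usr/local/etc/stunnel/psk.txt.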

The squid.conf is also pretty simple. Make sure that Squid only listens on localhost! I disabled the access log. I simply don't need it, because I'm the only user, and I don't have to rotate another logfile. Some ACLs of Squid are now implicitly active. There is no need to configure localhost or 127.0.0.1 as a source if you want to allow HTTP access only from localhost. Make sure that all requests are only allowed from localhost!
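A minimal sketch of the relevant squid.conf settings; recent Squid releases ship the localhost ACL as a built-in:

# listen on the loopback interface only
http_port 127.0.0.1:3128
# allow requests from localhost, deny everything else
http_access allow localhost
http_access deny all
# no access log needed for a single user
access_log none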

To enable stunnel and squid, add the following lines to your /etc/rc.conf. The stunnel_pidfile option tells Stunnel where it should create its PID file.
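A sketch, matching the PID file directory created above:

stunnel_enable="YES"
stunnel_pidfile="/var/run/stunnel/stunnel.pid"
squid_enable="YES"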

Make sure that you have initialized the Squid cache dir before you start Squid. Initialize the cache dir, and start Squid and Stunnel on the server-side.
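For example (squid -z creates the cache directory structure):

squid -z
service squid start
service stunnel start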

Installation and configuration on Windows

On the client-side, you have to install Stunnel. You can find installer files for Windows on stunnel.org. The config of the client is pretty simple. The psk.txt contains the same username and passphrase as on the server-side. The file must be located in the same directory as the stunnel.conf on the client.
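A minimal sketch of the client-side stunnel.conf; the server name is a placeholder:

[squid]
client = yes
accept = 127.0.0.1:8080
connect = server.example.com:443
PSKsecrets = psk.txt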

Test your connection

Start Stunnel on your client and configure 127.0.0.1:8080 as proxy in your browser. If you access https://www.whatismyip.com, you should see the IP address of your server, not the IP address of your local internet connection.

You can check the encrypted connection with Wireshark on the client-side, or with tcpdump on the server-side.

Please note that the connection is only encrypted until it hits your server. Traffic that leaves your server, e.g. HTTP requests, is unencrypted. It is only an encrypted connection to your proxy, not an encrypted end-to-end connection.

How to set a WiFi connection as metered on Windows 10

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I switched my mobile carrier and my new carrier doesn't offer multi-SIM (but hey, it's cheap and sufficient for my needs). Now I have to use my iPhone as a WiFi hotspot. No big deal, works perfectly. Except for one thing: When I was using the built-in 4G modem in my laptop, Windows 10 knew that it was using a mobile (metered) connection, and suspended some services like OneDrive sync, download of Windows updates etc. That is pretty handy in times of "flatrates" with single-digit GB of high-speed data volume.

Metered WiFi connection

You can mark a WiFi connection as metered in Windows 10, but you need administrator rights to change the setting. And when you switch back to your normal work user, Windows 10 still treats the connection as metered, but the Windows 10 GUI shows the setting as disabled and greyed out.

Metered Connection WiFi Network

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Use netsh instead

You can change, or check, the setting with netsh. Simply start an elevated command prompt and set the desired WiFi connection cost setting to fixed:
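The profile name is an example; you can list your WiFi profiles with netsh wlan show profiles:

netsh wlan set profileparameter name="MyHotspot" cost=fixed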

The setting cost=fixed marks the WiFi connection as metered. That's it. From this point on, Windows 10 will treat this connection as metered, until the cost setting is changed back to "unrestricted".

Secure your Azure deployment with Palo Alto VM-Series for Azure

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

When I talk to customers and colleagues about cloud offerings, most of them are still concerned about the cloud, and especially about the security of public cloud offerings. One of the most mentioned concerns is based on the belief that each and every cloud-based VM is publicly reachable over the internet. That can be the case, but it does not have to be. It depends on your design. Maybe that is only a problem in Germany. German privacy policies are the reason for the two German Azure datacenters. They are run by Deutsche Telekom, not by Microsoft.

Azure Virtual Networks

An Azure Virtual Network (VNet) is a network inside the public Azure cloud. It is isolated from the underlying infrastructure and it is dedicated to you. This allows you to fully control IP addressing, DNS, security policies and routing between subnets. Virtual Networks can include multiple subnets to reflect different security zones and/or multi-tier designs. If you want to connect two or more VNets in the same region, you have to use VNet peering. Microsoft offers excellent documentation about Virtual Networks. Because routing is managed by the Azure infrastructure, you will need to set user-defined routes to push traffic through a firewall or load-balancing appliance.
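A sketch of a user-defined route using the Azure CLI; all resource names, the subnet and the appliance IP are placeholders:

az network route-table create --resource-group rg-demo --name rt-demo
az network route-table route create --resource-group rg-demo --route-table-name rt-demo --name default-route --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4
az network vnet subnet update --resource-group rg-demo --vnet-name vnet-demo --name subnet-web --route-table rt-demo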

Who is Palo Alto Networks?

Palo Alto Networks was founded by Nir Zuk in 2005. Nir Zuk is the founder and CTO of Palo Alto Networks, and he is still leading the development. He is a former employee of Check Point and NetScreen (which was acquired by Juniper Networks). His motivation to develop his vision of a Next Generation Firewall (NGF) was the fact that firewalls were unable to look into traffic streams. We all know this: You want your employees to be able to use Google, but you don't want them to access Facebook. Designing policies for this can be a real PITA. You can solve this with a proxy server, but a proxy has other disadvantages.

Gartner has identified Palo Alto Networks as a leader in the enterprise firewall market since 2011.

I was able to get my hands on some Palo Alto firewalls, and I think I understand why Palo Alto Networks is recognized as a leader.

VM-Series for Microsoft Azure

Sometimes you have to separate networks. No big deal when your servers are located in your datacenter, even if they are virtualized. But what if the servers are located in a VNet on Azure? As already mentioned, you can create different subnets in an Azure VNet to create a multi-tier or multi-subnet environment. Because routing is managed by the underlying Azure infrastructure, you have to use Network Security Groups (NSG) to manage traffic. A NSG contains rules to allow or deny network traffic to VMs in a VNet. Unfortunately, NSGs can only act on layer 4. If you need something that can act on layer 7, you need something different. This is where the Palo Alto Networks VM-Series for Microsoft Azure comes into play.

The VM-Series for Microsoft Azure can be deployed directly from the Azure Marketplace. Palo Alto Networks also offers ARM templates on GitHub.

Palo Alto Networks aims at four main use-cases:

  • Hybrid Cloud
  • Segmentation Gateway/ Compliance
  • Internet Gateway

The hybrid cloud use-case is interesting if you want to extend your datacenter to Azure, for example if you move development workloads to Azure. Instead of using Azure's native VPN capabilities, you can use the VM-Series Palo Alto Networks NGF as an IPsec gateway.

If you are running different workloads on Azure, and you need inter-subnet communication between them, you can use the VM-Series as a firewall between the subnets. This allows you to manage traffic more efficiently, and it provides more security compared to the Azure NSGs.

If you are running production workloads on Azure, e.g. an RDS farm, you can use the VM-Series to secure the internet access from that RDS farm. Due to the integration with directory services, like Microsoft Active Directory or plain LDAP, user-based policies allow the management of traffic based on the user identity.

There is a fourth use-case: Palo Alto Networks GlobalProtect. With GlobalProtect, the capabilities of the NGF are extended to remote users and devices. Traffic is tunneled to the NGF, and users and devices will be protected from threats. User- and application-based policies can be enforced, regardless of where the user and the device are located: on-premises, in a remote location or in the cloud.

Palo Alto Networks offers two ways to purchase the VM-Series for Microsoft Azure:

  • Consumption-based licensing
  • Bring your own license (BYOL)

The consumption-based licensing is only available for the VM-300. The smaller VM-100, as well as the bigger VM-500 and VM-700, are only available via BYOL. It's a good idea to offer a mid-sized model with a consumption-based license. If the VM-300 is too big (with consumption-based licensing), you can purchase a permanent license for a VM-100. If you need more performance, purchasing your own license might be the better way. You can start with a VM-300 and then right-size the model and license.

All models can handle a throughput of 1 Gb/s, but they differ in the number of sessions. The VM-100 and VM-300 use D3_v2 instances, while the larger VM-500 and VM-700 use bigger instance sizes.

Just play with it

Just create some Azure VM instances and deploy a VM-300 from the marketplace. Play with it. It's awesome!

Enable IPv6 SLAAC on HPE OfficeConnect 1920 switches

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

The HPE OfficeConnect 1920 switch series is designed for SMBs. The switch is perfect for small environments that require features like VLANs, routing or 802.1X. This switch is smart-managed, so it has "only" a web interface and only a limited CLI.

I have two switches in my lab: a 1910-8G and its successor, a 1920-24G. Although the device supports IPv6, it doesn't support SLAAC (Stateless Address Autoconfiguration) by default, because the switch does not send router advertisements (RA). I'm using IPv6 in my lab (stateless DHCPv6 + SLAAC), so the missing RAs were a problem for me, or at least annoying. Fortunately, you can change the default behaviour.

Enable router advertisements (RA)

To change the default behaviour of the HPE 1920, you have to use the CLI. The CLI is very limited, but there's a hidden CLI command which enables access to nearly all available features. If you are familiar with HPE's Comware-based switches, you will notice that the switch is a Comware-based device.
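The hidden command is entered in user view; it asks for a password, which is widely documented in community forums (for the 1920 series it is commonly given as Jinhua1920unauthorized):

_cmdline-mode on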

After switching to the system view, we can change the default behaviour for each VLAN interface. I have multiple VLAN interfaces, and each VLAN interface has an IPv4 address and a unique local address (ULA) IPv6 address.
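A sketch for one VLAN interface; the interface number and the announced ULA prefix are examples:

system-view
interface Vlan-interface 1
 undo ipv6 nd ra halt
 ipv6 nd ra prefix fd00:dead:beef::/64 2592000 604800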

The undo ipv6 nd ra halt command enables router advertisements. The ipv6 nd ra prefix command adds the prefix which should be announced. That's it. Don't forget to save the changed configuration with "save force". If you have more than one VLAN interface, enter these commands in each VLAN interface context you wish to change.

I’m routing on the edge…

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

In my last post (Routed Port vs. Switch Virtual Interface (SVI)), I mentioned a consequence of using routed ports to interconnect access and core switches:

You have to route the traffic on the access switches.

Routing at the network access layer, the edge of the network, is not a question of performance. It is more of a management issue. Depending on the size of your network, and the number of subnets, you have to deal with lots of routes. And think about the effort if you add, change or remove subnets from your network. This is not what you want to do with static routes. You need a routing protocol.

The experiment of the week

We have a core switch C1, consisting of two independent switches (C1-1 and C1-2) forming a virtual chassis. S1 and S2 are two switches at the network access layer. This is a core-edge design; there is no distribution layer. Each access switch has two uplinks: one uplink to C1-1 and one uplink to C1-2. The ports on each end of the links are configured as routed ports.

Please ignore the 40 GbE ports (FGE) between C1-1 and C1-2. These ports are used for the Intelligent Resilient Framework (IRF), which is used to create a virtual chassis.

routed_links_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

These are the interfaces on the core switch that are working in route mode. GE1/0/1 and GE2/0/1 are the uplinks to S1, and GE1/0/2 and GE2/0/2 are the uplinks to S2.
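A sketch; the /30 transfer networks from the 10.0.0.0/8 range are assumptions consistent with the OSPF configuration below:

interface GigabitEthernet1/0/1
 port link-mode route
 ip address 10.0.0.1 255.255.255.252
interface GigabitEthernet2/0/1
 port link-mode route
 ip address 10.0.0.5 255.255.255.252
interface GigabitEthernet1/0/2
 port link-mode route
 ip address 10.0.0.9 255.255.255.252
interface GigabitEthernet2/0/2
 port link-mode route
 ip address 10.0.0.13 255.255.255.252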

These are the interfaces on the access switch S1 that are working in route mode. GE1/0/1 and GE1/0/2 are the uplinks to C1. As you can see, GE1/0/1 on C1 and GE1/0/1 on S1 belong to the same /30 network. The same applies to GE2/0/1 on C1 and GE1/0/2 on S1. There are also two SVIs, one on VLAN 1 (192.168.1.0/24) and another on VLAN 2 (192.168.2.0/24). These VLANs are used for client connectivity.
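Again a sketch with assumed addresses, matching the core switch sketch above:

interface GigabitEthernet1/0/1
 port link-mode route
 ip address 10.0.0.2 255.255.255.252
interface GigabitEthernet1/0/2
 port link-mode route
 ip address 10.0.0.6 255.255.255.252
interface Vlan-interface 1
 ip address 192.168.1.1 255.255.255.0
interface Vlan-interface 2
 ip address 192.168.2.1 255.255.255.0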

These are the interfaces on S2 that are working in route mode. GE1/0/1 and GE1/0/2 are the uplinks to C1. The interfaces GE1/0/2 on C1 and GE1/0/1 on S2 belong to the same /30 network. The same applies to GE2/0/2 on C1 and GE1/0/2 on S2. There are also two SVIs, one on VLAN 1 (192.168.10.0/24) and another on VLAN 2 (192.168.20.0/24).

You might wonder about the same VLAN IDs being used on both access switches. They don't matter, because there is no layer 2 connectivity between these two switches. The only way from S1 to S2 is over the routed links to the core switch.

Now let’s have a look at the Open Shortest Path First (OSPF) routing protocol.

Single Area OSPF

The Open Shortest Path First (OSPF) routing protocol is an interior gateway protocol (IGP), and also a link-state routing protocol. The calculation of the shortest path for each route is based on Dijkstra's algorithm. I don't want to annoy you with details. Take a look at the Wikipedia article for OSPF.

The simplest OSPF setup is a "Single Area OSPF". This is an OSPF configuration which has only a single area: the area 0, or the backbone area.

The configuration on the core switch looks like this:
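A sketch; the router ID is an assumption:

ospf 1 router-id 1.1.1.1
 area 0.0.0.0
  network 10.0.0.0 0.255.255.255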

The networks that should be associated with this area are specified with a wildcard mask. The wildcard mask is the inverse of the subnet mask. The wildcard mask 0.255.255.255 corresponds to the subnet mask 255.0.0.0. Because I have used multiple /30 subnets at the core switch, I can summarize them with a single entry for 10.0.0.0.

Nearly the same configuration applies to the access switches S1 and S2; see the sketch below.
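A sketch for S1; the client subnets have to be covered by a network statement as well, here with a broad wildcard mask as an assumption, otherwise the 192.168.x.0/24 networks would not be advertised:

ospf 1 router-id 1.1.1.2
 area 0.0.0.0
  network 10.0.0.0 0.255.255.255
  network 192.168.0.0 0.0.255.255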

With this simple configuration, the switches will exchange their routing information. They will synchronize their link-state databases, and they will be fully adjacent. If a link-state change occurs, OSPF will handle this.

The core switch has two links to each access switch. Each access switch is represented by its router ID: 1.1.1.2 is a loopback interface IP address on S1, 1.1.1.3 is a loopback interface IP address on S2.

The same applies to the access switches, in this case S1. The access switches also have two active links to the core switch.

If one of the links fails, the access switch has another working link to the core switch, and OSPF will recalculate the shortest paths, taking the link-state change (link down between the core and an access switch) into account.

This is the OSPF routing table of the core switch, based on the example above.
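On Comware, you can display it with:

display ip routing-table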

What if I add a new subnet on S1? Let's create a new VLAN and add an SVI to it (VLAN 3 and 192.168.3.1).
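A sketch:

vlan 3
quit
interface Vlan-interface 3
 ip address 192.168.3.1 255.255.255.0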

Without touching the OSPF configuration, the core switch C1 and the other access switch S2 added routes to this new subnet.

Pretty cool, isn’t it?

Any downsides?

This is only an example with a single core switch and two access switches. OSPF can get pretty complex if the size of the network increases. Dijkstra's algorithm can be really CPU-intensive, and the size of the link-state database (LSDB) increases when adding more routers and networks. For this reason, larger networks have to be divided into separate areas. It depends on the network size and the CPU/memory performance of your switches/routers, but a common practice is a maximum of up to 50 switches/routers per area. If you have unstable links, the area should be smaller, because each link-state change is flooded to all neighbors and consumes CPU time.

You need a good subnet design, otherwise you have to touch your OSPF configuration too often. You should be able to summarize subnets.

Conclusion

Routing at the network access is nothing for small networks; there are better designs for those. But if your network has a decent size, routing at the edge of the network can offer some benefits. Instead of working with SVIs and small transfer VLANs, a routed port is simpler to implement. Routed links can also have a shorter convergence delay, and you can reduce the usage of the Spanning Tree Protocol to a minimum.

Routed Port vs. Switch Virtual Interface (SVI)

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Many years ago, networks consisted of repeaters, bridges and routers. Switches are the successors of the bridges. A switch is nothing else than a multiport bridge, and a traditional switch doesn't know how to pass traffic between different broadcast domains (VLANs). Passing traffic between different broadcast domains is a job for a router. A router has an IP interface in each broadcast domain, and the IP interface is used by the clients in the broadcast domain as a gateway.

Switch Virtual Interface

A Switch Virtual Interface, or SVI, is exactly this: a virtual IP interface in a broadcast domain (or VLAN). It's used by the connected clients in the broadcast domain to send traffic to other broadcast domains.

This is how an SVI is created on HPE Comware 7. It's similar on other vendors' platforms.
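A minimal sketch with an example VLAN and address:

system-view
vlan 10
quit
interface Vlan-interface 10
 ip address 192.168.10.1 255.255.255.0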

At least one port is assigned to this VLAN, and as soon as at least one port of this VLAN is online, the SVI is also reachable.

What happens if you connect two switches with a cable? The broadcast domain spans both switches, and layer 2 traffic is transmitted between them. And what would happen if you connect a second cable between the same two switches? As long as you are running the Spanning Tree Protocol (STP), or another loop detection mechanism, nothing would happen. But one of the two connections would be blocked, and no traffic would be able to pass over it. If you want to use multiple active connections between switches, you have to use Link Aggregation Groups (LAG), or things like Multiple Spanning Tree Protocol (MSTP) and Per-VLAN Spanning Tree (PVST).

Routers don't know this problem. Multiple connections between the same two routers can't form a loop. Loops and STP (and some other crappy layer 2 stuff) are legacies of the bridges, still alive in modern switches. Loops are a typical "bridge problem".

Routed Ports

Some switches offer a way to change the operation mode of a switch port. After changing this operation mode, a switch port doesn't act like a bridge port anymore. It acts like the port of a router, which only handles layer 3 traffic.

This is again an HPE Comware 7 example. I know that Cisco and Alcatel-Lucent Enterprise also offer routed ports.

This is a normal switch port. Please note the “port link-mode bridge”.
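A sketch; the interface and VLAN are examples:

interface GigabitEthernet1/0/3
 port link-mode bridge
 port access vlan 10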

To "convert" a switch port into a routed port, simply change the link-mode of the port.
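Again a sketch; the address is an example:

interface GigabitEthernet1/0/1
 port link-mode route
 ip address 10.0.0.1 255.255.255.252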

As you can see, you can now assign an IP address directly to the port.

Example

Let's try to make this clear with an example. C1-1 and C1-2 are two HPE Comware-based switches, configured as an IRF stack (virtual chassis). These two switches form the core switch C1. S1 and S2 are two access switches, also HPE Comware-based. Each access switch has two uplinks: one uplink to C1-1 and another uplink to C1-2, the two chassis that form C1. The 40 GbE ports between C1-1 and C1-2 are used for IRF. Please ignore them.

The uplinks between the switches (all ports are Gigabit Ethernet (GE) ports) are configured as routed ports.

Without routed ports, the uplinks would have to be configured as a LAG, or STP would block one of the two uplinks between the core switch and the access switch. But because routed ports are used, no loop is formed. Most layer 2 traffic can't pass the routed ports (broadcasts, multicasts etc.).

routed_links_1

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The Link Layer Discovery Protocol (LLDP) traffic can pass the routed port. This is what the core switch (C1) "sees" over LLDP.
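On Comware 7, the LLDP neighbors can be displayed with:

display lldp neighbor-information list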

Each routed port has an IP address assigned. The same applies to the routed ports on the access switches. Each uplink pair (core to access) uses a /30 subnet.

As you can see, the interfaces working in bridge mode start counting at GE1/0/3.

The same applies to STP. The ports that were configured as routed ports are not listed in the output; STP is not active on these ports.

What are the implications?

The example shows redundant links between the access and core switches. There are no loops, but there's also no layer 2 connectivity. VLANs are only located on the access switches; there are no VLANs spanning multiple switches. What does this mean? How can a client on S1 reach a server on S2? The answer is simple: You have to route the traffic on the access switches. But that's a topic for another blog post.

Redundancy on the first hop – VRRP

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

The Virtual Router Redundancy Protocol (VRRP) was developed in 1998 as an open standard protocol. VRRP is the result of work within the Internet Engineering Task Force (IETF), and it's described in RFC 5798 (VRRPv3). VRRP was designed as an open standard protocol, but it uses some patents from Cisco. Its function is comparable to the Cisco Hot Standby Router Protocol (HSRP), or to the Common Address Redundancy Protocol (CARP). VRRP solves a very specific problem at the network edge: It offers highly available virtual router interfaces, or in simple words: a highly available default gateway. Its home is the network edge, and because of this, VRRP is a so-called first hop redundancy protocol. Moving towards the network core, VRRP loses importance, because redundancy there is primarily offered by dynamic routing protocols and redundant links.

Fun fact: Its home is the network edge, but most edge switches don't support VRRP…

As already mentioned, VRRP is comparable to HSRP, CARP, Cisco Gateway Load Balancing Protocol (GLBP), or the Extreme Standby Router Protocol (ESRP).

VRRPv3 supports IPv6 and IPv4.

How does it work?

Pretty easy. You need:

  • at least two routers or switches that support VRRP
  • a virtual IP address
  • a virtual MAC address

Okay, maybe it’s not that easy.

The key point is the virtual router. A virtual router is defined on each physical router or switch that should offer high availability for a virtual IP address. A virtual router is defined on a per-VLAN basis, and it consists of a virtual router identifier (VRID), one or more virtual IP addresses, and a statement that declares a router or switch as a master or backup virtual router.

The virtual MAC address is built upon the VRID. The MAC address is always 00-00-5E-00-01-xx, in which xx is the VRID in hexadecimal format. A virtual router with the VRID 10 would therefore use the virtual MAC address 00-00-5E-00-01-0A.

The interface IP address, or switch virtual interface (SVI), that is configured for a specific VLAN, and the virtual IP address of a virtual router configured for the same VLAN, must belong to the same subnet.

Master, Backup, Owner

A router or switch can have one of two roles:

  • master virtual router
  • backup virtual router

You can have one master, but multiple backup virtual routers. The master virtual router answers ARP requests and forwards packets for the virtual IP address. The backup virtual routers come into play in case of a failure of the master virtual router. If a backup virtual router doesn't receive packets from the master virtual router for a period longer than three times the advertisement interval, the backup virtual routers assume that the master virtual router is dead. An election process is then initiated to select a new master virtual router.

Master and backup virtual routers communicate via multicast using the multicast IP address 224.0.0.18.

The virtual IP address must also be a real interface IP address on a router or switch. This router or switch is called the IP address owner. The IP address owner always has the priority 255. Because of this, the IP address owner will always become the master virtual router, regardless of what the configuration says.

vrrp_owner_master_backup

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

As you can see, R1 has the IP 10.0.0.1/24 and the virtual IP address (VIP) is also 10.0.0.1. In this case, R1 is the master virtual router and the IP address owner.
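A sketch of such an owner configuration on a Comware-based switch; the VLAN interface is an example, the addresses follow the figure:

interface Vlan-interface 10
 ip address 10.0.0.1 255.255.255.0
 vrrp vrid 1 virtual-ip 10.0.0.1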

Some vendors allow a no-owner design.

vrrp_no_owner_backup_backup

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

As you can see, R1 and R2 are both configured as backup virtual routers, but R1 has a higher priority. In this case, R1 will answer ARP requests and will forward packets for 10.0.0.254. Another interesting fact: The VIP is a true VIP; it's not a real interface IP address of any of the participating routers or switches.
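A sketch for R1 in such a no-owner design; R2 would use its own interface IP address and a lower priority:

interface Vlan-interface 10
 ip address 10.0.0.1 255.255.255.0
 vrrp vrid 1 virtual-ip 10.0.0.254
 vrrp vrid 1 priority 120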

Not all vendors seem to support such a design, and RFC 5798 has no references to it. According to some other vendor docs and RFC 5798, VRRP requires that the master virtual router has the virtual IP address configured as a physical IP address, which means that the master virtual router must also be the IP address owner (as mentioned above).

VRRP-E – extended VRRP

Brocade and HPE offer VRRP-E, an extended and proprietary version of VRRP. Extended means that it overcomes limitations of VRRP (according to Brocade and HPE).

VRRP-E doesn't know the concept of master and backup virtual routers. All routers are acting as backup virtual routers. A priority value is used to determine which router will act as master virtual router. Furthermore, VRRP-E doesn't know the concept of the IP address owner.

Brocade states in one of their docs:

The most important difference is that all VRRP-E routers are Backups. There is no Owner router. VRRP-E overcomes the limitations in standard VRRP by removing the Owner.

VRRP and dynamic routing protocols

If VRRP is used together with dynamic routing protocols, like OSPF, there's a fact worth mentioning: No dynamic routing protocol likes it if the IP address that is used to build adjacencies moves to another router. It's not the IP address itself that is the problem, but perhaps a non-matching routing protocol configuration, a changed router ID or similar. Because of this, the VRRP VIP must not be used in the configuration for dynamic routing protocols. A no-owner design can have some benefits if you have to use VRRP and dynamic routing protocols on the same router or switch. In this case, the real interface IP addresses can be used for the dynamic routing protocol configuration, and not the floating VIP.

Setting up split DNS using Windows DNS server

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Sometimes it's necessary to have two DNS servers that are authoritative for the same DNS namespace. This is the case if you use the same namespace for your web site and your internal Active Directory domain, e.g. terlisten-consulting.de, or if you have created the zone terlisten-consulting.de in your Windows DNS to point specific hosts to internal IP addresses. The DNS servers at your ISP would be authoritative, and the domain controllers of your Active Directory would also be authoritative for the same domain. The response to a query depends on which DNS server you ask. So what would happen if you try to resolve www.terlisten-consulting.de, and the internal DNS has no record for it?

In this case, the domain controller in my lab is authoritative for terlisten-consulting.de. But it doesn't have an A record for www.terlisten-consulting.de. If I remove the zone from my domain controller, or if I use an external DNS server, I get a non-authoritative answer.

This, the same DNS namespace on different DNS servers, is called "split DNS" (sometimes also called split-horizon DNS, split-view DNS or split-brain DNS).

Do it right

Split DNS is pretty handy, and sometimes it's necessary. When it comes to Microsoft Exchange, it's a common practice to use the same external DNS namespace for the internal and external URLs. This requires that I create a zone for the externally used DNS namespace on my internal DNS (in most cases: Microsoft Windows Active Directory domain controllers). The downside: I must create all DNS entries on my internal DNS, and I must point them to their external IP addresses, except the ones that should point to an internal IP.

FQDN                               Internal/ external IP address
www.terlisten-consulting.de        external IP address
exchange.terlisten-consulting.de   internal IP address
shop.terlisten-consulting.de       external IP address

Otherwise, users that use the domain controllers as DNS servers wouldn't be able to resolve www or shop. This is challenging. But there's a solution.

Create split DNS for single hosts

The Domain Name System is hierarchically organized. Because of this, I can tell my DNS server to be authoritative only for a sub-tree of a domain, e.g. exchange.terlisten-consulting.de. If I try to resolve www.terlisten-consulting.de, the DNS server would go down the hierarchy, starting at the DNS root servers (or it would ask a forwarder). Instead of creating a zone for the whole namespace, create a zone for the host (a PowerShell sketch follows the lists below). Simply add

  • a new primary zone
  • don’t allow dynamic updates to the zone, and
  • create a new A or AAAA record for the host

Make sure

  • to leave the name field empty
  • don’t create a PTR record
  • point it to the internal IP of the host
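A sketch using the Windows DNS PowerShell cmdlets; the zone name and IP address are examples. -Name "@" creates the record with an empty name field (zone apex), and no PTR record is created because the -CreatePtr switch is omitted:

Add-DnsServerPrimaryZone -Name "exchange.terlisten-consulting.de" -ZoneFile "exchange.terlisten-consulting.de.dns" -DynamicUpdate None
Add-DnsServerResourceRecordA -ZoneName "exchange.terlisten-consulting.de" -Name "@" -IPv4Address 192.168.1.10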
single_host_zone

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

A simple nslookup will show if split DNS works as expected.
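For example:

nslookup exchange.terlisten-consulting.de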

If the query returns the internal IP address, it works as expected. Make sure to clear the DNS server cache (e.g. with Clear-DnsServerCache) after you have added the zones.

Windows DNS Server Policies

Windows Server 2016 will introduce Windows DNS Server Policies. DNS Policies will allow you to control how a DNS server handles answers to queries, based on parameters like the source IP address, the IP address of the network interface that has received the query, etc. In the future, DNS Server Policies can be used to configure split DNS.

How to dramatically improve website load times

This posting is ~4 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Over the last weeks, I've tried to improve the performance of my blog. The site was very slow, and the page load times varied between 5 and 10 seconds. Much too long! I've reduced time-consuming plugins, checked the size of pictures, checked CSS and HTML for misconfiguration/slow code, and tuned the database. The page load times have not really improved.

Yesterday, I checked the httpd.conf on my webserver and found a little typo (an accidentally commented-out line). After a restart of the Apache webserver, the page load times improved dramatically (down to 2 – 3 seconds). What had happened?

HTTP keep-alive

HTTP keep-alive, sometimes also called "HTTP persistent connection", was designed to transfer multiple HTTP requests and responses over a single TCP connection. This is much better than opening a new connection for every single request/response pair. The benefits of HTTP keep-alive are:

  • lower CPU usage
  • lower memory usage
  • reduced latency due to reduced requests/ handshaking

These benefits are even more important if you use HTTPS connections (and vcloudnine.de is HTTPS-only…), because each new HTTPS connection needs much more CPU time and more round-trips compared to an unencrypted HTTP connection. This little picture clarifies the differences.

HTTP_persistent_connection

Wikipedia/ wikipedia.org/ Public domain image resources

If you’re using Apache, you can enable HTTP keep-alive with a single line in the httpd.conf.
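The relevant directive, together with two optional tuning directives (the values shown are the Apache defaults):

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5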

Further information can be found in the documentation of Apache (Apache webserver 2.2 and 2.4).