This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
Usually, bypassing a firewall is not the best idea, but sometimes you have to. One case where you might want to bypass a firewall is asymmetric routing.
MichaelGaida/ pixabay.com/ Creative Commons CC0
What is asymmetric routing? Imagine a scenario with two routers on the same network. One router offers access to the internet, the other router provides access to other sites via site-to-site VPN tunnels.
Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0
Host 1 uses R1 as its default gateway. R1 has static routes configured for the networks reachable over the VPN, or it has learned them dynamically from R2 using a routing protocol. A packet from host 1 arrives at R1, is routed to R2, and is sent over the VPN tunnel. The answer to this packet arrives at R2 and is sent directly to host 1, because host 1 is the destination. This works because R2 and host 1 are on the same network. This is asymmetric routing, because request and answer take different paths.
For plain routing, this is not a problem. But if R1 is a stateful firewall, it might be stubborn, because it never sees the whole traffic flow.
Bypass the stateful firewall
I recently had such a setup due to some technical debt. The firewall dropped the replies as "Invalid Traffic". Fortunately, there is a way to bypass the stateful firewall: you can create advanced firewall rules using the CLI. There is no way to create these rules using the GUI. And this only applies to the Sophos XG (formerly Cyberoam products).
Log in to the device console and select option 4 (Device Console). Then enter the following command on the console, one per destination network:
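The command looked roughly like this; a sketch from memory, so the networks and masks are placeholders and the exact parameter names should be verified against the SFOS console reference of your firmware:

```
console> set advanced-firewall bypass-stateful-firewall-config add source_network 192.168.1.0 source_netmask 255.255.255.0 dest_network 192.168.100.0 dest_netmask 255.255.255.0
```

A matching rule is needed for each destination network that is reached over the VPN.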
Make sure that you have a static or dynamically learned route to the destination networks. This is not a routing entry; it only tells the firewall which traffic should bypass the stateful inspection.
This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
The last two days, I supported a customer during the implementation of 802.1x. His network consisted of HPE/Aruba ProVision and some HPE Comware switches. Two RADIUS servers with appropriate policies were already in place. The configuration and testing with the ProVision-based switches was pretty simple. The Comware-based switches, in this case OfficeConnect 1920, gave me more of a headache.
blickpixel/ pixabay.com/ Creative Commons CC0
The customer already had MAC authentication running, so all I had to do was enable 802.1x on the desired ports of the OfficeConnect 1920. The laptop which I used to test the connection was already configured and worked flawlessly when plugged into an 802.1x-enabled port on a ProVision-based switch. The OfficeConnect 1920 simply wrote a failure to its log and the authentication failed. The RADIUS server did not log any failure, so I was quite sure that the switch caused the problem.
After double-checking all settings using the web interface of the switch, I used the CLI to check some more settings. Unfortunately, the OfficeConnect 1920 is a smart-managed switch and provides only a very, very limited CLI. Fortunately, there is a developer access that enables the full Comware CLI. You can enable the full CLI by entering
_cmdline-mode on
after logging into the limited CLI. You can find the password using your favorite internet search engine. ;)
Solution
While poking around in the CLI, I stumbled over this option, which is entered in the interface context:
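A sketch of what this looked like; the interface name is an example, and the option in question is dot1x mandatory-domain (verify against the Comware command reference for your release):

```
interface GigabitEthernet1/0/1
 dot1x mandatory-domain RADIUS
```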
RADIUS is the authentication domain which was used on this switch. The command specifies that the authentication domain RADIUS has to be used for 802.1x authentication requests. Otherwise the switch would use the default authentication domain SYSTEM, which causes the switch to try to authenticate the user against the local user database.
I have not found any way to specify this setting using the web GUI! If you know how, or if you can provide additional information about this "issue", please leave a comment.
This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
The Informationsverbund Berlin-Bonn (IVBB), the secure network of the German government, was breached by an unknown hacker group. Okay, a secure government network might be a worthy target for an attack, but your network is not, right? Do you use the same password for multiple accounts? There were multiple massive data breaches in the past. Have you ever checked if your data was also compromised? I can recommend haveibeenpwned.com. If you want to have some fun, scan GitHub for -----BEGIN RSA PRIVATE KEY-----. Do you use full disk encryption on your laptop or PC? Do you sign and/or encrypt emails using S/MIME or PGP? Do you use different passwords for different services? Do you use 2FA/MFA to secure important services? Do you never work with admin privileges when doing normal office tasks? No? Why? Because it's uncomfortable to do it right, isn't it?
My focus is on infrastructure, and I'm trying to educate my customers that they have to take care of security. It's not the missing dedicated management network or the use of self-signed certificates that makes an infrastructure insecure. Mostly it's the missing user management, the same password for different admin users, doing office work with admin privileges, or missing security patches because of "never touch a running system" or "don't ruin my uptime". I don't know how often I've heard the story of ransomware attacks that were caused by admins opening email attachments with admin privileges…
My theory
Security must approach infinitely near the point, where it becomes unusable.
Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0
Security is nothing you can take care of later. It has to be part of the design. It has to be part of the processes. Most security incidents don't happen because of 0-day exploits. They happen because of default passwords for admin accounts, missing security patches, and lazy admins or developers.
Don’t be lazy. Do it right. Even if it’s uncomfortable.
This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
Some days ago, I have implemented one-time passwords (OTP) for NetScaler Gateway for one of my customers. This feature was added with NetScaler 12, and it’s a great way to secure NetScaler Gateway with a native NetScaler feature. Native OTP does not need any third party servers. But you need a NetScaler Enterprise license, because nFactor Authentication is a requirement.
To set up NetScaler native OTP, I followed the available guides on the internet.
The setup is pretty straightforward. But I used the AD attribute extensionAttribute15 instead of userParameters, because my customer already used userParameters for something else. Because of this, I had to change the search filter from userParameters>=#@ to extensionAttribute15>=#@.
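On the NetScaler CLI, the changed search filter would be set roughly like this (the ldapAction name LDAP_OTP_Action is a placeholder for whatever your nFactor setup uses):

```
set authentication ldapAction LDAP_OTP_Action -searchFilter "extensionAttribute15>=#@"
```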
Everything worked as expected… except for some users, who could not register their devices properly. They were able to register their device, but a test of the OTP failed. After logoff and logon, the registered device was not available anymore, although the device had been added to the extensionAttribute. While I was watching the nsvpn.log with tail -f, I discovered that the built group string for $USERNAME seemed to be cut off (receive_ldap_user_search_event). My first guess was that the user had too many group memberships, and indeed, the user was a member of > 50 groups. So I copied the user, and the copied user had the same problem. I removed the copied user from some groups, and at some point the test of the OTP worked (on the /manageotp website).
With this information, I quickly stumbled over this thread: netscaler OTP not working for certain users. This was EXACTLY what I discovered. The advised solution was to change the "Group Attribute" from memberOf to userParameters, or in my case, extensionAttribute15. Problem solved!
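Applied on the CLI, the fix would look something like this (again, the ldapAction name is a placeholder):

```
set authentication ldapAction LDAP_OTP_Action -groupAttrName extensionAttribute15
```

With the group attribute pointing at the attribute that stores the registered devices, the long memberOf string no longer gets truncated.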
This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
On January 18, 2018, Microsoft published KB4074871, which has the title "Exchange Server guidance to protect against speculative execution side-channel vulnerabilities". As you might guess, Exchange is affected by Meltdown & Spectre – like any other software. Microsoft explains in KB4074871:
Because these are hardware-level attacks that target x64-based and x86-based processor systems, all supported versions of Microsoft Exchange Server are affected by this issue.
Like Citrix, Microsoft does not offer any updates to address this issue, because there is nothing to fix in Microsoft Exchange. Instead, Microsoft recommends running the latest Exchange Server cumulative update and any required security updates. On top of that, Microsoft recommends testing software before it is deployed into production. If Exchange is running in a VM, Microsoft recommends following the instructions offered by the cloud or hypervisor vendor.
This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
In addition to my shortcut blog post about Meltdown and Spectre with regard to Microsoft Windows, VMware ESXi and vCenter, and HPE ProLiant, I would like to add some additional information about HPE Storage and Citrix NetScaler.
When we talk about Meltdown and Spectre, we are talking about three different vulnerabilities:
CVE-2017-5715 (branch target injection)
CVE-2017-5753 (bounds check bypass)
CVE-2017-5754 (rogue data cache load)
CVE-2017-5715 and CVE-2017-5753 are known as “Spectre”, CVE-2017-5754 is known as “Meltdown”. If you want to read more about these vulnerabilities, please visit meltdownattack.com.
Due to the fact that different CPU platforms are affected, one might guess that other devices, like storage systems or load balancers, are also affected. Because of my focus, this blog post concentrates on HPE Storage and Citrix NetScaler.
Citrix NetScaler (MPX/VPX): Citrix believes that currently supported versions of Citrix NetScaler MPX and VPX are not impacted by the presently known variants of these issues.
Citrix believes… So nothing to do yet, if you are running MPX or VPX appliances. But future updates might come.
The case is a bit different, when it comes to the NetScaler SDX appliances.
Citrix NetScaler SDX: Citrix believes that currently supported versions of Citrix NetScaler SDX are not at risk from malicious network traffic. However, in light of these issues, Citrix strongly recommends that customers only deploy NetScaler instances on Citrix NetScaler SDX where the NetScaler admins are trusted.
No fix so far, only a recommendation to check your processes and admins.
01-13-2018: Added information regarding VMSA-2018-0004
01-13-2018: HPE has pulled Gen8 and Gen9 system ROMs
01-13-2018: VMware has updated KB52345 due to issues with Intel microcode updates
01-18-2018: Updated VMware section
01-24-2018: Updated HPE section
01-28-2018: Updated Windows Client and Server section
02-08-2018: Updated VMware and HPE section
02-20-2018: Updated HPE section
04-17-2018: Updated HPE section
Many blog posts have been written about the two biggest security vulnerabilities discovered so far. In fact, we are talking about three different vulnerabilities:
CVE-2017-5715 (branch target injection)
CVE-2017-5753 (bounds check bypass)
CVE-2017-5754 (rogue data cache load)
CVE-2017-5715 and CVE-2017-5753 are known as “Spectre”, CVE-2017-5754 is known as “Meltdown”. If you want to read more about these vulnerabilities, please visit meltdownattack.com.
Multiple steps are necessary to be protected, and the necessary information is often repeated, but distributed over several vendor websites, articles, blog posts, and security announcements.
Two simple steps
Two (simple) steps are necessary to be protected against these vulnerabilities:
Apply operating system updates
Update the microcode (BIOS) of your server/ workstation/ laptop
If you use a hypervisor to virtualize guest operating systems, then you have to update your hypervisor as well. Just treat it like an ordinary operating system.
Sounds pretty simple, but it’s not. I will focus on three vendors in this blog post:
Microsoft
VMware
HPE
Let’s start with Microsoft. Microsoft has published the security advisory ADV180002 on 01/03/2018.
Microsoft Windows (Client)
The necessary security updates are available for Windows 7 (SP1), Windows 8.1, and Windows 10. The January 2018 security updates are ONLY offered in one of these cases (Source: Microsoft):
A supported antivirus application is installed
Windows Defender Antivirus, System Center Endpoint Protection, or Microsoft Security Essentials is installed
A registry key was added manually
To add this registry key, please execute the following in an elevated CMD. Do not add this registry key if you are running an unsupported antivirus application! Please contact your antivirus vendor first. The key only has to be added manually if NO antivirus application is installed; otherwise your antivirus application will add it.
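The key in question is Microsoft's well-known QualityCompat compatibility key, added from an elevated CMD:

```
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat" /v cadca5fe-87d3-4b96-b7fb-a231484277cc /t REG_DWORD /d 0 /f
```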
Please note that you also need a microcode update! Reach out to your vendor. On my Lenovo ThinkPad X250, the microcode update was offered automatically.
Update 01-28-2018
Microsoft has published an update to disable the mitigation against Spectre (variant 2) (Source: Microsoft). KB4078130 is available for Windows 7 SP1, Windows 8.1 and Windows 10, and it disables the mitigation against Spectre Variant 2 (CVE-2017-5715) independently via registry setting changes. The registry changes are described in KB4073119.
The necessary security updates are available for Windows Server 2008 R2, Windows Server 2012 R2, Windows Server 2016, and Windows Server Core (1709). The security updates are NOT available for Windows Server 2008 and Windows Server 2012! The January 2018 security updates are ONLY offered in one of these cases (Source: Microsoft):
A supported antivirus application is installed
Windows Defender Antivirus, System Center Endpoint Protection, or Microsoft Security Essentials is installed
A registry key was added manually
To add this registry key, please execute the following in an elevated CMD. Do not add this registry key if you are running an unsupported antivirus application! Please contact your antivirus vendor first. The key only has to be added manually if NO antivirus application is installed; otherwise your antivirus application will add it.
After applying the security update, you have to enable the protection mechanism. This is different from Windows 7, 8.1, or 10! To enable the protection mechanism, you have to add three registry keys:
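These are the three keys as documented by Microsoft; note that the third key is only relevant for Hyper-V hosts:

```
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" /v MinVmVersionForCpuBasedMitigations /t REG_SZ /d "1.0" /f
```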
The easiest way to distribute these registry keys is a Group Policy. In addition to that, you need a microcode update from your server vendor.
Update 01-28-2018
The published update for Windows 7 SP1, 8.1 and 10 (KB4073119) is not available for Windows Server. But the same registry keys apply to Windows Server, so it is sufficient to change the already set registry keys to disable the mitigation against Spectre Variant 2 (CVE 2017-5715).
VMware Workstation Pro, Player, Fusion, Fusion Pro, and ESXi are affected by CVE-2017-5753 and CVE-2017-5715. VMware products seem to be unaffected by CVE-2017-5754. On January 9, 2018, VMware published VMSA-2018-0004, which also addresses CVE-2017-5715. Just to make this clear:
Hypervisor-Specific Remediation (documented in VMSA-2018-0002.2)
Hypervisor-Assisted Guest Remediation (documented in VMSA-2018-0004)
I will focus on vCenter and ESXi. In case of VMSA-2018-0002, security updates are available for ESXi 5.5, 6.0 and 6.5. In case of VMSA-2018-0004, security updates are available for ESXi 5.5, 6.0, 6.5, and vCenter 5.5, 6.0 and 6.5. VMSA-2018-0007 covers VMware Virtual Appliance updates against side-channel analysis due to speculative execution.
Before you apply any security updates, please make sure that you read this:
Deploy the updated version of vCenter listed in the table (only if vCenter is used).
Deploy the ESXi security updates listed in the table.
Ensure that your VMs are using Hardware Version 9 or higher. For best performance, Hardware Version 11 or higher is recommended.
For more information about Hardware versions, read VMware KB article 1010675.
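With PowerCLI, a quick inventory of the virtual hardware versions could look like this; vcenter.lab.local is a placeholder, and depending on the PowerCLI release the property is called Version or HardwareVersion:

```
Connect-VIServer vcenter.lab.local
Get-VM | Select-Object Name, Version
```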
VMSA-2018-0002.2

ESXi 6.5: ESXi650-201712101-SG
ESXi 6.0: ESXi600-201711101-SG
ESXi 5.5: ESXi550-201709101-SG
In case of ESXi550-201709101-SG it is important to know, that this patch mitigates CVE-2017-5715, but not CVE-2017-5753! Please see KB52345 for important information on ESXi microcode patches.
VMSA-2018-0004

ESXi 6.5: ESXi650-201801401-BG and ESXi650-201801402-BG
ESXi 6.0: ESXi600-201801401-BG and ESXi600-201801402-BG
ESXi 5.5: ESXi550-201801401-BG
vCenter 6.5: 6.5 U1e
vCenter 6.0: 6.0 U3d
vCenter 5.5: 5.5 U3g
The patches ESXi650-201801402-BG, ESXi600-201801402-BG, and ESXi550-201801401-BG will patch the microcode for supported CPUs. And this is pretty interesting! To enable hardware support for branch target mitigation (CVE-2017-5715, aka Spectre) in vSphere, three steps are necessary (Source: VMware):
Update to one of the above listed vCenter releases
Update the ESXi 5.5, 6.0 or 6.5 with
ESXi650-201801401-BG
ESXi600-201801401-BG
ESXi550-201801401-BG
Apply microcode updates from your server vendor, OR apply these patches for ESXi
ESXi650-201801402-BG
ESXi600-201801402-BG
ESXi550-201801401-BG
In case of ESXi 5.5, the hypervisor and microcode updates are delivered in a single update (ESXi550-201801401-BG).
Update 01-13-2018
Please take a look into KB52345 if you are using Intel Haswell and Broadwell CPUs! The KB article includes a table with affected CPUs.
All you have to do is:
Update your vCenter to the latest update release, then
Update your ESXi hosts with all available security updates
Apply the necessary guest OS security updates and enable the protection (Windows Server)
CVE-2017-5715 requires that the System ROM be updated and a vendor-supplied operating system update be applied as well. CVE-2017-5753 and CVE-2017-5754 require only vendor-supplied operating system updates.
Update 01-13-2018
The following System ROMs were previously available but have since been removed from the HPE Support Site due to the issues Intel reported with the microcode updates included in them. Updated revisions of the System ROMs for these platforms will be made available after Intel provides updated microcodes with a resolution for these issues.
Update 01-24-2018
HPE will be releasing updated System ROMs for ProLiant and Synergy Gen10, Gen9, and Gen8 servers including updated microcodes that, along with an OS update, mitigate Variant 2 (Spectre) of this issue. Note that processor vendors have NOT released updated microcodes for numerous processors which gates HPE’s ability to release updated System ROMs.
I will update this blog post as soon as HPE releases new system ROMs.
For most Gen9 and Gen10 models, updated system ROMs are already available. Check the bulletin for the current list of servers for which updated system ROMs are available. Please note that you don't need a valid support contract to download these updates!
Under Software Type, select "BIOS-(Entitlement Required)". Note that entitlement is NOT required to download these firmware versions.
Update 02-09-2018
Nothing new. HPE has updated the bulletin on 31-01-2018 with an updated timeline for new system ROMs.
HPE finally published updated System ROMs for several Gen10, Gen9, Gen8, G7, and even G6 models, which also include bread-and-butter servers like the ProLiant DL360 (G6 to Gen10) and DL380 (G6 to Gen10).
If you are running Windows on your ProLiant, you can use the online ROM flash component for Windows x64. If you are running VMware ESXi, you can use the System ROMPaq firmware upgrade for USB key media.
This posting is ~5 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
Open network ports in offices, waiting rooms, and entrance halls make me curious. Sometimes I want to plug in a network cable, just to see if I get an IP address. I know many companies that do not care about network access control. Anybody can connect any device to the network. When talking with customers about network access control, or port security, I often hear complaints about complexity: too complex to implement, too hard to administer. But it is not that complex. In the simplest setup (with MAC authentication), you need a switch that can act as authenticator, and an authentication server. And IEEE 802.1x is not much more complicated.
A brief overview over IEEE 802.1x
IEEE 802.1X offers authentication and authorization in wired or wireless networks. The supplicant (client) requests access to the network by providing a username/password or a digital certificate to the authenticator (switch). The authenticator forwards the provided credentials to the authentication server (mostly RADIUS or DIAMETER). The authentication server verifies the credentials and decides whether the supplicant is allowed to access the network.
802.1x uses the Extensible Authentication Protocol (EAP, RFC 5247) for authentication. Because EAP is a framework, there are different implementations, like EAP Transport Layer Security (EAP-TLS) or EAP with pre-shared key (EAP-PSK). And because it is only a framework, each protocol that uses EAP has to encapsulate it. A typical encapsulation is EAP over LAN (EAPOL), which is what 802.1x uses; RADIUS and DIAMETER can also carry EAP. Protected EAP (PEAP) encapsulates the EAP traffic in a TLS tunnel. PEAP is typically used as a replacement for EAP in EAPOL, or with RADIUS or DIAMETER.
Wikipedia/ wikipedia.org/ Public domain image resources
So far nothing special. It’s more a security thing, but an important one, if you ask me. But many customers avoid 802.1x, because of complexity. It’s perfect to keep you out of your own network, if something fails. And not all devices can act as supplicant.
But there is another benefit of 802.1x: RADIUS Access-Accept messages can be used to dynamically assign VLAN memberships (RFC 2868). To assign a VLAN membership to the port to which a supplicant is connected, the RADIUS server adds three attributes to the Access-Accept message:
Tunnel-Type (VLAN)
Tunnel-Medium-Type (802)
Tunnel-Private-Group-Id (VLAN ID)
The authenticator uses these attributes to dynamically assign a VLAN to the port, to which the supplicant is connected.
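As an illustration, on a FreeRADIUS server the three reply attributes for VLAN 2500 could look like this (the VLAN ID is just an example):

```
Tunnel-Type = VLAN,
Tunnel-Medium-Type = IEEE-802,
Tunnel-Private-Group-Id = "2500"
```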
MAC authentication
How does MAC authentication fit into this? If a client does not support 802.1x, the authenticator can use the MAC address of the connected device as username and password. The RADIUS server can use these credentials to authenticate the connected device. If you use a Windows-based NPS (Windows Server Network Policy Server role), you have to create a user object in your Active Directory or local user database that uses the MAC address as username and password. Depending on the switch configuration, the format of the username differs (xx:xx:xx:xx:xx:xx or xxxxxx-xxxxxx etc.). It's a security fail, right? Yes, it is. So please:
Use MAC authentication only when needed, and
make sure that your authenticator uses PEAP
PEAP uses a TLS tunnel to protect the CHAP messages.
Another important part is your authentication server, mostly a RADIUS or DIAMETER server. Make sure that it is highly available. You should have at least two authentication servers. I would not load balance them through a load balancer (Citrix NetScaler etc.). Simply add two authentication servers to your switch configuration. If your authentication server uses a user database, like Microsoft Active Directory, make sure that this database is also highly available. As I said: it is perfect for keeping you out of your own network.
Sample config for ArubaOS (HPE ProVision based switches)
Here’s a sample config for a Aruba 2920 switch, running ArubaOS WB.16.04. 802.1x and MAC authentication are configured for the ports 1 to 5. If the authentication failes, VLAN 999 will be assigned to the port. VLAN 999 is used as unauth VLAN, which is used for unauthenticated clients.
If 802.1x fails, the authenticator will try MAC authentication. If this fails too, VLAN 999 is assigned to the switch port.
In this case, the client was authenticated by 802.1x.
SW1(config)# show port-access auth client 1 detailed
Port Access Authenticator Client Status Detailed
Port-access authenticator activated [No] : Yes
Allow RADIUS-assigned dynamic (GVRP) VLANs [No] : No
Dot1x2010 Mode [Disabled] : Disabled
Use LLDP data to authenticate [No] : No
Client Base Details :
Port : 1
Client Status : Authenticated Session Time : 9 seconds
Client name : [email protected] Session Timeout : 0 seconds
IP : n/a MAC Address : 643150-7c7c9f
Access Policy Details :
COS Map : Not Defined In Limit Kbps : Not Set
Untagged VLAN : 2500 Out Limit Kbps : Not Set
Tagged VLANs : No Tagged VLANs
Port Mode : 1000FDx
RADIUS ACL List : No Radius ACL List
This is the output for MAC authentication.
SW1(config)# show port-access mac-based clients 1 detailed
Port Access MAC-Based Client Status Detailed
Client Base Details :
Port : 1
Client Status : authenticated Session Time : 14 seconds
MAC Address : 643150-7c7c9f Session Timeout : 0 seconds
IP : n/a
Access Policy Details :
COS Map : Not Defined In Limit Kbps : Not Set
Untagged VLAN : 1 Out Limit Kbps : Not Set
Tagged VLANs : No Tagged VLANs
Port Mode : 1000FDx Auth Mode : User-based
RADIUS ACL List : No Radius ACL List
In both cases, VLAN 1 was dynamically assigned by RADIUS-Access-Accept messages.
Patch management is currently a hot topic, primarily because of the latest ransomware attacks.
After the appearance of WannaCry, one of my older blog posts got unfamiliar attention: WSUS on Windows 2012 (R2) and KB3159706 – WSUS console fails to connect. Why? My guess: many admins started updating their Windows servers after the appearance of WannaCry. Nearly a year after Microsoft published KB3159706, their WSUS servers ran into this issue.
The truth about patch management
I know many enterprises that patch their Windows clients and servers only every four or eight weeks, mostly during a maintenance window. Some of them do this because their change processes require the deployment and testing of updates in a test environment. But some of them are simply too lazy to install updates more frequently. So they approve all needed updates every four or eight weeks, push them to their servers, and reboot them.
Trond mentioned golden images and templates in his blog posts. I strongly agree with what he wrote, because this is something I see quite often: you deploy a server from a template, and the newly deployed server has to install 172 updates, because the template was never updated since creation. But I also know companies that don't use templates or golden master images. They simply create a new VM, mount an ISO file, and install the server from scratch. And because it's urgent, the server is not patched when it goes into production.
Sorry, but that’s the truth about patch management: Either it is made irregular, made in too long intervals, or not made at all.
Change Management from hell
Frameworks such as ITIL also play their part in this tragedy. Applying change management processes to something like patch management prevents companies from responding quickly to threats. If your change management process prevents you from deploying critical security patches ASAP, you have a problem – a problem with your change management process.
If your change management process requires the deployment of patches in a test environment first, you should change your change management process. What is the bigger risk? Deploying a faulty patch, or being the victim of an upcoming ransomware attack?
Microsoft Windows Server Update Services (WSUS) offers a way to automatically approve patches. This is something you want! You want to automatically approve critical security patches. And you also want your servers to automatically install these updates and restart if necessary. If you can't restart servers automatically when required, you need short maintenance windows every week to reboot these servers. If this is not possible at all, you have a problem with your infrastructure design. And this does not only apply to Microsoft updates. This applies to ALL systems in your environment. VMware ESXi hosts with uptimes > 100 days are not a sign of stability. They're a sign of missing patches.
Validated environments are ransomware's best friends
This is another topic I meet regularly: validated environments. An environment that was installed with specific applications, in a specific setup. This setup was tested according to a checklist, and its function was documented. At the end of this process, you have a validated environment, and most vendors don't support changes to this environment without a new validation process. Sorry, but this is a pain in the rear! If you can't update such an environment, place it behind a firewall, disconnect it from your network, and prohibit the use of removable media such as USB sticks. Do not allow this environment to be Ground Zero for a ransomware attack.
I know many environments with Windows 2000, XP, 2003, or even older stuff that is used to run production facilities, test stands, or machinery. In some cases the software/hardware vendor no longer exists, which makes the application needed to keep the machinery running yet another security risk.
Patch quick, patch often
IT departments should install patches more often, and shortly after release. The risk of deploying a faulty patch is lower than the risk of being hit by a security attack, especially when we are talking about critical security patches.
IT departments should focus on the value that they deliver to the business. IT services that are down due to a security attack can't deliver any value. Security breaches in general are bad for reputation and revenue. If your customers and users complain about frequent maintenance windows due to critical security patches, you should improve your communication about why this is important.
Famous last words
I don’t install Microsoft patches instantly. Some years ago, Microsoft has published a patch that causes problems. Imagine, that a patch would cause our users can’t print?! That would be bad!
We don’t have time to install updates more often. We have to work off tickets.
We don’t have to automate our server deployment. We deploy only x servers a week/ month/ year.
This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
I don’t like to use untrusted networks. When I have to use such a network, e.g. an open WiFi network, I use a TLS encrypted tunnel connection to encrypt all web traffic that travels through the untrusted network. I’m using a simple stunnel/ Squid setup for this. My setup consists of three components:
Stunnel (server mode)
Squid proxy
Stunnel (client mode)
What is stunnel?
Stunnel is an OSS project that uses OpenSSL to encrypt traffic. The website describes Stunnel as follows:
Stunnel is a proxy designed to add TLS encryption functionality to existing clients and servers without any changes in the programs’ code. Its architecture is optimized for security, portability, and scalability (including load-balancing), making it suitable for large deployments.
How it works
The traffic flow looks like this:
Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0
The browser connects to the stunnel client on 127.0.0.1:8080. This is done by configuring 127.0.0.1:8080 as proxy server in the browser. The traffic enters the tunnel on the client side, and stunnel opens a connection to the server side. You can use any port, as long as it is unused on the server side; I use 443/tcp. The connection is encrypted using TLS and authenticated by a pre-shared key (PSK). On the server, the traffic leaves the tunnel, and the client's connection attempt is directed to the Squid proxy, which listens on 127.0.0.1:3128. Summarized: my browser connects to the Squid proxy on my FreeBSD host over a TLS-encrypted connection.
Installation and configuration on FreeBSD
Stunnel and Squid can be installed using pkg install.
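A minimal sketch of the install step, assuming the FreeBSD package names are simply stunnel and squid:

```sh
# install both packages from the FreeBSD package repository
pkg install stunnel squid
```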
The configuration files are located under /usr/local/etc/stunnel and /usr/local/etc/squid. After the installation of Stunnel, an additional directory for the PID file must be created. Stunnel does not run with root privileges, so it cannot create its PID file in /var/run.
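Something like the following should do; the stunnel user and group name is an assumption based on the package defaults:

```sh
# create a dedicated PID directory owned by the unprivileged stunnel user
mkdir /var/run/stunnel
chown stunnel:stunnel /var/run/stunnel
```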
The stunnel.conf is pretty simple. I’m using a Let’s Encrypt certificate on the server-side. If you like, you can create your own certificate using OpenSSL. But I prefer Let’s Encrypt.
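A sketch of the server-side stunnel.conf; the service name, certificate paths, and domain are placeholders, and note that with ciphers = PSK the certificate is only used if a client negotiates certificate-based TLS:

```ini
; /usr/local/etc/stunnel/stunnel.conf (server side)
pid = /var/run/stunnel/stunnel.pid

[tls-proxy]
; listen on 443/tcp, forward decrypted traffic to the local Squid
accept = 443
connect = 127.0.0.1:3128
; Let's Encrypt certificate (paths are placeholders)
cert = /usr/local/etc/letsencrypt/live/example.com/fullchain.pem
key = /usr/local/etc/letsencrypt/live/example.com/privkey.pem
; authenticate clients with a pre-shared key
ciphers = PSK
PSKsecrets = /usr/local/etc/stunnel/psk.txt
```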
The psk.txt contains the pre-shared key. The same file must be located on the client-side. The file itself is pretty simple – username:passphrase. Make sure that the PSK file is not group- and world-readable!
patrick:SuperSecretPassw0rd
The squid.conf is also pretty simple. Make sure that Squid only listens on localhost! I disabled the access log. I simply don’t need it, because I’m the only user, and I don’t have to rotate another logfile. Some ACLs of Squid are now implicitly active. There is no need to configure localhost or 127.0.0.1 as a source if you want to allow HTTP access only from localhost. Make sure that all requests are only allowed from localhost!
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl Safe_ports port 2222
acl Safe_ports port 8443
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
icp_access deny all
htcp_access deny all
http_port 127.0.0.1:8080
cache_mem 1024 MB
maximum_object_size_in_memory 8 MB
cache_dir ufs /var/squid/cache 1024 16 256 no-store
minimum_object_size 0 KB
maximum_object_size 8192 KB
cache_swap_low 95
cache_swap_high 98
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
# access_log /var/log/squid/access.log combined
access_log none
cache_log /dev/null
cache_store_log /dev/null
ftp_user joe.doe@gmail.com
htcp_port 0
coredump_dir /var/squid/cache
visible_hostname proxy
To enable Stunnel and Squid at boot, add the following lines to your /etc/rc.conf. The stunnel_pidfile option tells Stunnel where it should create its PID file.
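A sketch of the rc.conf entries, assuming the default rc variable names of the FreeBSD stunnel and squid packages:

```ini
# /etc/rc.conf
stunnel_enable="YES"
stunnel_pidfile="/var/run/stunnel/stunnel.pid"
squid_enable="YES"
```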
Make sure that you have initialized the Squid cache dir before you start Squid. Initialize the cache dir, and start Squid and Stunnel on the server-side.
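The steps above can be sketched as follows:

```sh
# create the cache directory structure configured via cache_dir
squid -z
# start both services through the rc framework
service squid start
service stunnel start
```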
Installation and configuration on Windows
On the client-side, you have to install Stunnel. You can find installer files for Windows on stunnel.org. The config of the client is pretty simple. The psk.txt contains the same username and passphrase as on the server-side. The file must be located in the same directory as the stunnel.conf on the client.
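A sketch of the client-side stunnel.conf; the service name and the server hostname are placeholders:

```ini
; stunnel.conf on the Windows client
client = yes

[tls-proxy]
; local endpoint the browser uses as its proxy
accept = 127.0.0.1:8080
; the public address of the server-side Stunnel (placeholder hostname)
connect = proxy.example.com:443
; same username:passphrase file as on the server
PSKsecrets = psk.txt
```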
Start Stunnel on your client and configure 127.0.0.1:8080 as proxy in your browser. If you access https://www.whatismyip.com, you should see the IP address of your server, not the IP address of your local internet connection.
You can check the encrypted connection with Wireshark on the client-side, or with tcpdump on the server-side.
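For example, on the server-side you could watch the tunnel traffic like this; the interface name is an assumption:

```sh
# show packets on the tunnel port - payloads should be TLS, not plain HTTP
tcpdump -ni em0 port 443
```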
Please note that the connection is only encrypted until it hits your server. Traffic that leaves your server, e.g. HTTP requests, is unencrypted. It is an encrypted connection to your proxy, not an encrypted end-to-end connection.