
How to install Visual Studio Code on Linux Mint 18

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.

Yesterday, I wrote about the installation of PowerShell Core on Linux Mint 18. Today, I want to show you how to install Visual Studio Code on Linux Mint 18. The installation is really easy:

  1. Download the deb package
  2. Install the deb package
  3. Run Visual Studio Code

You can download the latest packages for Windows, Linux (deb and rpm, and even a tarball if you want), and Mac from the Visual Studio Code download page. Download the deb file. To install the package, open a terminal window and run dpkg.

patrick@nb-patrick ~/Downloads
 % sudo dpkg -i code_1.17.1-1507645403_amd64.deb 
[sudo] password for patrick: 
Selecting previously unselected package code.
(Reading database ... 236413 files and directories currently installed.)
Preparing to unpack code_1.17.1-1507645403_amd64.deb ...
Unpacking code (1.17.1-1507645403) ...
Setting up code (1.17.1-1507645403) ...
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ...
Processing triggers for desktop-file-utils (0.22+linuxmint1) ...
Processing triggers for mime-support (3.59ubuntu1) ...
patrick@nb-patrick ~/Downloads
 %
Visual Studio Code on Linux

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

sudo might ask you for a password. That’s it! Now you can simply start VS Code. After you have installed your favorite extensions, VS Code is ready to code.
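If you prefer the terminal, extensions can be installed from there as well. This is just a sketch: the extension IDs are examples, and the guard skips the call on machines where the code binary is not on the PATH.

```shell
#!/bin/sh
# Install a list of VS Code extensions from the command line.
# The extension IDs used below are examples, not recommendations.
install_vscode_extensions() {
    for ext in "$@"; do
        code --install-extension "$ext"
    done
}

# Guard: only run where the code binary is actually on the PATH.
if command -v code >/dev/null 2>&1; then
    install_vscode_extensions ms-vscode.PowerShell ms-python.python
fi
```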

How to install PowerShell Core on Linux Mint 18

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.

Beside my Lenovo X250, which is my primary working machine, I’m using a HP ProBook 6450b. This was my primary working machine from 2010 until 2013. With a 128 GB SSD, 8 GB RAM and the Intel i5 M 450 CPU, it is still a pretty usable machine. I used it mainly during projects, when I needed a second laptop (or the PC Express card with the serial port…). It was running Windows 10, until I decided to try Linux Mint. I used Linux as my primary desktop OS more than a decade ago. It was quite productive, but especially with laptops, there were many things that did not work out of the box.

Because I use PowerShell quite often, and PowerShell is available for Windows, macOS, and Linux, the installation of PowerShell on this Linux laptop is a must.

How to install PowerShell?

Linux Mint is based on Ubuntu, and I’m currently using Linux Mint 18.2. Microsoft offers different pre-compiled packages on the PowerShell GitHub repo. For Linux Mint 18, you have to download the Ubuntu 16.04 package. For Linux Mint 17, you will need the 14.04 package. Because you need the shell to install the packages, you can download the deb package from the shell as well. I used wget to download the deb package.
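If you script the download, the Mint-to-Ubuntu mapping from the paragraph above can be put into a tiny helper. A sketch covering only the releases mentioned here:

```shell
#!/bin/sh
# Map a Linux Mint release to the Ubuntu base it is built on, so you
# know which PowerShell package to download. Only the releases
# mentioned in this posting are covered.
ubuntu_base_for_mint() {
    case "$1" in
        18*) echo "16.04" ;;   # Linux Mint 18.x -> Ubuntu 16.04 package
        17*) echo "14.04" ;;   # Linux Mint 17.x -> Ubuntu 14.04 package
        *)   echo "unknown" ;;
    esac
}

ubuntu_base_for_mint 18.2    # prints 16.04
```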

patrick@nb-patrick ~/Downloads
 % wget https://github.com/PowerShell/PowerShell/releases/download/v6.0.0-beta.8/powershell_6.0.0-beta.8-1.ubuntu.16.04_amd64.deb

The next step is to install the deb package, and to fix broken dependencies. Make sure that you run dpkg with sudo.

patrick@nb-patrick ~/Downloads
 % sudo dpkg -i powershell_6.0.0-beta.8-1.ubuntu.16.04_amd64.deb 
Selecting previously unselected package powershell.
(Reading database ... 235671 files and directories currently installed.)
Preparing to unpack powershell_6.0.0-beta.8-1.ubuntu.16.04_amd64.deb ...
Unpacking powershell (6.0.0-beta.8-1.ubuntu.16.04) ...
dpkg: dependency problems prevent configuration of powershell:
 powershell depends on liblttng-ust0; however:
  Package liblttng-ust0 is not installed.

dpkg: error processing package powershell (--install):
 dependency problems - leaving unconfigured
Processing triggers for man-db (2.7.5-1) ...
Errors were encountered while processing:
 powershell

Looks like it failed because of broken dependencies. But this can easily be fixed: run apt-get -f install. Make sure that you run it with sudo!

patrick@nb-patrick ~/Downloads
 % sudo apt-get -f install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following additional packages will be installed:
  liblttng-ust-ctl2 liblttng-ust0 liburcu4
The following NEW packages will be installed:
  liblttng-ust-ctl2 liblttng-ust0 liburcu4
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 247 kB of archives.
After this operation, 1.127 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirror.netcologne.de/ubuntu xenial/universe amd64 liburcu4 amd64 0.9.1-3 [47,3 kB]
Get:2 http://mirror.netcologne.de/ubuntu xenial/universe amd64 liblttng-ust-ctl2 amd64 2.7.1-1 [72,2 kB]
Get:3 http://mirror.netcologne.de/ubuntu xenial/universe amd64 liblttng-ust0 amd64 2.7.1-1 [127 kB]
Fetched 247 kB in 0s (841 kB/s)        
Selecting previously unselected package liburcu4:amd64.
(Reading database ... 236372 files and directories currently installed.)
Preparing to unpack .../liburcu4_0.9.1-3_amd64.deb ...
Unpacking liburcu4:amd64 (0.9.1-3) ...
Selecting previously unselected package liblttng-ust-ctl2:amd64.
Preparing to unpack .../liblttng-ust-ctl2_2.7.1-1_amd64.deb ...
Unpacking liblttng-ust-ctl2:amd64 (2.7.1-1) ...
Selecting previously unselected package liblttng-ust0:amd64.
Preparing to unpack .../liblttng-ust0_2.7.1-1_amd64.deb ...
Unpacking liblttng-ust0:amd64 (2.7.1-1) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Setting up liburcu4:amd64 (0.9.1-3) ...
Setting up liblttng-ust-ctl2:amd64 (2.7.1-1) ...
Setting up liblttng-ust0:amd64 (2.7.1-1) ...
Setting up powershell (6.0.0-beta.8-1.ubuntu.16.04) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...

That’s it! PowerShell is now installed.
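The two steps above (dpkg, then apt-get -f install) can be combined into a small helper, so a dependency failure repairs itself. A sketch, not a polished tool:

```shell
#!/bin/sh
# Install a deb package; if dpkg bails out because of missing
# dependencies, let apt-get pull them in and finish the configuration.
install_deb() {
    sudo dpkg -i "$1" || sudo apt-get -f install -y
}

# Example call (filename taken from the transcript above):
# install_deb powershell_6.0.0-beta.8-1.ubuntu.16.04_amd64.deb
```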

patrick@nb-patrick ~/Downloads
 % powershell
PowerShell v6.0.0-beta.8
Copyright (C) Microsoft Corporation. All rights reserved.

PS /home/patrick/Downloads>  Get-ChildItem /home/patrick                                                                                                                                              


    Directory: /home/patrick


Mode                LastWriteTime         Length Name                                                                                                                                                
----                -------------         ------ ----                                                                                                                                                
d-----         10/10/17  10:26 PM                Desktop                                                                                                                                             
d-----         10/14/17   8:45 AM                Documents                                                                                                                                           
d-----         10/14/17   8:41 AM                Downloads                                                                                                                                           
d-----         10/10/17  10:26 PM                Music                                                                                                                                               
d-----         10/14/17   8:37 AM                Pictures                                                                                                                                            
d-----         10/10/17  10:26 PM                Public                                                                                                                                              
d-----         10/10/17  10:26 PM                Templates                                                                                                                                           
d-----         10/10/17  10:26 PM                Videos                                                                                                                                              


PS /home/patrick/Downloads> exit                                                                                                                                                                      
patrick@nb-patrick ~/Downloads
 %

Yep, looks like a PowerShell prompt…on Linux. Thank you, Microsoft! :)

Simplemonitor – Python-based monitoring

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.

While searching for a simple monitoring solution for my root servers, I stumbled over a Python-based tool called Simplemonitor. Alternatives like Nagios, or forks like Icinga, were a bit too much for my needs.

What is SimpleMonitor?

SimpleMonitor is a Python script which monitors hosts and network connectivity. It is designed to be quick and easy to set up and lacks complex features that can make things like Nagios, OpenNMS and Zenoss overkill for a small business or home network. Remote monitor instances can send their results back to a central location.

My requirements were simple:

  • Ping monitoring
  • TCP monitoring
  • HTTP monitoring
  • Service monitoring
  • Disk space monitoring

Monitoring is nothing without alerting, so I was pretty happy that Simplemonitor is able to send messages to a Slack channel! It can also send e-mails or SMS, or write to a log file. To get a full feature overview, visit the Simplemonitor website.

The project is hosted on GitHub. If you are familiar with Python, you can contribute to the project, or add features as you need them.

Installation & configuration

The installation is pretty simple: Just fetch the ZIP or the tarball from the project website, and extract it.

The configuration is split into two files:

  • monitor.ini
  • monitors.ini

The naming is a bit confusing: the monitor.ini contains the basic monitoring configuration, like the interval for the checks and the alerting and reporting settings, while the monitors.ini contains the configuration of the service checks. This confused me, so I renamed monitors.ini to services.ini.

[monitor]
interval=60
monitors=services.ini

The services.ini (monitors.ini) contains the service checks. This is a short example of a ping, a service check, a port check, and a disk space check.

[ping-host1]
type=host
host=host1.tld.de
tolerance=3

[svc-postfix-host1]
type=rc
runon=host1.tld.de
service=postfix

[port-postfix-host1]
type=tcp
host=host1.tld.de
port=25

[diskspace]
type=diskspace
partition=/
limit=4096M

The alerting is configured in the monitor.ini. I’m using only the Slack notification. All you need is a web hook and the corresponding web hook URL.

[slack]
type=slack
channel=#monitoring
limit=1
url=https://hooks.slack.com/services/afjnsdifnsdfnsdf

In case of a service failure or recovery, a notification is sent to the configured Slack channel.

To start Simplemonitor, just start the monitor.py. It expects the monitor.ini in the same directory.

root@host1 /opt/simplemonitor # python2 monitor.py -v
SimpleMonitor v1.7
--> Loading main config from monitor.ini
--> Loading monitor config from services.ini
Adding host monitor ping-host2
Adding rc monitor svc-postfix-host1
Adding rc monitor svc-nginx-host1
Adding rc monitor svc-mysql-host1
Adding rc monitor svc-fail2ban-host1
Adding rc monitor svc-postgrey-host1
Adding rc monitor svc-phpfpm-host1
Adding rc monitor svc-named-host1
Adding diskspace monitor diskspace
--> Loaded 9 monitors.

Adding logfile logger logfile
Adding slack alerter slack

--> Starting... (loop runs every 60s) Hit ^C to stop
php_fpm is running as pid 33937.
Passed: svc-phpfpm-host1
named is running as pid 566.
Passed: svc-named-host1
fail2ban is running as pid 41306.
Passed: svc-fail2ban-host1
Passed: diskspace
postgrey is running as pid 649.
Passed: svc-postgrey-host1
mysql is running as pid 23726.
Passed: svc-mysql-host1
Passed: ping-host2
postfix is running as pid 53332.
Passed: svc-postfix-host1
nginx is running as pid 52736.
Passed: svc-nginx-host1
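To start Simplemonitor automatically after a reboot, one option is a cron entry. A sketch; the log file path is an assumption:

```
# crontab entry (crontab -e); paths are assumptions
@reboot cd /opt/simplemonitor && python2 monitor.py >> /var/log/simplemonitor.log 2>&1
```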

Summary

I really like the simplicity of Simplemonitor. Download, extract, configure, run, done. That’s what I was looking for. It is still under development, but you should not expect it to gain much complexity. Even if features are added, it should remain a simple monitoring tool.

Stunnel and Squid on FreeBSD 11

This posting is ~6 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.

I don’t like to use untrusted networks. When I have to use such a network, e.g. an open WiFi network, I use a TLS encrypted tunnel connection to encrypt all web traffic that travels through the untrusted network. I’m using a simple stunnel/ Squid setup for this. My setup consists of three components:

  • Stunnel (server mode)
  • Squid proxy
  • Stunnel (client mode)

What is stunnel?

Stunnel is an OSS project that uses OpenSSL to encrypt traffic. The website describes Stunnel as follows:

Stunnel is a proxy designed to add TLS encryption functionality to existing clients and servers without any changes in the programs’ code. Its architecture is optimized for security, portability, and scalability (including load-balancing), making it suitable for large deployments.

How it works

The traffic flow looks like this:

Stunnel Secure Tunnel Connection Diagram

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The browser connects to the Stunnel client on 127.0.0.1:8080. This is done by configuring 127.0.0.1:8080 as the proxy server in the browser. The traffic enters the tunnel on the client-side, and Stunnel opens a connection to the server-side. You can use any port, as long as it is unused on the server-side. I use 443/tcp. The connection is encrypted using TLS and authenticated by a pre-shared key (PSK). On the server, the traffic leaves the tunnel, and the connection attempt of the client is directed to the Squid proxy, which listens on 127.0.0.1:3128 for connections. Summarized: my browser connects to the Squid proxy on my FreeBSD host over a TLS encrypted connection.

Installation and configuration on FreeBSD

Stunnel and Squid can be installed using pkg install.

root@server:~ # pkg search squid-3.5
squid-3.5.24_2                 HTTP Caching Proxy
root@server:~ # pkg search stunnel
stunnel-5.41,1                 SSL encryption wrapper for standard network daemons

The configuration files are located under /usr/local/etc/stunnel and /usr/local/etc/squid. After the installation of stunnel, an additional directory for the PID file must be created. Stunnel is not running with root privileges, thus it can’t create its PID file in /var/run.

root@server:/var/run # mkdir /var/run/stunnel/
root@server:/var/run # chown stunnel:stunnel /var/run/stunnel

The stunnel.conf is pretty simple. I’m using a Let’s Encrypt certificate on the server-side. If you like, you can create your own certificate using OpenSSL. But I prefer Let’s Encrypt.

cert = /usr/local/etc/letsencrypt/live/server/fullchain.pem
key = /usr/local/etc/letsencrypt/live/server/privkey.pem
pid = /var/run/stunnel/stunnel.pid
setuid = stunnel
setgid = stunnel
sslVersion = TLSv1.2
debug = 3
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
verify = 2
compression = deflate

[Tunnel.Web]
accept = 46.x.x.x:443
connect = 127.0.0.1:8080
ciphers = PSK
PSKsecrets = /usr/local/etc/stunnel/psk.txt
CAFile = /usr/local/etc/letsencrypt/live/server/fullchain.pem

The psk.txt contains the pre-shared key. The same file must be located on the client-side. The file itself is pretty simple – username:passphrase. Make sure that the PSK file is not group- or world-readable!

patrick:SuperSecretPassw0rd
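Making the file non-group- and non-world-readable boils down to chmod 600. A sketch using the placeholder credentials from above, written to /tmp purely for illustration:

```shell
#!/bin/sh
# Create the PSK file with the placeholder credentials from above and
# make it readable by the owner only (no group or world access).
# Written to /tmp purely for illustration.
printf 'patrick:SuperSecretPassw0rd\n' > /tmp/psk.txt
chmod 600 /tmp/psk.txt

# Linux "stat -c %a" prints the octal mode; on FreeBSD use "stat -f %Lp".
stat -c '%a' /tmp/psk.txt
```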

The squid.conf is also pretty simple. Make sure that Squid only listens on localhost! I disabled the access log. I simply don’t need it, because I’m the only user, and I don’t have to rotate another logfile. Some ACLs of Squid are now implicitly active. There is no need to configure localhost or 127.0.0.1 as a source if you want to allow HTTP access only from localhost. Make sure that all requests are only allowed from localhost!

acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl Safe_ports port 2222
acl Safe_ports port 8443
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
icp_access deny all
htcp_access deny all
http_port 127.0.0.1:8080
cache_mem 1024 MB
maximum_object_size_in_memory 8 MB
cache_dir ufs /var/squid/cache 1024 16 256 no-store
minimum_object_size 0 KB
maximum_object_size 8192 KB
cache_swap_low 95
cache_swap_high 98
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
# access_log /var/log/squid/access.log combined
access_log none
cache_log /dev/null
cache_store_log /dev/null
ftp_user joe.doe@gmail.com
htcp_port 0
coredump_dir /var/squid/cache
visible_hostname proxy

To enable Stunnel and Squid at boot, add the following lines to your /etc/rc.conf. The stunnel_pidfile option tells Stunnel where it should create its PID file.

squid_enable="YES"
stunnel_enable="YES"
stunnel_pidfile="/var/run/stunnel/stunnel.pid"

Make sure that you have initialized the Squid cache dir before you start Squid. Initialize the cache dir, then start Squid and Stunnel on the server-side.
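Wrapped into a helper, the server-side bootstrap looks roughly like this on FreeBSD (squid -z builds the cache_dir structure; the function name is mine):

```shell
#!/bin/sh
# One-time cache initialization, then start both daemons (FreeBSD).
# The function name is mine; the three commands are the actual steps.
bootstrap_proxy() {
    squid -z              # build the cache_dir structure from squid.conf
    service squid start
    service stunnel start
}
```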

Installation and configuration on Windows

On the client-side, you have to install Stunnel. You can find installer files for Windows on stunnel.org. The config of the client is pretty simple. The psk.txt contains the same username and passphrase as on the server-side. The file must be located in the same directory as the stunnel.conf on the client.

socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
client = yes
sslVersion = TLSv1.2
debug=2

[Tunnel.Web]
accept = localhost:8080
connect = 46.x.x.x:443
PSKsecrets = psk.txt

Test your connection

Start Stunnel on your client and configure 127.0.0.1:8080 as proxy in your browser. If you access https://www.whatismyip.com, you should see the IP address of your server, not the IP address of your local internet connection.

You can check the encrypted connection with Wireshark on the client-side, or with tcpdump on the server-side.

Please note that the connection is only encrypted until it hits your server. Traffic that leaves your server, e.g. plain HTTP requests, is unencrypted. It is an encrypted connection to your proxy, not an encrypted end-to-end connection.

Using WP fail2ban with the CloudFlare API to protect your website

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.

The downside of using WordPress is that many people use it, which makes WordPress a perfect target for attacks. I have had some trouble with attacks, and one of the consequences is that my web server crashes under load. The easiest way to solve this issue is to ban the offending IP addresses. I already use Fail2ban to protect some other services, so the idea of using Fail2ban to ban IP addresses that are used for attacks was obvious.

From the Fail2ban wiki:

Fail2ban scans log files (e.g. /var/log/apache/error_log) and bans IPs that show the malicious signs — too many password failures, seeking for exploits, etc. Generally Fail2Ban is then used to update firewall rules to reject the IP addresses for a specified amount of time, although any arbitrary other action (e.g. sending an email) could also be configured. Out of the box Fail2Ban comes with filters for various services (apache, courier, ssh, etc).

That works very well for services like IMAP. Unfortunately, it does not work out of the box for WordPress. But adding the WordPress plugin WP fail2ban brings us closer to the solution. For performance and security reasons, vcloudnine.de can only be accessed through a content delivery network (CDN), in this case CloudFlare. Because CloudFlare acts as a reverse proxy, I can not see “the real” IP address. Furthermore, I can not log the IP addresses because of the German data protection law. This makes Fail2ban and the WP fail2ban plugin nearly useless, because all I would ban with iptables would be the CloudFlare CDN IP ranges. But CloudFlare offers a firewall service, and CloudFlare is the right place to block IP addresses.

So, how can I stick Fail2ban, the WP fail2ban plugin, and CloudFlare’s firewall service together?

APIs FTW!

APIs are the solution for nearly every problem. Like others, CloudFlare offers an API that can be used to automate tasks. In this case, I use the API to add entries to the CloudFlare firewall. Or honestly: someone wrote a Fail2ban action that does this for me.

First of all, you have to install the WP Fail2ban plugin. That is easy: simply install the plugin. Then copy the wordpress-hard.conf from the plugin directory to the filter.d directory of Fail2ban.

[root@webserver filters.d]# cp wordpress-hard.conf /etc/fail2ban/filter.d/

Then edit the /etc/fail2ban/jail.conf and add the necessary entries for WordPress.

[wordpress-hard]

enabled  = true
filter   = wordpress-hard
logpath  = /var/log/messages
action   = cloudflare
maxretry = 3
bantime  = 604800

Please note that in my case, the plugin logs to /var/log/messages. The action is “cloudflare”. To allow Fail2ban to work with the CloudFlare API, you need the CloudFlare API key. This key is unique for every CloudFlare account. You can get this key from your CloudFlare user profile. Go to the user settings and scroll down.

Cloudflare Global API Key

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Open the /etc/fail2ban/action.d/cloudflare.conf and scroll to the end of the file. Add the token and your CloudFlare login name (e-mail address) to the file.

# Default Cloudflare API token
cftoken = 1234567890abcdefghijklmopqrstuvwxyz99

cfuser = user@domain.tld
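Under the hood, the cloudflare action sends a curl request with exactly these two values plus the offending IP. The shape below is reconstructed from the error output further down; the variable names are mine:

```shell
#!/bin/sh
# Rough shape of the ban request the fail2ban "cloudflare" action sends.
# CF_TOKEN and CF_USER correspond to cftoken/cfuser in cloudflare.conf;
# both variable names are mine.
cf_ban_ip() {
    curl -s -o /dev/null https://www.cloudflare.com/api_json.html \
        -d 'a=ban' -d "tkn=${CF_TOKEN}" -d "email=${CF_USER}" -d "key=$1"
}
```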

The last step is to tell the WP Fail2ban plugin which IPs should be trusted. We have to add the subnets of the CloudFlare CDN. Edit your wp-config.php and add this line at the end:

/** CloudFlare IP Ranges */
define('WP_FAIL2BAN_PROXIES','103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,104.16.0.0/12,108.162.192.0/18,131.0.72.0/22,141.101.64.0/18,162.158.0.0/15,172.64.0.0/13,173.245.48.0/20,188.114.96.0/20,190.93.240.0/20,197.234.240.0/22,198.41.128.0/17,199.27.128.0/21,2400:cb00::/32,2405:8100::/32,2405:b500::/32,2606:4700::/32,2803:f800::/32,2c0f:f248::/32,2a06:98c0::/29');

The reason for this can be found in the FAQ of the WP Fail2ban plugin. The IP ranges used by CloudFlare can be found at CloudFlare.

Does it work?

Seems so… This is an example from /var/log/messages.

Jan 15 20:01:46 webserver wordpress(www.vcloudnine.de)[4312]: Authentication attempt for unknown user vcloudnine from 195.154.183.xxx
Jan 15 20:01:46 webserver fail2ban.filter[4393]: INFO [wordpress-hard] Found 195.154.183.xxx

And this is a screenshot from the CloudFlare firewall section.

Cloudflare Firewall Blocked Websites

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Another short test with curl also worked. I will monitor the firewall section of CloudFlare. Let’s see who’s added next…

An important note for those who use SELinux: make sure that you install the policycoreutils-python package, and create a custom policy for Fail2ban!

[root@webserver ~]# grep fail2ban /var/log/audit/audit.log | audit2allow -M myfail2banpolicy
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i myfail2banpolicy.pp

A strong indicator is errors like these in /var/log/messages:

Jan 22 12:06:03 webserver fail2ban.actions[16399]: NOTICE [wordpress-hard] Ban xx.xx.xx.xx
Jan 22 12:06:03 webserver fail2ban.action[16399]: ERROR curl -s -o /dev/null https://www.cloudflare.com/api_json.html -d 'a=ban' -d 'tkn=7c8e62809d4183931347772b366e621003c63' -d 'email=patrick@blazilla.de' -d 'key=xx.xx.xx.xx' -- stdout: ''
Jan 22 12:06:03 webserver fail2ban.action[16399]: ERROR curl -s -o /dev/null https://www.cloudflare.com/api_json.html -d 'a=ban' -d 'tkn=7c8e62809d4183931347772b366e621003c63' -d 'email=patrick@blazilla.de' -d 'key=xx.xx.xx.xx' -- stderr: ''
Jan 22 12:06:03 webserver fail2ban.action[16399]: ERROR curl -s -o /dev/null https://www.cloudflare.com/api_json.html -d 'a=ban' -d 'tkn=7c8e62809d4183931347772b366e621003c63' -d 'email=patrick@blazilla.de' -d 'key=xx.xx.xx.xx' -- returned 7
Jan 22 12:06:03 webserver fail2ban.actions[16399]: ERROR Failed to execute ban jail 'wordpress-hard' action 'cloudflare' info 'CallingMap({'ipjailmatches': <function <lambda> at 0x7f49967edc80>, 'matches': '', 'ip': 'xx.xx.xx.xx', 'ipmatches': <function <lambda> at 0x7f49967edde8>, 'ipfailures': <function <lambda> at 0x7f49967edc08>, 'time': 1485083163.0328701, 'failures': 2, 'ipjailfailures': <function <lambda> at 0x7f49967eded8>})': Error banning xx.xx.xx.xx

You will find corresponding audit messages in /var/log/audit/audit.log:

type=AVC msg=audit(1485083254.298:17688): avc:  denied  { name_connect } for  pid=16575 comm="curl" dest=443 scontext=unconfined_u:system_r:fail2ban_t:s0 tcontext=system_u:object_r:http_port_t:s0 tclass=tcp_socket

Make sure that you create a custom policy for Fail2Ban, and that you load the policy.

The Linux OOM killer strikes again

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.

As a frequent reader of my blog, you might have noticed that vcloudnine.de was unavailable from time to time. The reason was that my server was running out of memory at night.

Jan  1 05:22:16 webserver kernel: : httpd invoked oom-killer: gfp_mask=0x200da, order=0, oom_adj=0, oom_score_adj=0

Running out of memory is bad for system uptime. Sometimes you have to sacrifice someone to help others.

It is the job of the linux ‘oom killer’ to sacrifice one or more processes in order to free up memory for the system when all else fails.

Source: OOM Killer – linux-mm.org

The OOM killer selects the process that frees up the most memory and that is the least important to the system. Unfortunately, in my case that is Apache or MySQL. On the other hand, killing these processes has never brought the system back to life. But that is another story. Something consumed so much memory at night that the OOM killer had to start its deadly work.

Checking the logs

The OOM killer started its work at ~5 am, and it killed httpd (Apache).

Jan  1 05:22:16 webserver kernel: : httpd invoked oom-killer: gfp_mask=0x200da, order=0, oom_adj=0, oom_score_adj=0

While checking the Apache error_log, I noticed this log entry.

[Sun Jan 01 03:51:04 2017] [notice] SIGHUP received.  Attempting to restart

The next stop was the Apache access_log. At the same time as the entry in the error_log, Apache logged a POST request to wp-login.php in the access_log.

[01/Jan/2017:03:51:03 +0100] "POST /wp-login.php HTTP/1.1" 200 4168

And there were a lot more attempts… I did a short check of older log files. It was not the first OOM killer event, and the log entries were a smoking gun, especially the POSTs to wp-login.php.

[root@webserver httpd]# zgrep 'POST /wp-login.php HTTP/1.1' access_log | wc -l
876
[root@webserver httpd]# zgrep 'POST /wp-login.php HTTP/1.1' access_log-20161218.gz | wc -l
14577
[root@webserver httpd]# zgrep 'POST /wp-login.php HTTP/1.1' access_log-20161225.gz | wc -l
12368
[root@webserver httpd]# zgrep 'POST /wp-login.php HTTP/1.1' access_log-20170101.gz | wc -l
12054
[root@webserver httpd]# zgrep 'POST /wp-login.php HTTP/1.1' access_log-20170108.gz | wc -l
6814

The number below each command is the number of POST requests logged in that access_log. The current access_log starts on Jan 08, 2017, and since then there have already been 876 POST requests to wp-login.php. Looks like a brute-force attack.

So there is nothing wrong with the server setup; it simply breaks down during a brute-force attack.
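To see when the brute-force attempts cluster, the log can be grouped by hour. A sketch with inlined sample lines standing in for the real access_log:

```shell
#!/bin/sh
# Group wp-login.php POSTs by hour to spot the attack window.
# The sample lines below stand in for real access_log entries.
cat > /tmp/sample_access_log <<'EOF'
[01/Jan/2017:03:51:03 +0100] "POST /wp-login.php HTTP/1.1" 200 4168
[01/Jan/2017:03:52:10 +0100] "POST /wp-login.php HTTP/1.1" 200 4168
[01/Jan/2017:04:12:44 +0100] "POST /wp-login.php HTTP/1.1" 200 4168
EOF

# Field 2 (split on ":") is the hour of day; count requests per hour.
grep 'POST /wp-login.php' /tmp/sample_access_log | cut -d: -f2 | sort | uniq -c
```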

Python 2.7 for CentOS 6

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.

By default, CentOS 6 comes with Python 2.6. This is a bit outdated, especially if you take into account that Python 2.7.11, the latest Python 2 release, was released in December 2015. If you are new to Python, you will usually start with Python 3. Currently, Python 3.5.1 is the latest Python 3 release. So, Python 2.6 is REALLY old.

Okay, I could use another distro. Ehm… no. CentOS is the open-source version of Red Hat Enterprise Linux (RHEL). It was, and is, designed to be similar to RHEL. CentOS runs only the most stable versions of packaged software, which greatly reduces the risk of crashes and errors. The downside is… Python 2.6. Or Apache 2.2. Or MySQL 5.1. Switching to CentOS 7 is difficult, because there is no in-place upgrade.

Python 2.7 for CentOS 6

In my case, I needed Python 2.7. Fortunately, this package is offered by the Software Collections (SCL) repository. You can install Python 2.7 with two commands.

yum install centos-release-SCL
yum install python27 python27-python-devel python27-python-setuptools python27-python-tools python27-python-virtualenv

After the successful installation of the packages, you can find the files under /opt/rh/python27. The next step is to create a python27.conf under /etc/ld.so.conf.d and run ldconfig afterwards.

[root@server ~]# echo "/opt/rh/python27/root/usr/lib64" > /etc/ld.so.conf.d/python27.conf
[root@server ~]# cat /etc/ld.so.conf.d/python27.conf
/opt/rh/python27/root/usr/lib64
[root@server ~]# ldconfig

The last step is to create a symlink for the Python 2.7 binary.

[root@server ~]# ln -s /opt/rh/python27/root/usr/bin/python2.7 /usr/bin/python2.7

If you want to use Let’s Encrypt with CentOS 6, make sure to use Python 2.7.

How to dramatically improve website load times

This posting is ~7 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.

Over the last weeks, I’ve tried to improve the performance of my blog. The site was very slow, and the page load times varied between 5 and 10 seconds. Much too long! I reduced time-consuming plugins, checked the size of pictures, checked CSS and HTML for misconfiguration/ slow code, and tuned the database. The page load times did not really improve.

Yesterday, I checked the httpd.conf on my webserver and found a little typo (an accidentally commented line). After a restart of the Apache webserver, the page load times improved dramatically (down to 2 to 3 seconds). What had happened?

HTTP keep-alive

HTTP keep-alive, sometimes also called “HTTP persistent connection”, was designed to transfer multiple HTTP requests and responses over a single TCP connection. This is much better than opening a new connection for every single request/ response pair. The benefits of HTTP keep-alive are:

  • lower CPU usage
  • lower memory usage
  • reduced latency due to reduced requests/ handshaking

These benefits are even more important if you use HTTPS connections (and vcloudnine.de is HTTPS-only…), because each new HTTPS connection needs much more CPU time and round-trips compared to an insecure HTTP connection. This little picture clarifies the differences.

HTTP_persistent_connection

Wikipedia/ wikipedia.org/ Public domain image resources

If you’re using Apache, you can enable HTTP keep-alive with a single line in the httpd.conf.

KeepAlive On

Further information can be found in the documentation of Apache (Apache webserver 2.2 and 2.4).
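If you want more control, the related directives can be tuned as well. A short sketch (the values here are illustrative examples, not recommendations from this post):

```apacheconf
# Allow persistent connections
KeepAlive On
# Requests served per connection before it is closed (0 = unlimited)
MaxKeepAliveRequests 100
# Seconds to wait for the next request on an idle connection
KeepAliveTimeout 5
```

A short KeepAliveTimeout keeps idle connections from tying up worker processes while still letting browsers reuse connections for page assets.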

Stunnel refuses to work after update

This posting is ~9 years old. You should keep this in mind. IT is a short living business. This information might be outdated.

Yesterday I updated a CentOS 6.6 VM with a simple yum update. A couple of packages were updated and, to be honest, I didn't check which ones. Today I noticed that an application that uses a secure tunnel to connect to another application didn't work anymore. While browsing through the log files, I found this message from Stunnel.

LOG3[1145:140388919940864]: SSL_accept: 14076129: error:14076129:SSL routines:SSL23_GET_CLIENT_HELLO:only tls allowed in fips mode

I raised the debug level and restarted Stunnel. Right after the restart, I found this in the logs.

LOG5[1385:140679985747904]: stunnel is in FIPS mode
LOG5[1385:140679985747904]: stunnel 4.29 on x86_64-redhat-linux-gnu with OpenSSL 1.0.1e-fips 11 Feb 2013

So Stunnel was working in FIPS mode. But what is FIPS, and why is Stunnel using it? I recommend reading the Wikipedia article about the Federal Information Processing Standards (FIPS). To be precise, Stunnel follows FIPS 140-2. My stunnel.conf is really simple, and nothing in it is, or might be, related to FIPS. A short search with man -K fips led me to the stunnel man page.

 fips = yes | no
           Enable or disable FIPS 140-2 mode.

           This option allows to disable entering FIPS mode if stunnel was compiled with FIPS 140-2 support.

           default: yes

This explains a lot: FIPS is enabled by default in this version, so it came enabled with the updated Stunnel package. With FIPS enabled, only TLS can be used. More interesting: FIPS is disabled by default beginning with version 5.0. But I'm running version 4.29. So I had two options to get rid of this error:

  • Disable FIPS
  • Enable TLS

To disable FIPS, you have to add the following line to the stunnel.conf on the server-side:

fips = off

You can keep FIPS enabled if you enforce the use of TLS. In my case, I added the following line on both the server- and the client-side:

sslVersion = TLSv1

After a restart of Stunnel on the server-side, the connection began to work again.
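Putting it together, a minimal server-side stunnel.conf might look like this. The service name, ports, and certificate path are assumptions for illustration, not taken from my actual config, and the two directives are alternatives, as described above:

```ini
; Global options
cert = /etc/stunnel/stunnel.pem
fips = off            ; option 1: disable FIPS 140-2 mode
sslVersion = TLSv1    ; option 2: enforce TLS (also works with FIPS enabled)

[myservice]
accept  = 443
connect = 127.0.0.1:8080
```

Either directive alone resolves the "only tls allowed in fips mode" error; using both is harmless but redundant.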

Load Balancing inbound SMTP connection with HAProxy

This posting is ~9 years old. You should keep this in mind. IT is a short living business. This information might be outdated.

In my last blog post, I highlighted how HAProxy can be used to distribute client connections to two or more servers with the Exchange 2013 CAS role. But there is another common use case for load balancers in an Exchange environment: SMTP. Let's take a look at this drawing:

mailflow

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The inbound SMTP connections are distributed to two Mail Transfer Agents (often a cluster of appliances, like Cisco IronPort or Symantec Messaging Gateway), and the MTAs forward the e-mails to the Exchange servers. Sometimes the e-mails are not forwarded directly to the Exchange servers, but to mail security appliances instead (like Zertificon Z1 SecureMail Gateway). After the e-mails have been processed by the mail security appliances, they are forwarded to the Exchange backend. Such setups are quite common. If a load balancer isn't used, the MX records often point to the public IP address of a specific MTA. In this case, two or more MX records have to be set to ensure that e-mails can be received even if an MTA fails.

A setup with a load balancer allows you to have a single MX record in your DNS, but two or more servers that can handle inbound SMTP connections. This makes maintenance easier and allows you to scale without having to fiddle with the DNS. It goes without saying that your load balancer should be highly available if you decide to build such a setup.

It’s not hard to persuade HAProxy to distribute inbound SMTP connections. All you have to do is to add this to your haproxy.conf. To get the full config, check my last blog post about HAProxy.

    mode tcp
    no option http-server-close
    balance roundrobin
    option smtpchk HELO mail.terlisten-consulting.de
    server mail1 192.168.200.107:25 send-proxy check
    server mail2 192.168.200.108:25 send-proxy check
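For reference, the fragment above belongs inside a listen (or frontend/backend) section. A self-contained sketch could look like this; the section name is an assumption, and the bind address is taken from the netstat output further below:

```haproxy
listen smtp-in
    bind 192.168.200.103:25
    mode tcp
    no option http-server-close
    balance roundrobin
    option smtpchk HELO mail.terlisten-consulting.de
    server mail1 192.168.200.107:25 send-proxy check
    server mail2 192.168.200.108:25 send-proxy check
```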

The “send-proxy” parameter ensures that the client's source IP address is forwarded to the servers behind the load balancer. This is important if you use greylisting or real-time blacklists on your MTA or mail server. When running Postfix 2.10 or later, please make sure that you add this line to your main.cf:

smtpd_upstream_proxy_protocol = haproxy

This option adds support for the PROXY protocol. Incoming requests are distributed to the servers behind the load balancer in an alternating fashion; the “balance roundrobin” parameter ensures this. Please make sure that the MTA running on your Linux host doesn't listen on the external IP. In my case, Postfix listens only on 127.0.0.1.

[root@haproxy haproxy]# netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
tcp        0      0 192.168.200.103:25      0.0.0.0:*               LISTEN      0          228433     22876/haproxy
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      0          15431      1309/master
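The Postfix side of this setup can be sketched in main.cf. The inet_interfaces value is an assumption matching the loopback-only listener shown in the netstat output, not a quote from my actual config:

```ini
# Listen on loopback only; HAProxy owns the external IP
inet_interfaces = loopback-only
# Accept the PROXY protocol header from HAProxy (Postfix >= 2.10)
smtpd_upstream_proxy_protocol = haproxy
```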

The statistics page can be used to verify the success of the configuration (click the picture to enlarge).

haproxy_smtp_roundrobin

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Alternatively, you can use Telnet to connect to the load balancer on port 25/tcp. As you can see in the screenshot, using the FQDN mailin.vcloudlab.local resulted in alternating connections to the backend servers.

haproxy_smtp_roundrobin_check_png

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0