nginx + a backend with a dynamic IP (e.g. AWS ELB)

Recently, I wrote about the dynamic resolution of upstream servers in nginx, which was achieved by quite an intrusive patch to the core nginx module. The patch was created a while ago and worked very well up until recent nginx versions were released. With the release of nginx 1.10 it was noticed that the patch crashed some workers under heavy load, which was unacceptable in production, hence a new approach was implemented.

The beauty of the new solution is that it is non-intrusive and works with any service that communicates via sockets.
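To give an idea of what the proxying looks like, here is a minimal sketch (the unit names, port, and host name are made up for illustration): nginx talks to a local port, and a socket-activated systemd-socket-proxyd forwards each accepted connection to the load balancer by name, so the name is resolved afresh whenever the proxy (re)starts rather than being baked into the nginx config.

```ini
# elb-proxy.socket -- the local endpoint nginx will connect to
[Socket]
ListenStream=127.0.0.1:8080

[Install]
WantedBy=sockets.target

# elb-proxy.service -- forwards accepted connections to the ELB by name
[Unit]
Requires=elb-proxy.socket
After=elb-proxy.socket

[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd my-elb.example.com:80
```

With these units in place, nginx simply uses `proxy_pass http://127.0.0.1:8080;` and never needs to know the load balancer's current address.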


Dynamic resolution of upstream servers in nginx

UPDATE: This approach was superseded by the proxying through systemd-socket-proxyd approach.

Many of my clients are running application stacks consisting of nginx plus some kind of scripting engine behind it (be it PHP, Ruby, or something else). The architecture I designed for this kind of workload involves at least two load balancers: the external, frontend load balancer that serves the web requests from visitors and the internal, backend load balancer that distributes load between the backends.

Everything looks great when you implement this using "in-house" infrastructure where you control most of the networking aspects. However, the tendency is that most enterprises are moving to cloud providers, and with that we lose some control. Specifically, cloud providers often define their load balancers as auto-scaling entities that change their IP addresses depending on scale-in/out activity.

Unfortunately, the community version of nginx does not know how to dynamically resolve the specified upstream servers (such functionality is available only with the commercial nginx subscription), so I spent a couple of evenings implementing the desired functionality as a patch. It implements dynamic DNS resolution of the specified upstream servers in an upstream-compatible way: we re-use the very same "resolve" keyword on the server line that the commercial version of nginx uses, ensuring that if you ever decide to switch to the commercial subscription you will not need to change your configs.
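The resulting syntax mirrors the commercial one; a sketch is below (the resolver address and host names are examples only, and the commercial edition may additionally require a shared memory `zone` in the upstream block):

```nginx
resolver 10.0.0.2 valid=30s;    # e.g. the VPC-provided DNS server

upstream backend {
    # re-resolved according to the DNS TTL / the resolver's valid= override
    server my-elb.example.com:80 resolve;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```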

The patch was originally created for nginx 0.8.6 and was used in production for the last couple of years. The work on the patch was sponsored by Openwall (Australia) and Data Solutions Group.

Enjoy! :)


Transparent SSH host-jumping (Expert)

A while ago, in the Transparent SSH host-jumping (Advanced) post, I described a technique that lets one jump quite effortlessly through a chain of intermediate hosts. However, there was a catch: the user names and ports across the whole chain had to be the same, and there was no easy way to change that. Given that I have recently paid quite a lot of attention to the ProxyCommand directive, I decided to look into implementing a helper script that allows one to tweak these parameters for hosts in the chain.
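To give an idea of the kind of parsing such a helper needs to do, here is a minimal POSIX shell sketch that splits one hop specification into its parts (the `user@host:port` format is an assumption chosen for illustration, not necessarily the syntax the helper ends up using):

```shell
# Split a hop spec like "alice@jump.example.com:2222" into user/host/port
# using only POSIX parameter expansion (no external tools needed).
spec="alice@jump.example.com:2222"

user="${spec%%@*}"                    # everything before the first '@'
hostport="${spec#*@}"                 # everything after the first '@'
host="${hostport%%:*}"                # host part, before the ':'
port="${hostport##*:}"                # port part, after the ':'
[ "$port" = "$hostport" ] && port=22  # default port when none was given

printf '%s %s %s\n' "$user" "$host" "$port"
```

A helper invoked from ProxyCommand can apply this per hop and build the appropriate `ssh -l "$user" -p "$port"` invocation for each link of the chain.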


SSH: Interactive ProxyCommand

I was involved in the creation of the sshephalopod project, which is an attempt to build an enterprise-level authentication framework for SSH using the SSH CA feature. The project is based on a wrapper script that signs the user in via a SAML identity provider and gets the user's public key signed for further use. In one of the discussions I pointed out that such a wrapper script is not good for the end-user experience, and I proposed providing users with an excerpt for their ssh config file, so that the functionality of sshephalopod would be transparent to the general usage scenario of the ssh tool. The response was that ProxyCommand does not support interactivity. OK, as they say: challenge accepted :)
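The end-user experience I was after looks roughly like this hypothetical ssh_config excerpt (the script name and host pattern are made up). The key observation is that although a ProxyCommand's stdin/stdout carry the SSH byte stream, the command still has access to the controlling terminal via /dev/tty, so a helper can prompt the user interactively before opening the real connection:

```
# ~/.ssh/config excerpt (sketch; names are hypothetical)
Host *.corp.example.com
        ProxyCommand /usr/local/bin/sshephalopod-connect %h %p
```

Inside the helper, prompts are done with constructs like `read -r token < /dev/tty`, leaving the SSH stream on stdin/stdout untouched.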


Raspberry Pi 3 toolchain on CentOS 7

I had heard a lot about Raspberry Pi boards, but until now I had neither the need nor the time to work with one. However, I recently purchased a Dodge Journey R/T and found that although I love the car, I am so disappointed with its software and hardwired logic that I decided to experiment a bit and fix the most annoying things. Since almost everything inside the car talks over the CAN bus, I needed some kind of enclave inside the car where I could run my code and inject/intercept CAN messages. I looked around and found that I could build the desired appliance from a Raspberry Pi 3 (Model B) + PiCAN 2 HAT board.

Once the hardware was delivered, it was time to start building the software side of things. My distribution of choice for this project was CentOS 7 (userland); however, building stuff on the Raspberry Pi itself was a painful and long process, so I needed a proper cross-compilation toolchain to be able to utilise much more powerful hardware and do builds quicker.

The following is a session dump (with some notes) on how I built my toolchain on an AWS EC2 instance which was running a minimal CentOS 7 as its OS.

Building a firewall? Simple and easy!

I strive for simplicity, since I am a strong believer that achieving a goal with the simplest solution looks elegant, proves that you have deep knowledge of the subject, and is beautiful in itself. In addition, a simple solution is easier to comprehend and to audit, hence it is much easier to ensure its security. Over the last decade I have stumbled upon numerous complicated firewalls erected on NAT boxes, with tens of rules describing the traffic flows and holes punched for various edge cases. Every time, I wondered what kind of bug had bitten the person who composed such a convoluted ruleset, which is a nightmare to manage. In 99% of the cases I was able to come up with a ruleset of usually fewer than 20 rules for the whole firewall that achieved exactly the same result. So, in this article I will explain my approach to building firewalls that are easy to support and to understand.
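As an illustration of the scale I am talking about, a complete stateful host firewall can fit into a handful of rules. The following is a sketch in iptables-restore format (not a recommendation for any particular system; adjust the allowed services to taste):

```
# /etc/sysconfig/iptables (sketch): default-deny with a stateful core
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
COMMIT
```

The stateful rule does most of the work: everything that belongs to an already-accepted flow is let through, so only the handful of genuinely new connection types need explicit rules.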


Transparent SSH host-jumping (Advanced)

In this brief article I am going to describe how I resolved a nagging issue I had with setting up access to hosts which are not directly reachable, but where you need to forward your connection through an intermediate host.
Previously, I was using the local SSH port-forwarding technique (although I was configuring the hosts I connect to in the ~/.ssh/config file instead of using command-line options). However, this approach turned out to be quite inconvenient, since every time I wanted to connect to a new host (and, possibly, through a new intermediate host) I had to edit my SSH configuration file and add something like the following:
Host intermediate
        HostKeyAlias intermediate
        LocalForward 10001 target:22

Host target
        HostKeyAlias target
        Port 10001
The inconvenience came from two things:
  1. My ~/.ssh/config file was growing uncontrollably
  2. Each time I needed to connect to the target host through the intermediate host I had to open two sessions with one of them being idle most of the time
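One way to avoid both problems is to tunnel through the intermediate host on demand instead of keeping a dedicated forwarding port open. A sketch, using the same host names as above (ssh then opens the session through the intermediate host automatically, with no second idle session and no per-target port bookkeeping):

```
Host target
        ProxyCommand ssh intermediate -W %h:%p
```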


Should we use ‘sudo’ for day-to-day activities?

None of the systems I administer or supervise have ‘sudo’ installed, and every time I answer a question about how to do privileged work on these systems (i.e. tasks that require administrator privileges) with a proposal to SSH in under the privileged account directly, whoever asked the question starts to blabber about how insecure that is, that one should use ‘sudo’, and that nobody should ever log in directly as root.

I've spent quite some time explaining the misconceptions behind the so-called "secure way to access systems through sudo", so I decided to write up an article that describes the issues with that approach and why using ‘sudo’ is actually less secure than direct SSH access.


SSH port-forwarding (Intermediate)

In my previous blog entry I described some basic functionality of SSH in terms of port-forwarding. Now it's time for a little bit more complex stuff.

In this article I'll highlight:
  • (forward) piercing of a firewall (getting access to resources behind it);
  • dynamic port-forwarding (AKA proxy);
  • (reverse) piercing of a firewall (exposing your local services on the remote side).
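The three techniques above can be sketched as ~/.ssh/config entries (host names and ports are placeholders for illustration):

```
# Forward piercing: reach an internal web server through the gateway
Host pierce
        HostName gateway.example.com
        LocalForward 8080 intranet.example.com:80

# Dynamic port-forwarding: a local SOCKS proxy that egresses via the gateway
Host proxy
        HostName gateway.example.com
        DynamicForward 1080

# Reverse piercing: expose the local SSH service on a public host
Host expose
        HostName public.example.com
        RemoteForward 2222 localhost:22
```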


SSH port-forwarding (Basics)

I think all of you are using SSH in your daily routines. However, do you use its full potential? Today's topic is the SSH port-forwarding feature and how it can be used to achieve some interesting configurations.

I'm sure most of you are aware of the feature, but how many of you are using it? Personally, I'm a bit obsessed with it and have found numerous cases where this feature of SSH is a real life saver.


HOWTO: VMware Player as a remote console (VNC)

Goal: get a VNC client to access VMware VMs from a Linux-based PC

Since I'm doing a lot of remote systems administration due to the nature of my IT consulting work, and since I'm also running Linux on all my computers, I was looking for a native way to get a remote console to VMware VMs from Linux.

After some searching I found that VMware Player (which has native binaries for Linux) can be used as a VNC client to get to VM consoles. However, once I downloaded the VMware Player bundle and was faced with its requirement to run the installation script as root, I became quite unhappy with the idea of running proprietary software on my machine as root, especially after looking into the bundle and the way the installation script was written. Moreover, I had no need for the other parts of VMware Player -- I just wanted a small tool to hook up remote consoles under my lovely Linux environment. Therefore, I decided to take on the challenge and tweak the installation so that it would be possible to install the whole thing as a non-privileged user. Another sub-goal was to strip the installation further down and prepare a small package with only the components needed for remote console sessions.

If you are not concerned about the security (and integrity) of your system, e.g. you are fine with re-installing the whole system, then it will be cheaper to just install VMware Player under the root account. In that case you don't need to read any further, since what I'm describing below is for those brave hearts who value their systems and don't want to risk messing them up by running low-quality custom installation scripts as root.