Transparent SSH host-jumping (Expert)

A while ago, in the Transparent SSH host-jumping (Advanced) post, I described a technique for jumping quite effortlessly through a chain of intermediate hosts. However, there was a catch: the user names and ports had to be the same across the whole chain, and there was no easy way to change that. Since I recently paid quite a lot of attention to the ProxyCommand directive, I decided to look into implementing a helper script that allows one to tweak these parameters for individual hosts in the chain.


SSH: Interactive ProxyCommand

I was involved in the creation of the sshephalopod project, an attempt to build an enterprise-level authentication framework for SSH based on the SSH CA feature. The project is built around a wrapper script that signs a user in via a SAML identity provider and gets the user's public key signed for further use. In one of the discussions I pointed out that such a wrapper script is not good for the end-user experience, and I proposed providing users with an excerpt for their ssh config file instead, so that the functionality of sshephalopod would be transparent to the general usage scenario of the ssh tool. The response was that ProxyCommand does not support interactivity. OK, as they say: challenge accepted :)


Raspberry Pi 3 toolchain on CentOS 7

I had heard a lot about Raspberry Pi boards, but until now I had neither the need nor the time to work with one. However, I recently purchased a Dodge Journey R/T and found that although I love the car, I am so disappointed with its software and hardwired logic that I decided to experiment a bit and fix the most annoying things. Since almost everything inside the car talks over the CAN bus, I needed some kind of enclave inside the car where I could run my code and inject/intercept CAN messages. I looked around and found that I could build the desired appliance using a Raspberry Pi 3 (Model B) + PiCAN 2 HAT board.

Once the hardware was delivered, it was time to start building the software side of things. My distribution of choice for this project was CentOS 7 (userland); however, building stuff on the Raspberry Pi itself was a painful and long process, so I needed a proper toolchain to be able to utilise much more powerful hardware and do builds quicker.

The following is a session dump (with some notes) showing how I built my toolchain on an AWS EC2 instance running a minimal CentOS 7 as its OS.

Building a firewall? Simple and easy!

I strive for simplicity, since I am a strong believer that achieving a goal with the simplest solution looks elegant, proves that you have deep knowledge of the subject, and is beautiful in itself. In addition, a simple solution is easier to comprehend and to audit, hence it is much easier to ensure its security. Over the last decade I have stumbled upon numerous complicated firewalls erected on NAT boxes, with tens of rules describing the traffic flows and holes punched for edge cases. Every time, I wondered what kind of bug had bitten the person who composed such a convoluted ruleset, which is a nightmare to manage. In 99% of the cases I was able to come up with a ruleset of usually fewer than 20 rules for the whole firewall that achieved exactly the same result. So, in this article I will explain my approach to building firewalls that are easy to support and to understand.


Transparent SSH host-jumping (Advanced)

In this brief article I am going to describe how I resolved a nagging issue I had with setting up access to hosts that are not directly reachable, where you need to forward your connection through an intermediate host.
Previously, I was using the local SSH port-forwarding technique (although I was configuring the hosts I connect to in the ~/.ssh/config file instead of using command-line options). However, this approach turned out to be quite inconvenient, since every time I wanted to connect to a new host (and, possibly, through a new intermediate host) I had to edit my SSH configuration file and add something like the following:
Host intermediate
        HostKeyAlias intermediate
        LocalForward 10001 target:22

Host target
        HostKeyAlias target
        Port 10001
The inconvenience came from two things:
  1. My ~/.ssh/config file was growing uncontrollably
  2. Each time I needed to connect to the target host through the intermediate host I had to open two sessions with one of them being idle most of the time
After a while I stumbled upon an article describing quite a generic way to tunnel through an intermediate host and found the approach quite convenient for day-to-day use. So, I added the following block to my ~/.ssh/config file just before the "Host *" section:
Host */*
        ProxyCommand ssh $(dirname %h) -W $(basename %h):%p
From now on, I could connect to the target host via the intermediate one by simply executing the following command:
$ ssh user@intermediate/target
The configuration with the ProxyCommand directive spawned two ssh processes: one connected to the intermediate host running in the background, and the other, proxied through the intermediate host and connected to the target, running in the foreground, so from my point of view I had just one terminal session open. The configuration allowed chaining as many hosts as I wanted, e.g.:
$ ssh user@hostA/hostB/hostC/hostD
The above would result in three ssh processes running in the background (the first connected to hostA, the second connected to hostB and proxied through hostA, and the third connected to hostC and proxied through hostB) and one foreground process connected to hostD and proxied via hostC. This is great and quite flexible to use; however, this approach has a number of limitations:
  • you cannot specify different ports for different hosts in the chain
  • neither can you use different login names for different hosts in the chain
  • establishing a connection to a chain that shares a part with an already connected chain would not reuse the established connections, i.e. connection time stays slow
Personally, I use the same login name and the same ports on the hosts I access, so the first two items were not an issue for me, but the last one was irritating enough that I decided to figure out whether it was possible to optimise it. After a bit of reading the documentation and a few attempts, I came up with the following configuration block in my ~/.ssh/config file (remember, this block should be placed _before_ the "Host *" one):
Host */*
        ControlMaster auto
        ControlPath   ~/.ssh/.sessions/%r@%h:%p
        ProxyCommand /bin/sh -c 'mkdir -p -m700 ~/.ssh/.sessions/"%r@$(dirname %h)" && exec ssh -o "ControlMaster auto" -o "ControlPath   ~/.ssh/.sessions/%r@$(dirname %h):%p" -o "ControlPersist 120s" -l %r -p %p $(dirname %h) -W $(basename %h):%p'
Let's review it line by line, so the logic is clear:
Host */*
This host definition block catches any host specified on the ssh command line whose name matches the "*/*" pattern, so "ssh hostA/hostB/hostC" is matched with "hostA/hostB" as the part before the last "/" and "hostC" as the part after it. Due to the recursive call to ssh (see below), this block will be applied recursively to all hosts in the specified chain
ControlMaster auto
This directive instructs ssh to try to reuse an existing control channel to communicate with the remote host; if such a channel does not exist, it will be created, and further connections to the same remote host will benefit from the speedup provided by the already established connection
ControlPath ~/.ssh/.sessions/%r@%h:%p
This directive provides ssh with the location of the control channel socket file. The socket file should be unique for each remote host, and since we are reusing the existing connection and skipping authentication, the socket file should be tagged with the corresponding login name; this is why we are using %r (the remote login name), %h (the remote host name), and %p (the remote port) as part of the file name. Please note that due to our usage of "/" as the host separator in the chain, the path constructed here will have a subdirectory in the middle of the %h expansion. ssh will not automatically create that subdirectory, so it is something we need to address (see below)
ProxyCommand …
This is the heart of the whole block. I am starting this proxy command with /bin/sh -c '…' since ssh executes the specified command as-is, which makes it impossible to conditionally chain commands; therefore I am using the shell binary as the proxy command to get the ability to script my logic. Then I am creating the required directory structure for the control channels under ~/.ssh/.sessions (note the -p argument to mkdir: it creates all the missing parts of the specified tree, but also silences mkdir in case all of the directories already exist). It is worth mentioning that with this mkdir command I am creating the subdirectory for the ControlPath defined in the enclosing "Host */*" block.
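To see why the -p flag matters, the directory-preparation step can be reproduced on its own. This is a minimal sketch using a temporary directory and hypothetical host names as stand-ins for ~/.ssh/.sessions and the real chain:

```shell
# Sketch of the ProxyCommand's directory-preparation step; the base
# directory and the "user@hostA/hostB" path are hypothetical stand-ins.
base=$(mktemp -d)
mkdir -p -m700 "$base/user@hostA/hostB"   # creates all missing parents
mkdir -p -m700 "$base/user@hostA/hostB"   # second run: no error, no output
ls -ld "$base/user@hostA/hostB"
```

Note that -m700 applies only to the final directory; the intermediate parents are created with the default mode.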
The second part of the command line conditionally executes ssh if mkdir did not report any issues. It is good to exec ssh here, since we do not need a redundant shell hanging around in the process tree. In this recursive ssh call we explicitly specify that we also need multiplexing of the control channels created by the parent connections (they are "parent" since these are the connections that are established first and that enable access to the hosts further down the specified chain), and we explicitly specify the location of the control channel (note that since it is a parent connection, we are stripping the last host name from the %h macro using dirname). Finally, the third explicitly specified directive is ControlPersist, which is set to 120s. This directive instructs ssh to stay in the background and maintain the control channel in case we decide to reuse it; if no activity on the control channel is detected for 2 minutes, the ssh process terminates. Without this directive, the moment you closed the master connection all dependent connections would also be closed, e.g. if you have two sessions, one to hostA/hostB and the other to hostA/hostC, the moment you closed the first one the second one would be immediately terminated if you did not have ControlPersist configured. The rest of the ssh arguments are obvious: we connect to the remainder of the chain, i.e. everything before the last "/" (we extract that part with dirname %h), and we proxy stdin/stdout to the last host in the supplied chain with the -W option
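To make the dense one-liner easier to follow, here is a small simulation of how the %r/%h/%p tokens and the dirname/basename calls expand for a hypothetical "ssh user@hostA/hostB/hostC" invocation. It only prints the command that would run; it does not connect anywhere:

```shell
# Hypothetical token values: %r=user, %h=hostA/hostB/hostC, %p=22
r=user; h=hostA/hostB/hostC; p=22
dir=$(dirname "$h")      # hostA/hostB -- the chain minus the last hop
last=$(basename "$h")    # hostC       -- the final destination
echo "mkdir -p -m700 ~/.ssh/.sessions/$r@$dir"
echo "exec ssh -o 'ControlMaster auto'" \
     "-o 'ControlPath ~/.ssh/.sessions/$r@$dir:$p'" \
     "-o 'ControlPersist 120s' -l $r -p $p $dir -W $last:$p"
```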
Basically, the control flow when you do "ssh user@hostA/hostB/hostC" is the following:
  1. ssh matches the */* pattern against the provided host name (hostA/hostB/hostC)
  2. ssh tries to reuse the control channel by attempting to open the ~/.ssh/.sessions/user@hostA/hostB/hostC:22 socket, if successful the connection is established and the command prompt is displayed to the calling user, otherwise the execution continues
  3. ssh executes the defined ProxyCommand command
  4. the first part of the command creates ~/.ssh/.sessions/user@hostA/hostB if it is not there
  5. the second part executes 'ssh … -o "ControlPath ~/.ssh/.sessions/user@hostA/hostB:22" … hostA/hostB -W hostC:22' (this initiates another round of the above steps with a shorter chain; the recursion continues until just a single host is left, i.e. when we ascend to hostA as the host to connect to)
  6. now, with stdin/stdout connected to port 22 on hostC (in the last iteration), ssh performs authentication against hostC
  7. if authentication is successful ssh creates the ~/.ssh/.sessions/user@hostA/hostB/hostC:22 control channel socket and becomes the master of that control channel
  8. a command prompt is displayed to the calling user
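The recursion in steps 3-5 can be sketched with plain shell parameter expansion: each iteration peels the last hop off the chain until a single host remains (the host names are, again, hypothetical):

```shell
chain=hostA/hostB/hostC
while [ "$chain" != "${chain%/*}" ]; do   # while the chain still has a "/"
    echo "hop: connect to $(basename "$chain") via ${chain%/*}"
    chain=${chain%/*}                     # drop the last hop
done
echo "base: connect to $chain directly"
```

Running this prints one "hop" line per intermediate connection and finishes with the direct connection to hostA, mirroring the chain of ssh processes described above.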
I hope this little trick will save you some time and will make your life easier. :)


Should we use ‘sudo’ for day-to-day activities?

None of the systems I administer or supervise have ‘sudo’ installed, and every time I answer a question on how to do privileged work on these systems (i.e. tasks that require administrator privileges) with a proposal to SSH directly under the privileged account, whoever asked the question starts to blabber about how insecure that is, that one should use ‘sudo’, and that nobody should ever log in directly as root.

I've spent quite some time explaining the misconception behind the so-called "secure way to access systems through sudo", so I decided to write up an article that describes the issues with that approach and why using ‘sudo’ is actually less secure than direct SSH access.


SSH port-forwarding (Intermediate)

In my previous blog entry I described some basic functionality of SSH in terms of port-forwarding. Now it's time for a little bit more complex stuff.

In this article I'll highlight:
  • (forward) piercing of a firewall (getting access to resources behind it);
  • dynamic port-forwarding (AKA proxy);
  • (reverse) piercing of a firewall (exposing your local services on the remote side).


SSH port-forwarding (Basics)

I think all of you use SSH in your daily routines. However, do you use its full potential? Today's topic is the SSH port-forwarding feature and how it can be used to achieve some interesting configurations.

I'm sure most of you are aware of the feature, but how many of you are using it? Personally, I'm a bit obsessed with it and have found numerous cases where this feature of SSH is a real life saver.


HOWTO: VMware Player as a remote console (VNC)

Goal: get a VNC client to access VMware VMs from a Linux-based PC

Since I'm doing a lot of remote systems administration due to the nature of my IT consulting work, and since I'm also running Linux on all my computers, I was looking for a native way to get a remote console to VMware VMs from Linux.

After some searching I found that VMware Player (which has native binaries for Linux) can be used as a VNC client to get to VM consoles. However, once I had downloaded the VMware Player bundle and was faced with its requirement to run the installation script as root, I became quite unhappy with the idea of running some proprietary software on my machine as root, especially after looking into the bundle and the way the installation script was written. Moreover, there was no need for the other parts of VMware Player -- I just wanted a small tool to be able to hook the remote consoles up under my lovely Linux environment. Therefore, I decided to take on the challenge and tweak the installation so it would be possible to install the whole thing as a non-privileged user. Another sub-goal was to strip the installation further down and prepare a small package with only the components needed for remote console sessions.

If you are not concerned about the security (and integrity) of your system, e.g. you are fine with re-installing the whole system, then it will be cheaper to just install VMware Player under the root account. In this case you don't need to read any further, since what I'm describing below is for those brave hearts who value their systems and who don't want to risk messing them up by running low-quality custom installation scripts as root.