UPDATE: This approach has been superseded by proxying through systemd-socket-proxyd.
Many of my clients are running application stacks consisting of nginx plus some kind of scripting engine behind it (be it PHP, Ruby, or something else).
The architecture I designed for this kind of workload involves at least two load balancers:
- an external, frontend load balancer that serves the web requests from visitors; and
- an internal, backend load balancer that distributes load between the backends.
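The two tiers above can be sketched as a minimal nginx configuration; the hostnames, ports, and upstream names below are illustrative, not taken from any real deployment:

```nginx
# Frontend load balancer: terminates visitor traffic and hands it
# to the internal balancer.
upstream internal_lb {
    server lb.internal.example.com:8080;  # hypothetical internal balancer
}

server {
    listen 80;
    location / {
        proxy_pass http://internal_lb;
    }
}
```

The internal balancer runs a similar config of its own, with an `upstream` block listing the actual application backends (the PHP or Ruby instances).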
Everything looks great when you implement this using “in-house” infrastructure where you control most of the networking aspects.
However, most enterprises are moving to cloud providers, and with that move we lose some of that control.
Specifically, cloud providers often implement their load balancers as auto-scaling entities whose IP addresses change with scale-in/out activity.
Unfortunately, the community version of nginx does not know how to dynamically resolve the specified upstream servers (this functionality is available only with the commercial nginx subscription), so I spent a couple of evenings implementing the desired functionality as a patch.
The patch implements dynamic DNS resolution of the specified upstream servers in an upstream-compatible way: it re-uses the very same `resolve` keyword on the `server` line that the commercial version of nginx does, ensuring that if you ever decide to switch to the commercial subscription, you will not need to change your configs.
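With the patch applied, an upstream pointing at a cloud load balancer can be declared the way the commercial documentation describes it; the resolver address and hostname below are placeholders:

```nginx
# The DNS server to query and how long to trust each answer.
resolver 10.0.0.2 valid=10s;

upstream backend {
    # "resolve" makes nginx re-resolve this hostname at runtime,
    # instead of only once at startup/reload, so the upstream follows
    # the cloud load balancer as its IP addresses change.
    server internal-lb.example.com:8080 resolve;
}
```

Note that in the commercial version the `resolve` parameter also requires a shared-memory `zone` in the upstream block; whether the patch needs one depends on its implementation, so check the patch notes before copying this verbatim.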