Using AWS, Docker, Nginx, and subdomain routing to host all of your node apps for free

Free tier baby!

Posted by Kory Porter on July 21, 2019 · 10 mins read


Using this combo of technologies you're able to host all of your Node apps for free, and in one place! Take this with a grain of salt, of course: if you have apps receiving thousands of requests and doing intensive processing on those requests, this probably isn't the right solution 😛.

Being a tinkerer myself, I start a lot of small projects, and I was finding it slightly annoying to go through the process of:

  • starting a new pm2 process for the app,
  • binding it to a free port on the machine,
  • editing the security group policy to allow that specific port to be publicly accessible,
  • remembering which port I started it on for future use 🤯.

I think you get the point by now.

Wouldn't it be nice if you could access your new super cool API that returns cat breeds at a friendly subdomain like cats.yourdomain.com, instead of a raw IP address and port?


What you’ll need

  • An AWS account with an active Linux EC2 instance; be mindful that the AWS free tier only offers 750 hours of t2.micro instances per month.
  • SSH access to that EC2 instance.
  • A registered domain
  • Access to said domain's DNS settings
  • Docker and Docker Compose installed on the EC2 instance (check out this article)

You're going to need to change a few DNS settings to point certain subdomains at the EC2 instance. I'd recommend setting up an Elastic IP address, which essentially means that public IP is yours and will not change while you have that Elastic IP assigned to that EC2 instance.
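If you prefer the command line, a rough sketch of the Elastic IP setup with the AWS CLI might look like the following. This assumes the CLI is installed and configured with credentials, and both IDs below are hypothetical placeholders for your own:

```shell
# Allocate a new Elastic IP in your VPC; note the AllocationId it returns
aws ec2 allocate-address --domain vpc

# Associate it with your instance (substitute your real instance and
# allocation IDs for these placeholders)
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0
```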

The how

Modify the security group in use by your EC2 instance to have TCP ports 22 and 80 publicly accessible (inbound).

Port 22 (SSH/SCP) will be used to access the machine and upload files remotely; port 80 is where Nginx listens for and routes incoming requests.

To modify these properties, navigate to the EC2 dashboard within the AWS console. Once you're on the landing page, you should see a link called "Instances" in the sidebar; navigate to that view. Now click on your EC2 instance, which should bring up a tab towards the bottom of the screen displaying an overview and description of your instance's settings. What we're interested in here is the security groups. If you've just created your instance, it's likely you'll see a security group assigned to it called "launch-wizard-x", or something very similar.

Clicking on the security group will navigate you to the security groups view. In the tab on the bottom of the screen, click on “Inbound” and then “Edit”.

Adding HTTP -> Click "Add Rule", the type should be "HTTP".

Adding SSH -> Click “Add Rule”, the type should be “SSH”.

You shouldn't need to modify the outbound rules, as by default all traffic is allowed out.
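For the terminal-inclined, the same two inbound rules can be sketched with the AWS CLI. The security group ID below is a placeholder; substitute the ID of your own launch-wizard-x group:

```shell
# Allow SSH (port 22) from anywhere
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0

# Allow HTTP (port 80) from anywhere
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```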

Great, now that we have this sorted we can move on and start configuring our proxy through Nginx.

Inbound EC2 rules
Outbound EC2 rules

Create a configuration file for the Nginx reverse proxy.

This file contains the nitty gritty Nginx configuration that enables us to do subdomain routing.

SSH into your EC2 instance and create a file called "nginx-proxy.conf":

touch nginx-proxy.conf

Now add the following to that file

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        server_name foo.*;

        location / {
            proxy_pass http://foo:5050;
        }
    }

    server {
        listen 8080;
        server_name bar.*;

        location / {
            proxy_pass http://bar:5050;
        }
    }

    server {
        listen 8080 default_server;
        return 444;
    }
}

Now let's break down that Nginx configuration file:

  • worker_processes - defines the number of worker processes, leave this at 1.
  • worker_connections - sets the maximum number of simultaneous connections that can be opened by a worker process, I think 1024 is ample!
  • server - this is what we're concerned with. We're going to use a server directive for each subdomain we wish to route against.

You might be wondering why we've set the proxy_pass directives to strange hostnames like foo or bar. These match the container names that we define in the next step!
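To give a feel for how this scales: if you later added a third service called, say, baz (a name made up for this sketch), routing it would be one more server block inside the same http block:

```nginx
server {
    listen 8080;
    server_name baz.*;

    location / {
        # "baz" must match the service name in docker-compose.yml
        proxy_pass http://baz:5050;
    }
}
```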

Create a docker compose file to document your apps and the Nginx reverse proxy.

Now the fun begins! We need to create a docker-compose file that defines the proxy and our apps.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.


Create a docker-compose file in the same directory you created the nginx-proxy.conf file.

touch docker-compose.yml

Our compose file will be quite simple, mainly because we aren't doing any crazy builds or anything too complex. This file will grow and change depending on your needs and the apps you're hosting!

version: "3"
services:
  proxy:
    image: nginx
    ports:
      - 80:8080
    volumes:
      - ./nginx-proxy.conf:/etc/nginx/nginx.conf
    restart: always
  foo:
    image: vad1mo/hello-world-rest
    ports:
      - 3000:5050
  bar:
    image: vad1mo/hello-world-rest
    ports:
      - 3001:5050

We've defined three services in our compose file, the most important being our proxy config. We map the machine's port 80 to the container's port 8080, as port 8080 is what we defined in our Nginx config file.

You'll also notice a volumes directive there; the long and short of it is that we want the file ./nginx-proxy.conf to be placed at /etc/nginx/nginx.conf inside the container, which is where Nginx looks for its configuration file by default.

Now’s a good time to test our configuration so far!

Run the following command to start your containers in the background

docker-compose up -d

Run the following to test that your bar service is up and running.

curl http://bar.localhost:80

Hopefully the response looks something like

/ - Hello World! Host:cfcdb08e751e/

Run the following to test that your foo service is up and running

curl http://foo.localhost:80

The response should look something like

/ - Hello World! Host:bb3c8de80266/

You'll know that you're talking to different servers, as the Host header in the curl responses will differ depending on whether you hit the foo or the bar service.

Now, if you're getting any errors, run docker-compose down to stop those containers, and then run docker-compose up. Note how we've forgone the -d this time: we don't want to daemonize the processes, we want stdout to appear in our terminals! Open up a new SSH connection to the machine and make a few requests; the logs might give you an indication of where you've gone wrong. This could be a rogue semicolon in your Nginx setup, a typo in one of your service names in the docker-compose file, or any of the plethora of things that can go wrong when you're working with services like these. Such is the life of a developer! 🤓
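A couple of commands I find handy when debugging a setup like this; they assume your compose services are named as in the compose file:

```shell
# Tail the logs of every service (Ctrl-C to stop)
docker-compose logs -f

# Or just the proxy's logs
docker-compose logs -f proxy

# Ask Nginx to validate its configuration inside the running container
docker-compose exec proxy nginx -t
```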

Wait, what, how does that work?!

If you're anything like me, you might be a little confused by all the magic docker-compose is doing with networking. By default, all containers started from a Compose configuration join a shared network and can reach each other through their service names. In our case, those are proxy, foo, and bar.

Another gotcha, and most likely a head scratcher: you'll notice that in the Nginx configuration file we specify port 5050 in both of our subdomain proxy_pass directives. That's because we're communicating directly with the container, and therefore we want to talk to the container port, not the host port we mapped it to.

It is important to note the distinction between HOST_PORT and CONTAINER_PORT. In the above example, for db, the HOST_PORT is 8001 and the container port is 5432 (postgres default). Networked service-to-service communication use the CONTAINER_PORT. When HOST_PORT is defined, the service is accessible outside the swarm as well.
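In our setup the distinction is easy to poke at from the EC2 instance itself: the published host ports reach each container directly, skipping the proxy entirely:

```shell
# Through the proxy: host port 80 -> proxy container (8080) -> foo container (5050)
curl http://foo.localhost:80

# Straight to the foo container via its published host port 3000,
# bypassing Nginx altogether
curl http://localhost:3000

# Likewise for bar on host port 3001
curl http://localhost:3001
```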


Create an A record in your DNS host.

Head over to your hosting provider and add an A record to your domain. It's up to you, but I have set up a wildcard A record that points all subdomain requests on my domain at my EC2 instance. It might look a little like this:
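In most DNS dashboards the record boils down to something like this (the IP is a documentation placeholder from the 203.0.113.0/24 range, not a real address):

```
Type    Name    Value           TTL
A       *       203.0.113.10    3600
```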

Wildcard DNS A record

Test It!

Navigating to foo.yourdomain.com and bar.yourdomain.com works, and notice the different Host headers in the responses! Navigating to any other subdomain hits the default_server block, which returns Nginx's non-standard 444 and closes the connection! 🥳

Wrap up

I've tried my diddliest to be verbose without being boring in this article. We've touched on quite a few different topics, and this article definitely requires a little bit of experience working with the aforementioned technologies!

Hopefully you can walk away from this article with a good enough explanation of how you can use Docker and Nginx to host all of your weird and wacky ideas!