We have been deploying node apps to VPSes and bare metal at Bocoup for almost 15 years, and with the growing number of full-service hosting platforms, we thought this would be a good time to document how to do it yourself in 2024 for those of you who would still like to. We think it’s a great way to save money and prioritize privacy.

This post offers detailed instructions for provisioning node on anything from a Raspberry Pi, to Ubuntu running on a computer on your desk or a VPS that you rent from a hosting service provider. We’ll step through locking down the server, setting up a reverse proxy with free SSL, installing node, deploying the software, and daemonizing it. We even have a bonus task scheduling section. If you’re more of a video watcher than a reader, be sure to check out the syntax.fm self-hosting 101 video series.

Who is this for?

For the last 8 years or so, we’ve been deploying node applications to AWS using terraform and ansible to provision EC2 instances. For the last year, we’ve been experimenting with serverless deployments on AWS using architect, and the slightly more manual, but way easier to understand, deployments to Vercel and Netlify.

We’re also actively experimenting with serverless edge deployments on Cloudflare and long-running edge deployments on Fly.io (which is offering SQLite replication!!). And shout out to viridio, a green energy host in Southern California we’ve been talking to about taking over more of our workloads.

At any rate, this is not a post about all those platforms, this is a post about doing it yourself.

This walk-through will talk about old school tried and true tools like ssh, systemd, rsync, cron, nginx, ufw, and more. We’ll show you how to do this by hand, and then we’ll provide some ansible playbooks to automate the process.

These days we’re deploying more and more applications built with Remix.run and SQLite in production. There’s a lot of guidance in the remix community for serverless hosting on platforms as a service, but not much out there about setting up your own server. This tutorial works great for remix.run applications of the remix indie-stack sort, but also works great for next.js, sveltekit, and anything else you can npm start.

Why run a Virtual Private Server yourself?

Reasons to do this

We’ve been hearing about Platform as a Service (PaaS) fatigue: services are becoming expensive and proprietary, and it’s hard to know what to pick. We also aren’t sure what PaaS providers and traditional cloud providers are doing with our monitoring data. Doing it yourself is a way to control the logs and, to a certain extent, increase privacy.

Reasons not to do this

Serverless: If you want to deploy as quickly and easily as possible, you don’t have a database, and price/vendor lock-in don’t matter to you, you should sign up for Netlify. If you do have a database, sign up for Vercel. Both have an amazing experience. If you want to spend less money, take a look at AWS with architect. The CloudFormation configuration that comes with architect makes for a good developer experience on top of AWS.

Edge: If you care about performance and the environment, and want to push your code to the edge, you should sign up for Cloudflare or Fly. They are both great, too! Though Cloudflare’s edge database experience is a little experimental. In either case, edge deployment has a learning curve, especially if you have a database.

What you’ll need

  1. A node app that starts with npm start and talks to SQLite, or a managed database
  2. A server with a public IP
  3. A domain pointed at that public IP

If you’d like to follow along with this walkthrough in real time, you’ll need a domain name pointed at a Virtual Private Server (VPS) so that the SSL signing process will work. We recommend registering a domain name with a service like namecheap, signing up for a VPS service like a digital ocean droplet, provisioning a new server with ~1gb of memory, and pointing the domain to the IP address of that new server with an A record.

You should do that now, before you continue reading, so that your new DNS record has the 10 or 20 minutes it needs to propagate far enough for SSL signing to work.
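If you’d like to check whether the record has propagated before moving on, you can query it from your local machine (assuming you have the dig utility installed; swap in your own domain):

dig +short example.com

When it returns the IP address of your server, you’re good to continue.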

1) Create a server

This can be anything from a Raspberry Pi, to a Digital Ocean droplet, to an AWS EC2 instance. This tutorial will work with any server you have root access to via a password or ssh key.

In any case, this tutorial will eventually require an ssh key, so let’s start there.

Create or cat your ssh key

If you already have an ssh key pair, you can print the public key with one of the following, depending on how long ago you first generated it:

cat ~/.ssh/id_rsa.pub

or

cat ~/.ssh/id_ed25519.pub

If you try both of those and get nothing, you probably don’t have one. That’s ok, you can generate one with:

ssh-keygen -t ed25519 -C "you@example.com"

Hit enter three times to save the key pair in the default location and not use a passphrase. And now you can print your public key with:

cat ~/.ssh/id_ed25519.pub

If none of this works, refer to this guide from GitHub.

Buy or build a server

Raspberry Pi: The newer Raspberry Pis can have 8gb of memory, which is overkill for an average node app. I’ve had a node app running on a raspi with this setup for close to a year with no issues. You can unplug it and plug it back in, and it boots the node app automatically.

If you have a Raspberry Pi with 1 GB of memory or more, you can flash it with Raspberry Pi OS (formerly Raspbian) using the Raspberry Pi Imager. Click the settings gear, add a user with root permissions, and save the password.

Digital Ocean Droplet: If you’re signing up for Digital Ocean to create a droplet, you can add your ssh key to your profile so that when you create a droplet, you can check a box to have the key added to the server automatically. Then you can shell into the server. We recommend starting with the following settings:

  • Region is your choice, pick one close to you!
  • OS: Ubuntu 23.10
  • Droplet Type: basic
  • CPU: 1 GB / 1 CPU
  • 25 GB SSD Disk
  • 1000 GB transfer
  • Authentication Method: SSH Key
  • Hostname: example.com (switch for your project name)

After you create a server, grab the IP address from the admin panel of the hosting provider you selected. You’ll need it for the next step.

2) Point a domain at your server

Register a domain with your preferred provider, and point it at your server. We use Namecheap because we like the price.

We point Namecheap to AWS’s DNS servers so that we can manage our DNS records using AWS’s Route53, which comes with more features & options than Namecheap offers (like health checking, routing policies and yes, being able to manage DNS with Terraform). But that’s a different blog post. You probably don’t want to do that if you’re reading this post.

You can use Namecheap to host your DNS. Add DNS hosting to your domain, and then create an A record with a value of the IP address of the server you created in the last step. Namecheap has a tutorial on that if you get stuck.

TLDR

At this point, if you prefer to skip to the end and use automation to do everything covered in this post, the last section #7, called “Ansible”, has you covered. That’s the version you should use in day-to-day work. Keep reading if you’d like to do it manually once, and learn a little linux systems administration.

3) Lock down the server

Now it’s time to log in and lock down our server. If you have a root password, you’ll need it here. If you have an SSH key on the server, you won’t. We’re going to do this part as the root user, but the default user might be something else depending on where you got your server; if it’s not root (for example, if it’s ubuntu), use that instead.

ssh root@example.com

It’s a security best practice to perform work as a new user with ssh key login, and never to use the root user. So we’re going to first add a new user, copy your ssh key to the server for that user, and disable root login.

Add a new user

Add a user named deploy:

adduser deploy

Fill in any password (we’re about to delete it in favor of ssh keys so you’ll never need it again) and then fill out your profile. You can hit enter to leave the profile blank. Then, delete the password:

passwd -d deploy

Add that user to the sudoers list by editing the following file:

sudo nano /etc/sudoers.d/deploy

And then add the line:

deploy  ALL=(ALL)  NOPASSWD: ALL

Then save the file with ctrl + o then enter, and close the file with ctrl + x.
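If you’d like to double check the file you just wrote, you can ask visudo to validate it, since a syntax error in a sudoers file can lock you out of sudo:

sudo visudo -cf /etc/sudoers.d/deploy

It will tell you whether the file parses cleanly.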

Then exit the server

exit

Copy your ssh key to that user

Now, back on your computer, copy your ssh key to the server for the new user you created:

cat ~/.ssh/id_ed25519.pub | ssh root@example.com "mkdir -p /home/deploy/.ssh && chown -R deploy:deploy /home/deploy/.ssh && touch /home/deploy/.ssh/authorized_keys && chmod -R go= /home/deploy/.ssh && cat >> /home/deploy/.ssh/authorized_keys"

Or if you generated your key with rsa:

cat ~/.ssh/id_rsa.pub | ssh root@example.com "mkdir -p /home/deploy/.ssh && chown -R deploy:deploy /home/deploy/.ssh && touch /home/deploy/.ssh/authorized_keys && chmod -R go= /home/deploy/.ssh && cat >> /home/deploy/.ssh/authorized_keys"

Now log back in as the new user:

ssh deploy@example.com

You won’t need your password here, since you’ve added your SSH key to the server.

Disable root login entirely

Disable root login by editing the following file:

sudo nano  /etc/ssh/sshd_config

And change this line:

PermitRootLogin yes

to:

PermitRootLogin no

(Commenting the line out isn’t enough, since the default still allows root login with an ssh key.)
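Before restarting, it’s worth checking that the config file still parses, since a bad sshd_config can lock you out of the server:

sudo sshd -t

If it prints nothing, the configuration is valid.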

Restart SSH

sudo systemctl restart ssh

4) Configure the server

Update and upgrade packages

Now let’s update the package index on your new server:

sudo apt update

Next let’s upgrade all the existing packages on the server:

sudo apt upgrade

Hit y when it asks you if you’d like to continue. This upgrade will usually take a while.

You are likely to get a notice that you should reboot. If you do, hit ok. You are then likely to have the reboot notice followed by a notice to restart services, with one already selected. You can tab to the cancel option and hit enter, since we’re about to reboot and all the services will be restarted.

After upgrading, if you didn’t get a reboot notice, check to see if any kernel upgrades occurred and reboot if so:

cat /var/run/reboot-required

If it says a reboot is required, then:

sudo reboot

This will terminate your shell connection.

Install and enable UFW firewall

Give it a couple of minutes and log back in:

ssh deploy@example.com

Now let’s install a firewall, which will block traffic to any services we haven’t explicitly allowed:

sudo apt install ufw

and let’s make sure to allow SSH through it:

sudo ufw allow OpenSSH

and enable the firewall:

sudo ufw enable

It will ask you if you are sure you want to enable the firewall. You can say yes, since we’ve allowed ssh traffic through, so our connection won’t be disrupted.
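You can confirm which rules are active at any point with:

sudo ufw status verbose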

Install nginx server, and enable in ufw firewall

Now let’s remove Apache, which comes installed by default on some distributions of Ubuntu, and which we want to make sure doesn’t start by accident:

sudo apt remove apache2 && sudo apt autoremove

and then install nginx, our preferred web server:

sudo apt install nginx

Hit yes when it asks you if you want to continue.

and enable nginx in our firewall so it can talk to the outside world:

sudo ufw allow 'Nginx Full' && sudo ufw status

Configure nginx server

Now let’s add an Nginx configuration file:

sudo nano /etc/nginx/sites-available/example.com

Then add the following configuration, switching out “example.com” for your project’s domain name, and the port number of the proxy pass (3000 in the example below) for the port your node application listens on. This configuration sets up nginx as a proxy server in front of your node application, sending any requests that come in over http on port 80 to the node app running on localhost:3000:

server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    access_log /var/log/nginx/example.com.log;
    error_log  /var/log/nginx/example.com-error.log error;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:3000;
        proxy_redirect off;
        client_max_body_size 10M;
    }
}

And then link that file from sites-available to sites-enabled:

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled
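You can then check that the new configuration parses and ask nginx to pick it up:

sudo nginx -t && sudo systemctl reload nginx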

Install certbot and set up ssl

Now let’s install Certbot (EFF’s free ssl tool), and request a certificate, replacing example.com with your project domain.

Your new DNS A record from the prior step should have propagated by now, but if it hasn’t, this step will fail. If it does, wait a few minutes and try again. Or if you prefer, you can skip SSL for now and use your server’s IP address directly; you’ll just have to swap your domain name for the IP address in the nginx file from the prior step (Let’s Encrypt won’t issue a certificate for a bare IP address).

sudo apt install python3-certbot-nginx

And hit yes when it asks you if you’d like to continue.

Once it’s installed, you can run the ssl challenge, swapping out example.com for your domain pointed at this server, and the example email for the address where you want the EFF to notify you if your SSL certificate is about to expire:

sudo certbot --nginx -d example.com --agree-tos -m you@example.com

This command creates an SSL certificate. Certbot issues short 90-day certificates, and sets up the system to auto renew them. If for any reason that configuration didn’t work, the EFF will email you with a notice to renew, and you can log back in and manually renew with certbot renew.

This command also expresses your agreement to the certbot Terms of Service, which you should read.

Lastly, this command edits the nginx configuration file we set up above and adds the correct routing to the SSL certificate. So after this, ssl will be installed on your server. We now need to restart nginx since certbot updated our nginx configuration:

sudo systemctl restart nginx
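If you’d like to confirm that automatic renewal is wired up, you can do a dry run:

sudo certbot renew --dry-run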

Install node

Now let’s install node. The version of node in the apt package manager is very old, and we may want to switch versions at some point, so we’re going to use the node version manager (nvm) to install it:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

This script downloads nvm to the deploy user’s home directory, and edits the deploy user’s bashrc to add the nvm command to it. Let’s also manually load nvm into our current session in case the bashrc hasn’t been reloaded:

source ~/.nvm/nvm.sh

If you want to install the latest version of node you can run

nvm install node

and then use that latest version with

nvm use node

Or if you prefer an older version, you can run

nvm install 18

And

nvm use 18

Now you should get a version when you ask node for its version:

node --version

You can switch back to latest with

nvm use node

Make sure you’re using the same version of node you are developing with on your local computer. Switching node versions between environments can cause strange and hard to find bugs.
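One lightweight way to keep the versions in sync (an optional convention, not something this setup requires) is to commit an .nvmrc file to your project; nvm use with no arguments will read it:

echo "18" > .nvmrc
nvm use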

Daemonize

Next let’s use systemd to daemonize our node process. Daemonization means that systemd will start your node process for you, watch it, and restart it if it crashes. It’s great!

Create a systemd service file with

sudo nano /lib/systemd/system/example.com.service

and paste in the following replacing example with your project name:

[Unit]
Description=Example
Documentation=https://github.com/yourusername/example.com
After=network.target

[Service]
Environment=NODE_ENV=production
Type=simple
User=root
WorkingDirectory=/home/example.com
ExecStart=/home/example.com/start.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

Now let’s reload, enable and start the daemon, replacing example.com with your project domain:

sudo systemctl daemon-reload && sudo systemctl enable --now example.com

In the future you can check the status of the server with sudo systemctl status example.com and restart the server with sudo systemctl restart example.com.
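You can also tail the service’s logs, which is handy when the app won’t start:

sudo journalctl -u example.com -f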

5) Install and start the project

Deploy project

First make a place for the project on your server. We use /home/example.com:

sudo mkdir /home/example.com

Change ownership of the project directory to the deploy user:

sudo chown -R deploy:deploy /home/example.com/

Make sure that the directory is writable so that you can push files to it in the next step:

sudo chmod a+rwX -R /home/example.com

And place a bash script called start.sh inside the project directory. We use this script in order to get nvm to work with systemd:

nano /home/example.com/start.sh

And paste in the following:

#!/bin/bash
. /home/deploy/.nvm/nvm.sh
npm start

Make sure that the new start script is owned by the deploy user and executable:

sudo chown deploy:deploy /home/example.com/start.sh && sudo chmod +x /home/example.com/start.sh

Now let’s exit the server again

exit

Add deploy script to your project

We’ll now use rsync to copy our project to our server. rsync is YAMAZING. We’re huge fans. It copies files from one location to another, using a diff, so only files that have changed are copied. It’s idempotent. Nothing fancy, just good software from the 90s.

Our boilerplate deploy script uses rsync to copy files (excluding app, node_modules, .gitignore, .DS_Store, .git, .github, prisma/data.db, and public/media). The deploy command then shells into the server, changes into the app directory that rsync copied the files to, runs an npm install, and restarts our systemd service. If your app needs a database migration step (we’re using prisma in this example), you can add it to that same command.

Back on your local machine, add the following to your package.json file, replacing example.com with your project name:

  "scripts": {
    "start": "remix-serve ./build/index.js",
    ….
    "deploy": "npm run build && rsync -rlDv --exclude='app' --exclude='node_modules/***'  --exclude='.gitignore' --exclude='.DS_Store' --exclude='.git/***' --exclude='.github/***' --exclude='prisma/data.db' --exclude='public/media' ./ deploy@example.com:/home/example.com && ssh deploy@example.com 'source .nvm/nvm.sh && cd /home/example.com && npm install && sudo -S systemctl restart example.com'"
  },

You can add any other --exclude flags as you’d like. These are just the ones that fit with a boilerplate remix stack. You’ll also notice that the start script is configured to start a built remix.run app with remix-serve. You can change this to whatever command your app uses to start, like node index.js for example.

Deploy the project

Now you can deploy the project!:

npm run deploy

The first time you run the deploy command it will take a while for npm to install all the dependencies, especially if you’re on a server with 1 GB of memory.

And that’s it! Your node app is running in a production environment. Nice work! 💃💃
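If you want to confirm everything is wired up end to end, you can request your domain from your local machine:

curl -I https://example.com

You should see your app’s response headers come back over https, by way of nginx.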

6) Bonus: schedule tasks with crontab

Does your app want to do something daily? Weekly? Monthly? Like sending a digest email? Well, there’s some old software that’s good at that too. Add a digest script to your package.json:

  "scripts": {
    ….
    "digest": "ts-node app/utils/digest.server.ts"
    …
  },

On your remote server open crontab in your favorite editor with:

crontab -e

Then paste the following configuration onto the last line of the file and save it:

0 4 * * * /usr/bin/npm run digest --prefix /home/example.com > /tmp/cronjob.log 2>&1

This cron task runs the email digest script every day at 4am UTC (8pm PST). It will dump errors into /tmp/cronjob.log so you can debug them. Change the 0 and 4 to * and * to run your script every minute for debugging porpoises.
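The five fields before the command are minute, hour, day of month, month, and day of week. You can list what’s installed and watch the log with:

crontab -l
tail -f /tmp/cronjob.log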

7) Ansible

Now that we’ve done all of that, let’s never do it again! We’ve got a set of deployment scripts at bocoup/deploy that do everything in this blog post for you. In addition to being more convenient and faster, this kind of automation is also less prone to human error. These ansible playbooks make it so you can lock down, provision, and deploy to a server without ever manually shelling into it.

You can download, clone, or submodule that repo into your project:

git submodule add https://github.com/bocoup/deploy.git

And copy the example inventory file out of that new repo:

cp deploy/inventory.example.yml inventory.yml

Open up inventory.yml and fill out all of the variables with your own project’s details.
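The exact variable names come from deploy/inventory.example.yml, but to give you a feel for it, an Ansible inventory for a single server generally looks something like the sketch below (the domain, IP, and key path here are illustrative placeholders, not values from the repo):

all:
  hosts:
    example.com:
      ansible_host: 203.0.113.10
      ansible_user: deploy
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519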

For this next step, you’ll need your ssh key, which is covered in the tutorial above.

Then you can lock down the server with:

  ansible-playbook -i inventory.yml deploy/lockdown.yml

The above command works if you have an ssh key on the server. If you are using a root user with password, you need to disable host key checking as part of your command:

export ANSIBLE_HOST_KEY_CHECKING=false && ansible-playbook -i inventory.yml deploy/lockdown.yml

Either command will take it from there: disabling the root user, making a new deploy user, adding your key, and so forth.

Next configure the server:

ansible-playbook -i inventory.yml deploy/provision.yml

and build and deploy your project:

npm run build && ansible-playbook -i inventory.yml deploy/deploy.yml

And finally, add a deploy command to your package.json that aliases ansible for convenience:

 "scripts": {
    "deploy": "npm run build && ansible-playbook -i inventory.yml deploy/deploy.yml"
  },