The repository contains a concise summary with the bare minimum required to get started. The aim of this article is to provide a more in-depth tutorial.
In part one, I showed how to create a remote development server using DigitalOcean and rsync. In part two, I will show how to automate the entire process with a Bash script.
To get started, you will need to install and configure the dependencies:
doctl (creates and destroys the droplet)
rsync (do.sh sync, do.sh watch)
scp (do.sh copy, do.sh scp)
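For example, on macOS the CLI dependencies can be installed with Homebrew (an assumption; substitute your platform's package manager). ssh and scp ship with most systems already:

```bash
# Install the DigitalOcean CLI and rsync (Homebrew assumed)
brew install doctl rsync
# Authenticate doctl with your DigitalOcean API token
doctl auth init
```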
Once that is done, clone the repository. You can either set it up as a git submodule or as a standalone repo. To make do.sh accessible from anywhere, copy it or symlink it into your PATH, e.g. /usr/local/bin.
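A standalone setup might look like this (the repository URL below is a placeholder):

```bash
# Clone the repo and put do.sh on the PATH
git clone https://github.com/<user>/dev-server.git
ln -s "$PWD/dev-server/do.sh" /usr/local/bin/do.sh   # may require sudo
```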
The droplet can be configured using either Bash or YAML configs; examples of both are available in the repository. You need to call the script from one level up from do.sh, or export the CLOUD_CONFIG environment variable with a different path.
This script allows a certain degree of flexibility via environment variables. For instance, the config used can be specified via CLOUD_CONFIG. If you require greater customizability, you can either submit a PR or fork the repository.
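For example, to point the script at a config stored outside the default location (the path below is illustrative):

```bash
# Use a cloud config kept elsewhere
export CLOUD_CONFIG="$HOME/configs/cloud-config.yml"
do.sh config   # create the config from CLOUD_CONFIG
do.sh up
```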
(Demo recordings: do.sh start, do.sh up prep sync, and cmd rewriting the path in output.)
Using do.sh is very simple. To get started, type do.sh help, which will show you a list of available commands. Some commands support chaining, e.g. do.sh up prep sync, which runs the commands in sequential order. Generally, you can chain commands that take a fixed number of arguments, such as up or down. Commands like ssh, cmd, and copy take any number of arguments, so they do not support chaining. A good workaround is to place such a command at the very end, e.g. do.sh up copy file1 file2 file3.
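To illustrate why that distinction exists, here is a minimal sketch of how such dispatching could work; the actual do.sh internals may differ, and the do_* handler functions are hypothetical:

```bash
# Fixed-arity commands are consumed one at a time, so several can be
# chained; variadic commands swallow every remaining argument, so they
# must come last and terminate the chain.
while (( $# > 0 )); do
  case "$1" in
    up|down|reset|sync|deps|dist|host|config)
      "do_$1"; shift ;;            # chainable: fixed arity
    ssh|cmd|copy|cp|scp)
      subcmd="$1"; shift
      "do_$subcmd" "$@"            # takes the rest of the arguments
      break ;;
    *)
      echo "unknown command: $1" >&2; exit 1 ;;
  esac
done
```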
Below is a list of available commands:

up           create dev server *
down         destroy dev server *
reset        re-create dev server *
sync         rsync from local to remote *
watch        watch local for changes and sync
deps         install Node deps on remote *
prep[are]    shortcut for sync -> deps -> watch
ssh          start interactive ssh session
ssh <cmd>    execute command on droplet
cmd <cmd>    ssh <cmd> and replace cwd with local
scp <path>   copy from remote to local (cwd)
copy <path>  copy from local to remote (~/.repo/)
cp <path>    alias to copy command
dist         shortcut to copying dist/ from remote *
host         show public ip of remote *
config       create config from env var CLOUD_CONFIG *
help         show available commands

* these commands support chaining, e.g. do.sh up prep sync
Here is an example of my workflow. I start with up, followed by prep; since the script supports chaining, that is simply do.sh up prep. If I need to run a command after copying files, I execute do.sh sync cmd <cmd>. Path rewriting (cmd) is useful when I want to copy and paste a path from an error stack trace straight away. For instance, I use iTerm, which supports semantic history; with path rewriting, I can open files directly from the console on my local system.
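To make the idea concrete, here is a minimal sketch of the path rewrite behind a command like cmd, assuming SSH_CWD and LOCAL_CWD hold the remote and local working directories (see the overrides list below); the function name is hypothetical and the actual implementation may differ:

```bash
# Run a command on the droplet and rewrite remote paths in its output
# to local ones, so tools like iTerm's semantic history resolve them.
run_cmd() {
  ssh "$SSH_USER@$SSH_HOST" "cd $SSH_CWD && $*" 2>&1 \
    | sed "s|$SSH_CWD|$LOCAL_CWD|g"
}
```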
The script's settings can be overridden via environment variables. Here is a list of supported overrides:
NAME           name of the droplet, defaults to dev-server
IMAGE          OS image to be used, defaults to ubuntu-20-04-x64
SPECS          droplet specs, defaults to s-2vcpu-2gb; list available specs by running doctl compute size list
REGION         droplet datacenter, defaults to lon1
CLOUD_CONFIG   location of the cloud config, defaults to ./dev-server/cloud-config.yml
SSH_KEY        local path to the private ssh key, defaults to ~/.ssh/developer
SSH_USER       ssh user, defaults to developer
SSH_HOST       ssh host, defaults to none; the value is configured at runtime when the up command is run and saved to SSH_OUTPUT
SSH_SOCKET     local path for the ssh socket, defaults to none; once SSH_HOST is available, the value becomes ${HOME}/.ssh/sockets/$SSH_USER@$SSH_HOST
SSH_CWD        value of pwd on the remote host, configured at runtime
LOCAL_CWD      value of pwd on the local host
SSH_HOST_FILE  local path where the SSH_HOST value is saved, defaults to /tmp/dev_ssh_host
SSH_CWD_FILE   local path where the pwd of the remote host is saved, defaults to /tmp/dev_ssh_cwd

In this article, I have shown a Bash script which automates the creation of a remote development server. Part one went into the technical details of setting up the droplet, while this part (part two) automates that process.