Why move the blog? And to where?
After putting up with the clunky WordPress blog (and Bluehost's 2003-looking admin panel, for that matter) for three years, I finally decided to ditch everything I currently have and restart my blog in a more civilized manner. There were a couple of things I was not happy about with my old WordPress setup, namely:
- Clunky and eats up my server storage.
- No easy way to back up with tools I know.
- Does not come with a command line interface, which is becoming my preferred way of doing almost anything.
- Lacking some basic features I wanted, e.g. multilingual support. As powerful as WordPress may be in the right hands, I do not want to invest too much effort in learning CSS/JS/PHP, nor do I want to use some plugin from a sketchy WordPress plugin marketplace.
- These theme and plugin marketplaces creep me out in the same way the Ubuntu Software Center does.
- WordPress has a lot of features I do not actually need, e.g. the user permission system, which is overkill for my personal blog site.
Picking an alternative blogging system was not too hard once I was aware of my needs: a fast and minimalist static site generator implemented in a language I know (or find valuable to learn) with out-of-the-box multilingual support, a.k.a. `hugo`.
As for hosting services, I considered GitHub Pages and Netlify to be fast and easy solutions, but I wanted something more substantial for a personal blog, like a VPS. Besides, GitHub Pages not supporting https for custom domains is a deal breaker for me. I filtered down the list of VPS hosting providers by Arch Linux support and ended up with DigitalOcean. Since I wanted to completely sever my connection with Bluehost, I also moved my domain name host to Google Domains.
Install Arch Linux
Do note that Arch Linux is probably not the best-suited Linux distro for a server. Use a non-rolling distro if stability is a concern. I use it only because I also run it on all my other computers. Back up the droplet often if you decide to go down this route: it hasn't happened to me yet, but I've heard people complain about Arch breaking too often.
Apparently my information on DigitalOcean supporting Arch Linux is outdated, as they stopped supporting it a while back. Thankfully, it is still not too hard to bring Arch Linux to a droplet (this is how DigitalOcean refers to a server) thanks to the awesome project digitalocean-debian-to-arch. All I needed to do was set up a droplet, ssh into the server, and follow the instructions:
# wget https://raw.githubusercontent.com/gh2o/digitalocean-debian-to-arch/debian9/install.sh -O install.sh
# bash install.sh
Low Level Setup
Once the script finishes running, I have an Arch Linux system with internet access running on my droplet. Most of the additional setup needed can be found in the Arch Wiki. Since I am by no means a great tutorial writer, I suggest referring to the Arch Wiki for detailed steps. The commands recorded here are just for book-keeping purposes and are by no means the best way to do things.
Sync system clock and set time zone.
# timedatectl set-ntp true
# timedatectl set-timezone <Region>/<City>
Install/update base packages.
# pacman -S base base-devel
Generate the fstab:

# genfstab -U / >> /etc/fstab
Uncomment `en_US.UTF-8 UTF-8` in `/etc/locale.gen`, set `LANG=en_US.UTF-8` in `/etc/locale.conf`, then generate the locale with:

# locale-gen
Edit `/etc/hosts` and add the hostname of the droplet:
127.0.1.1 <hostname>.localdomain <hostname>
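If the hostname itself still needs to be set, `hostnamectl` (part of systemd, which Arch uses) handles it:

```shell
# Set the droplet's hostname; <hostname> is the same placeholder as above
# hostnamectl set-hostname <hostname>
```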
Boot Loader and Initramfs
Optimizations for Intel processors:
# pacman -S intel-ucode
# grub-mkconfig -o /boot/grub/grub.cfg
Add the `crc32` modules to the initramfs, as otherwise the droplet fails to boot. Edit `/etc/mkinitcpio.conf`:
MODULES="crc32 libcrc32c crc32c_generic crc32c-intel crc32-pclmul"
Regenerate the initramfs image.
# mkinitcpio -p linux
You know the drill.
Here are some additional settings to make Arch Linux more usable.
Obviously it is not a good idea to use the root account for everything, so create a regular user:
# useradd -m -G wheel -s /bin/bash <username>
# passwd <username>
Add User to Sudoers
Edit `/etc/sudoers` (preferably with `visudo`) and add:
<username> ALL=(ALL) ALL
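A syntax error in `/etc/sudoers` can lock you out of `sudo` entirely, so it is worth validating the file after editing:

```shell
# visudo -c    # reports that the file parses OK when it is valid
```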
Login As User
We will finish the rest of the configuration using the user account.
# su <username>
I used to use `packer` as a wrapper around `pacman` and the AUR. However, after learning about the inherent insecurity in its package building process, I switched to a more secure AUR helper, `trizen` (`pacaur` is another choice, and fun fact: there is a Reddit bot that tells you to switch to `pacaur` every time `yaourt` is mentioned in a post):
`trizen` prompts the user to inspect `*.install` and other scripts before sourcing them, and it is written in Perl instead of Bash. To install `trizen`, first install its dependencies via `pacman` according to its AUR page, then clone its git repo to a local directory. Navigate to the directory containing `PKGBUILD` and run:
$ makepkg

to make the package, then install it with:

$ sudo pacman -U trizen-*.pkg.tar.xz
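Once installed, `trizen` accepts the same operations as `pacman`, so day-to-day usage looks familiar (the package name here is just an example):

```shell
$ trizen -S hugo      # install a package, from the repos or the AUR
$ trizen -Syu         # full system upgrade, AUR packages included
```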
Once a package manager is in place, install packages to your heart's content! Some of my bread-and-butter packages include `emacs` (I installed the cli-only version), `tmux` (a terminal multiplexer, very useful), `vim` (for quick edits), etc.
Security Related Stuff
Now that a usable Arch Linux installation is in place, I would like to employ some security measures before hosting my website on it.
Secure Login via ssh Keys
On local machine, generate your ssh keypair:
$ ssh-keygen -t rsa
Send your ssh keys to server:
$ ssh-copy-id <username>@<server>
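If `ssh-copy-id` is unavailable, the same result can be achieved manually; this sketch assumes the default key location `~/.ssh/id_rsa.pub`:

```shell
$ cat ~/.ssh/id_rsa.pub | ssh <username>@<server> \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
```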
Now, on the server, make the following edits to `/etc/ssh/sshd_config`:
PermitRootLogin no
ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no
AllowUsers <username>
These changes will disable root login, disable password login, and only allow the specified user to log in via ssh.
It is advisable to also change the default port (22) used for ssh connections. In the same file, specify the port (please remember this port selection):

Port <non-std-port>
For these changes to take effect, restart `sshd`:
$ sudo systemctl restart sshd.service
Keep the current ssh session intact and attempt to start another ssh connection from the local machine to see if the changes have taken effect (the original session is needed in case things are not working):
$ ssh -p <non-std-port> <username>@<server>
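As an extra safety net, `sshd` can check the edited configuration for syntax errors before (or after) a restart:

```shell
$ sudo sshd -t    # prints nothing when /etc/ssh/sshd_config is valid
```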
Firewall

I use `ufw` as my firewall and it is very easy to set up. Install it via `trizen` and enable the desired ports:
$ trizen -S ufw
$ sudo ufw allow <port>/<protocol>
For instance, to allow ssh communication, allow `ssh` (if you used a non-standard port, allow `<non-std-port>/tcp` instead). Some other useful ports are:
| Port      | Usage                                  |
|-----------|----------------------------------------|
| `993/tcp` | imap over ssl                          |
| `25/tcp`  | receive incoming mail                  |
| `587/tcp` | smtp access (with or without starttls) |
To review the added ports and enable them:
$ sudo ufw show added
$ sudo ufw enable
Enable auto start on boot:
$ sudo systemctl enable ufw.service
Sync Server Time
Sync server time with `ntp`:
$ trizen -S ntp
$ sudo systemctl enable ntpd.service
Check time server status with:
$ ntpq -p
Setting up PTR Record
It turns out that DigitalOcean handles this automatically; all I needed to do was set the droplet name to a Fully Qualified Domain Name (FQDN), in this case `www.shimmy1996.com`. I then checked if the record is in place with:
$ dig -x <ip_address>
Firing up the Server
The next step is actually preparing the server for serving content.
Create Web Directory
Create a directory for serving web contents, a common choice would be:
$ mkdir ~/public_html
Make sure to give this directory (including the user's home folder) appropriate permissions with `chmod` (`755` would normally work). Populate the directory with a simple `index.html` for testing if you want.
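For instance, such a placeholder page can be generated in one go (the markup and wording here are arbitrary):

```shell
# Create the web root and a throwaway test page
mkdir -p ~/public_html
cat > ~/public_html/index.html <<'EOF'
<!DOCTYPE html>
<html>
  <head><title>It works!</title></head>
  <body><p>If you can read this, nginx is serving the directory.</p></body>
</html>
EOF
chmod 755 ~/public_html
```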
Install `nginx` via `trizen`, and edit `/etc/nginx/nginx.conf` to set up the `http` server (the one set to `listen 80 default_server`):
server_name www.<domainname> <domainname>;
root /path/to/public_html;
In the `server_name` line, add as many names as you want. You may want to put your mail server address there as well so that you can generate a single SSL certificate for everything. After these changes are made, (re)start and enable `nginx`:
$ sudo systemctl restart nginx.service
$ sudo systemctl enable nginx.service
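Putting the pieces together, the relevant part of the `http` server block might look like the following sketch (the domain and path are placeholders, and the surrounding `nginx.conf` boilerplate is omitted):

```nginx
server {
    listen       80 default_server;
    server_name  www.example.com example.com;
    root         /home/<username>/public_html;
    index        index.html;
}
```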
The next step is to set up DNS records for our server. There are three types of records that need to be set up initially: `NS`, `A`, and `CNAME`. I also included some other useful records:
| Type    | Name                  | Value                         | Purpose                                                   |
|---------|-----------------------|-------------------------------|-----------------------------------------------------------|
| `NS`    | @                     | nameserver address            | specifies name server to use                              |
| `A`     | @                     | supplied IPv4 address         | redirects host name to IPv4 address                       |
| `CNAME` | www (can be anything) | @                             | sets alias for host name                                  |
| `MX`    | @                     | mail server address           | specifies mail server to use                              |
| `CAA`   | @                     | authorizer of SSL certificate | prevents other authorities from certifying SSL certificate |
In my case, though I use Google Domains to host my domain, I still use DigitalOcean's name servers. So I needed to set up these records on DigitalOcean and the `NS` records on Google Domains.
After this step, your website should be accessible via your domain name, although it may take a few hours for the DNS records to propagate.
Let's Encrypt is a great project and
certbot is an awesome tool for SSL certificate generation. Kudos to the nice folks at EFF and Linux Foundation. I simply followed the instructions on EFF site:
$ sudo pacman -S certbot-nginx
$ sudo certbot --nginx
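Let's Encrypt certificates expire after 90 days, so renewal needs to be automated (a cron job or systemd timer running `certbot renew` is typical); a dry run verifies that renewal will work without touching the real certificate:

```shell
$ sudo certbot renew --dry-run
```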
To provide some extra credibility to the certificate, I added a `CAA` record in my DNS settings with issue authority granted to `letsencrypt.org`. For now, Let's Encrypt does not support wildcard certificates, but will starting January 2018, and this is why I added a bunch of subdomains into my `nginx.conf` (so that the certificate covers these subdomains as well).
After a couple of hours (mostly spent waiting for DNS records to propagate), my website is online again. With a VPS at my disposal, I also host my personal email now, and I might organize my random notes pieced together from various websites into a post as well. I am still trying to figure out an efficient workflow for writing multilingual posts with `hugo`, and once I am convinced I have found an acceptable solution, I will post about it as well.