Migrating a cheap VPS from Vultr to DigitalOcean

Why do this anyway?

So to start this off - I've been a happy Vultr user for about a year. Honestly, they are a stand-up VPS provider with cheap prices and some great features you won't find on other services, notably:

  1. The ability to spin up an IPv6-only VM for SUPER cheap ($2.50 USD/month!)
  2. Being able to "reinstall" a VPS instance, preserving the external IP but basically hard-resetting the instance
  3. In a similar vein to 2, being able to completely change OSes on a whim, while preserving the external IP. If you can't tell, I don't like updating my DNS records unless I have to.

While all of these features were great, I decided to migrate to DigitalOcean in mid-October, mostly because I already use and enjoy DigitalOcean (hereafter: DO) a lot for ${side_gig}. I had also coincidentally upsized my VM at one point to try running a Minecraft server, and quickly found out that Minecraft really likes memory nowadays, to the tune of 4GB being basically the minimum. I feel old.

So moving to DO would save me a few bucks a month and streamline my workflow, as well as let me move away from CentOS back to Fedora Server (I like up-to-date packages more than I realized, even if I have to deal with a major upgrade every six months).


Note: This had nothing to do with Red Hat "killing" CentOS; I fully support that decision. The fact that Alma/Rocky/170 other clones appeared so quickly shows it was definitely the right decision. Why waste the resources?

Another note: I work for Red Hat and would rather have those resources go to Linux engineering than to setting up build servers and validating bugs against both CentOS and RHEL. QE hours can be better used elsewhere.


This migration also gave me a good excuse to change the way I manage services: switching from system-level systemd units to user-level systemd units, so podman can run in fully rootless mode. It was a lot of fun migrating and then automating the entire process, so that if I ever do want to migrate to another RPM-based OS it will be as simple as running ansible-playbook set-up-my-cloudserver.yml.
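For context, the container unit files aren't hand-written - podman generates them. Roughly, this is what that looks like per container (the "ghost" name here is just an illustration, matching the container-ghost.service unit that shows up in the playbook below):

# run once per container, from inside that app's directory (e.g. ~/blog)
podman generate systemd --new --files --name ghost
# => writes ./container-ghost.service, which the playbook then links into ~/.config/systemd/user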

The Automation

---
- name: Cloud Server Setup
  hosts: localhost
  gather_facts: yes

  vars:
    home_dir: "{{ hostvars['localhost'].ansible_user_dir }}"
    ansible_python_interpreter: /usr/bin/python3

  tasks:
    - name: Debug print home dir
      debug:
        msg: "Installing using home: {{ home_dir }}"
      tags:
        - debug

    - name: Make systemd user directory
      file:
        path: "{{ home_dir }}/.config/systemd/user"
        state: directory
      tags:
        - containers

    - name: Link out container services
      file:
        src: "{{ home_dir }}/{{ item.dir }}/{{ item.file }}"
        dest: "{{ home_dir }}/.config/systemd/user/{{ item.file }}"
        state: link
        force: yes
      register: link_file
      with_items:
        - {dir: "blog", file: "container-ghost.service"}
        - {dir: "caddy", file: "container-caddy.service"}
        - {dir: "feed_follower", file: "container-rss.to.telegram.service"}
        - {dir: "homepage", file: "container-homepage.service"}
        - {dir: "radarr_bot", file: "container-radarr_bot.service"}
        - {dir: "thelounge", file: "container-thelounge.service"}
        - {dir: "transfersh", file: "container-transfersh.service"}
        - {dir: "httpin", file: "container-httpin.service"}
        - {dir: "playground", file: "go-playground.service"}
      tags:
        - containers

    - name: Reload systemd user context
      shell: systemctl --user daemon-reload
      when: link_file.changed
      tags:
        - containers
        - systemd

    - name: Start + Enable user services
      systemd:
        scope: user
        name: "{{ item }}"
        state: started
        enabled: yes
      with_items:
        - container-ghost.service
        - container-caddy.service
        - container-rss.to.telegram.service
        - container-homepage.service
        - container-radarr_bot.service
        - container-thelounge.service
        - container-transfersh.service
        - container-httpin.service
        - go-playground.service
      when: link_file.changed
      tags:
        - containers
        - systemd

    - name: Enable linger for root user
      command: loginctl enable-linger root
      tags:
        - systemd

    - name: Open ports
      firewalld:
        port: "{{ item }}"
        permanent: yes
        immediate: yes
        state: enabled
      with_items:
        - 80/tcp
        - 80/udp
        - 443/tcp
        - 443/udp
        - 25565/tcp
        - 51820/udp
        - 60000/udp
        - 60001/udp
        - 60002/udp
        - 60003/udp
      tags:
        - firewalld
        - ports

    - name: Set timezone to America/Chicago
      shell: timedatectl set-timezone "America/Chicago"
      tags:
        - date
Ansible Playbook Detailing my VPS Setup

I know it's a bit long - but the gist of it is that I link the systemd services I generated with podman generate systemd into the systemd user directory, then start and enable them. I also open all the ports I've required over the years - notably 80/443/60000 for http/https/mosh.
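The one non-obvious task in there is loginctl enable-linger: without lingering, user-level units only run while that user has an active login session, so everything would stop the moment I disconnected. If you want to double-check it took effect, something along these lines should confirm it:

loginctl show-user root --property=Linger   # should report Linger=yes
systemctl --user list-units 'container-*'   # services keep running even with no session open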

After running this on a new VPS, everything came up swimmingly with minimal fuss - basically all I had to do was point my DNS records at the new server, rsync the home directory from the old VPS to the new one, then run ansible-playbook ... to get everything set up. Caddy fetched certs and was routing traffic to all of my services like nothing had changed.
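Concretely, the move was roughly these two commands plus the DNS update (the old hostname is a placeholder, and I'm assuming everything lives under root's home, as the enable-linger root task suggests):

# pull the old home directory (unit files, container volumes, caddy config) onto the new VPS
rsync -avz root@old-vps.example.com:/root/ /root/
# then stand everything back up
ansible-playbook set-up-my-cloudserver.yml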

The craziest part is just how little RAM all of these services use - the biggest consumers are my thelounge IRC instance and this Ghost instance, and with everything accounted for I'm still below 400MB of active memory! The little VPS is definitely a bit of a dog when doing CPU-bound things, but that's expected when you're only dropping a Lincoln instead of a Jackson.
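For anyone curious how to check this on their own box, podman can print a one-shot usage snapshot per container:

podman stats --no-stream   # one-time CPU/memory readout for every running container
free -h                    # the overall memory picture for the whole VPS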


tl;dr: I downsized from a 2c/2GB RAM VPS instance at Vultr to a 1c/1GB RAM instance at DO, and I ansible-ized all of my container setup so moving is painless.

Jacob Lindgren

Nebraska, USA