I’m going to give an overview of my home server. Its main use cases are image storage, ebook tracking, Git hosting, OpenClaw, and general file storage.

The hardware layer is a laptop plus a USB-C HDD RAID enclosure. The laptop has an i5-8500U, 16 GiB of RAM, and 256 GiB of internal storage. The USB-C link to the enclosure effectively runs at USB 2.0 speeds, because the laptop’s USB 3.0 port has to be used for power. USB 2.0 tops out around 60 MB/s. The HDDs are 5400 RPM drives set up in RAID 1; they can sustain about 180 MiB/s on reads, with the SATA interface itself rated at 6 Gb/s. USB 2.0 does bottleneck the drives, but for my use cases so far it hasn’t been a problem. My internet plan’s max upload is 30 MiB/s, so when I’m not at home that’s the real bottleneck.
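To put those numbers in perspective, here is some back-of-envelope math on how long moving 1 TiB would take over each link, using the figures above (USB 2.0 at ~60 MiB/s, the drives at ~180 MiB/s sustained, and the ~30 MiB/s uplink). Integer shell arithmetic, so the results are rough:

```shell
# Time to move 1 TiB over each hop, rounded down to whole hours.
tib_mib=$((1024 * 1024))                # 1 TiB expressed in MiB
usb_hours=$(( tib_mib / 60 / 3600 ))    # USB 2.0 link: ~4 hours
hdd_hours=$(( tib_mib / 180 / 3600 ))   # raw drive speed: ~1 hour
wan_hours=$(( tib_mib / 30 / 3600 ))    # upload from outside: ~9 hours
echo "usb=${usb_hours}h hdd=${hdd_hours}h wan=${wan_hours}h"
```

So the USB 2.0 hop costs roughly a 4x slowdown over the drives, and the uplink is about twice as slow again — consistent with it being the real bottleneck away from home.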

The “software infrastructure” layer is Proxmox, an NFS share on the USB storage, and one main Ubuntu Server VM running docker-compose stacks. The USB HDD enclosure is shared through NFS directly on the Proxmox host; this keeps the core file system stable and shareable regardless of what happens to any individual VM.
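For reference, the Proxmox-side export might look something like this — the mount point and subnet here are hypothetical placeholders, not my actual values:

```shell
# /etc/exports on the Proxmox host (path and subnet are examples)
# /mnt/hdd  10.0.0.0/24(rw,sync,no_subtree_check)

# re-read the export table without restarting the NFS server
exportfs -ra
```

The `sync` option trades a little write speed for safety, which matters on a USB-attached array.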

The software running includes: Pi-hole, Calibre-Web, Nextcloud, Gitea, and OpenClaw. Pi-hole gets its own LXC container, because its uptime is critical — my fiancé will get upset if Pi-hole goes down. The others keep most of their data and configuration on the VM, with larger files stored on the HDD; the Ubuntu VM reaches the HDD storage through NFS. OpenClaw runs in a docker container for now, but I want to move it to its own VM at some point.
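The “small config on the VM, big files on the HDD” split can be expressed directly in a compose file by declaring an NFS-backed volume. This is only a sketch — the service, image, IP, and paths are hypothetical, not my actual stack:

```yaml
# docker-compose.yml fragment (image, addr, and paths are examples)
services:
  calibre-web:
    image: lscr.io/linuxserver/calibre-web
    volumes:
      - ./config:/config   # small config lives on the VM's disk
      - books:/books       # large files live on the NFS share

volumes:
  books:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.2,rw,nfsvers=4
      device: ":/mnt/hdd/books"
```

Declaring the mount in compose (rather than in the VM’s fstab) keeps each stack’s storage dependencies visible in one file.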

All external connections happen through Tailscale. It’s easy and it works — nothing else to say, really.

The main Ubuntu VM gets backed up regularly to the HDD and to the cloud. Some important data on the HDD gets backed up to the cloud as well, but a lot of the data isn’t important enough to me to send off-site.
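The “VM config to HDD” leg of that backup can be sketched as a dated tarball written onto the NFS mount; the cloud leg would ship the same archive off-site. The paths in the usage comment are hypothetical:

```shell
# Archive a source directory into a dated tarball in a destination
# directory. Pruning old archives is left as a separate job.
backup_dir() {
  src=$1
  dest=$2
  stamp=$(date +%Y-%m-%d)
  mkdir -p "$dest"
  tar -czf "$dest/vm-backup-$stamp.tar.gz" -C "$src" .
}

# Example (paths are placeholders for the VM and the NFS mount):
# backup_dir /srv/appdata /mnt/hdd/backups
```

A dated filename per run makes it easy to keep several restore points on the HDD while only syncing the newest one to the cloud.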