Self-Hosting Without the Ego Trip
There's a cycle people hit on the privacy journey: realize the cloud is a surveillance instrument, decide to self-host everything, spend six months in YAML, and end up running a cluster that's worse for their data than the SaaS they fled. Self-hosting can be a real privacy and ownership win — but only if you do it for the right reasons and stop short of the cliff. Here's where the cliff is.
The honest case for self-hosting
When self-hosting genuinely improves your privacy posture, it's because the alternative is a service that:
- Stores plaintext data the operator can read (most cloud notes/photos/files);
- Subjects your data to a jurisdiction's lawful-access regime you'd rather not deal with;
- Mines or sells what you upload (consumer photo services, "free" email);
- Could disappear, change terms, or close your account at any moment with no recourse.
Replacing those with a properly run server you own is a real upgrade. But "properly run" is doing all the work in that sentence. A self-hosted server with default credentials, no patching, no backups, and a wide-open admin panel is not more private than Google — it's just less observed by you, more observed by the rest of the internet.
The bad case for self-hosting
Here are the situations where the math usually goes the wrong way:
1. Replacing a service with stronger crypto than you can match
Self-hosting Signal would be absurd: the Signal server already can't read your messages, which is exactly the property you'd be chasing. Self-hosting Bitwarden (Vaultwarden) is fine because Bitwarden's threat model is already "we can't read your vault"; you're just moving the encrypted blob. Self-hosting a Mastodon instance to "escape Twitter surveillance" is questionable because federation means your posts still leave your server. The question to ask is always: what changes in the threat model when the data crosses the boundary?
2. Email
You can self-host email. Almost no one should. Modern deliverability requires SPF, DKIM, and DMARC records, good IP reputation, avoiding residential ASNs, careful DNS, and sometimes a feeder relay through Postmark or AWS SES, at which point you've reintroduced the third party you were trying to avoid. (BIMI is optional branding on top, not a deliverability requirement.) And you cannot control whether the recipient's email provider scans inbound mail. Proton Mail or Tuta give you the privacy benefit (E2EE between users on the same provider, encryption at rest) without the deliverability nightmare.
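For a sense of the DNS plumbing alone, the records look something like this. All values here are illustrative: the domain, selector, SPF include (shown as Postmark's), and the truncated DKIM key are placeholders you'd replace with your own.

```
; SPF: only this host's MX and the relay may send for example.org
example.org.                TXT  "v=spf1 mx include:spf.mtasv.net -all"
; DKIM: public key published under a selector (key truncated here)
s1._domainkey.example.org.  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBg..."
; DMARC: what receivers should do with failures, and where to send reports
_dmarc.example.org.         TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.org"
```

And this is only the publishing side; reputation and feedback-loop handling are ongoing work on top of it.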
3. Anything with a 99.9% uptime expectation
Your home connection has a dynamic IP, a router that reboots, an ISP that occasionally drops you, and power outages. If your spouse, employer, or family depends on the service being there, the calculus is different. Calendars and shared photo albums need to just work; that is a value, not a weakness.
The lifecycle nobody mentions
Before you spin up anything, write down answers to:
- Patching cadence: who/what runs `apt upgrade` and the app's own updater? How often?
- Backup: 3-2-1 minimum. Three copies, two media, one off-site. Restore-tested at least quarterly.
- Monitoring: if it's down at 3am, who finds out? A heartbeat to healthchecks.io or your phone is the floor.
- Bus factor: if you get hit by a bus, can your spouse retrieve the family photos? Where is the documentation?
- Decommission plan: when (not if) you stop running this, how do users get their data out? In what format?
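The monitoring floor from that list can be a single cron line: ping healthchecks.io on a schedule, so a dead host and a dead cron daemon both surface as a missed ping. The check URL here is a placeholder; you'd use the UUID from your own healthchecks.io project.

```shell
# /etc/cron.d/heartbeat — sketch; replace the UUID with your own check URL
*/5 * * * * root curl -fsS --retry 3 https://hc-ping.com/0a1b2c3d-placeholder > /dev/null
```

healthchecks.io alerts you when the pings stop, which inverts the problem: you don't have to notice the outage, you have to notice the silence, and the service does that for you.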
The quietly-good stack
If you've answered the questions above honestly and still want to host, here's a stack that has aged well across many real-world deployments. None of this is novel. All of it works.
The base layer
- Hardware: a small NUC, a used Optiplex, or a Hetzner dedicated box at $30–50/month. Don't buy a Raspberry Pi for storage workloads — SD cards die.
- OS: Debian stable or Ubuntu LTS. Boring is the point.
- Storage: ZFS or btrfs with snapshots. Dual-disk mirror minimum.
- Containers: Docker Compose, not Kubernetes. You are not Google.
- Reverse proxy: Caddy for automatic Let's Encrypt, or Traefik if you prefer labels-as-config.
- Remote access: Tailscale or WireGuard. Don't expose SSH on port 22 to the public internet.
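A minimal version of the remote-access rule above, assuming Tailscale and ufw (the interface name `tailscale0` is Tailscale's default on Linux; adjust if yours differs):

```shell
# One-time setup sketch: join the tailnet, then close SSH to everything else
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up

ufw allow in on tailscale0 to any port 22 proto tcp  # SSH reachable only via the VPN
ufw deny 22/tcp                                      # drop public SSH
ufw enable
```

Test the VPN path from a second terminal before enabling the firewall, or you can lock yourself out of a remote box.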
The application layer
| Replaces | Self-hosted | Honest verdict |
|---|---|---|
| Google Drive / iCloud Drive | Nextcloud, Seafile, Syncthing | Syncthing for power users, Nextcloud if you need the polish |
| Google Photos | Immich | Genuinely good. ML face/object search runs locally |
| 1Password / LastPass | Vaultwarden + Bitwarden client | Excellent. The clients are the same |
| Notion / Evernote | Obsidian + Syncthing, or Trilium | Obsidian wins — local-first by design |
| Google Calendar | Radicale, Baikal | Works fine, syncs to iOS/Android via DAVx5 |
| Slack / Discord | Matrix (Synapse, Conduit, Dendrite) | Operationally heavy. Try Conduit before Synapse |
| Pocket / Instapaper | Wallabag, Readeck | Both solid |
| YouTube subscription tracking | FreshRSS + RSS-Bridge, or Tubefeeder | Surprisingly good |
| Maps | OsmAnd (mobile) + self-hosted tile server | Tile hosting is a rabbit hole; mobile clients are great |
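To give a sense of how small some of these services are: a workable Radicale config is a handful of lines. Paths and the htpasswd auth method here follow Radicale's documented defaults, but treat the exact values as a sketch for your own install.

```
# /etc/radicale/config — sketch
[server]
hosts = 127.0.0.1:5232        # reachable only through the reverse proxy / VPN

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = autodetect

[storage]
filesystem_folder = /var/lib/radicale/collections
```

Point DAVx5 at the proxied URL and calendars and contacts sync to Android; iOS speaks CalDAV/CardDAV natively in account settings.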
The scaffold
A modest single-host docker-compose layout that holds up for years:
```yaml
# docker-compose.yml — sketch
services:
  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
    restart: unless-stopped

  vaultwarden:
    image: vaultwarden/server
    environment:
      DOMAIN: https://vault.example.org
      SIGNUPS_ALLOWED: "false"
    volumes: [./vaultwarden:/data]
    restart: unless-stopped

  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    volumes: [./immich/upload:/usr/src/app/upload]
    depends_on: [postgres, redis]   # postgres and redis services omitted from this sketch
    restart: unless-stopped

volumes:
  caddy_data:
```
The Caddyfile takes care of HTTPS automatically:
```
vault.example.org {
    reverse_proxy vaultwarden:80
}

photos.example.org {
    reverse_proxy immich-server:3001
}
```
That's the architecture. Backups run nightly via restic to a remote S3 bucket (Backblaze B2 is the sane choice), encrypted with a passphrase that is not stored on the box. If the server burns, restic plus the passphrase equals full recovery.
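A nightly restic job under those assumptions might look like this. The bucket name, key IDs, endpoint region, and paths are placeholders, and `pass` as the secret store is an assumption; the point is that the passphrase is fetched at runtime, not written to disk on this host.

```shell
#!/bin/sh -e
# Nightly backup sketch: restic to Backblaze B2 via its S3-compatible endpoint
export AWS_ACCESS_KEY_ID="b2-key-id-placeholder"
export AWS_SECRET_ACCESS_KEY="b2-app-key-placeholder"
export RESTIC_REPOSITORY="s3:s3.us-west-002.backblazeb2.com/my-backups-placeholder"
export RESTIC_PASSWORD_COMMAND="pass show restic"   # assumed secret store

restic backup /srv/docker --exclude '**/cache'
# Thin out old snapshots on a sane retention ladder
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
restic check --read-data-subset=5%                  # periodic integrity spot-check
```

The `restic check` line is the restore-test discipline in miniature: it reads back a random slice of the repository data, so silent corruption surfaces before you need the backup.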
The security minimums
Self-hosted services have been the ransomware industry's favorite pasture for a decade. The common entry points:
- Exposed admin panels with default credentials — Plex, Jellyfin, Home Assistant, NAS units. Don't.
- Outdated PHP applications — Nextcloud is fine when patched, a disaster when not.
- SSH on the open internet — even with key auth, the noise floor is unpleasant. Put it behind WireGuard.
- Container escapes from misconfigured Docker — don't run Docker as root for untrusted code.
- Backup credentials stored on the same host — ransomware encrypts the backups too. Use append-only repos or pull-style backups.
Floor: automatic security updates (unattended-upgrades on Debian), SSH keys only, fail2ban or crowdsec, every public service behind Caddy with HTTPS, every admin interface only reachable through your VPN, and a separate machine doing pull-based backups so a host compromise can't reach them.
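On Debian, most of that floor reduces to a few commands and two sshd_config lines. Package names are as in current Debian stable; the drop-in config path assumes a stock sshd_config that includes `/etc/ssh/sshd_config.d/`.

```shell
# Security floor sketch for a fresh Debian host
apt-get install -y unattended-upgrades fail2ban
dpkg-reconfigure -plow unattended-upgrades          # enables automatic security updates

# Keys only, no root login over SSH
printf 'PasswordAuthentication no\nPermitRootLogin no\n' \
  > /etc/ssh/sshd_config.d/90-hardening.conf
systemctl reload ssh
```

Verify you can still log in with your key from a second session before closing the one you ran this in.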
The point of all this
Self-hosting is a craft, not a virtue. The goal is not to run the most software, or to avoid every cloud, or to win an argument with someone on a forum. The goal is to host the things that genuinely benefit from being on hardware you control, and to do it well enough that the result is more reliable, more private, and more durable than what you replaced. If you're running services your family depends on, you're an operator now — and operators measure themselves on uptime, restore-tests, and incident response, not on how purist their stack is.
The best self-hosted setup is a small one that has run boringly for years. The worst is a sprawling estate that requires constant attention from someone who is starting to resent it. Pick the first.