My Accidental Homelab

November 29, 2025 · 1732 words · 9 min read

How a cheap M1 Mac Mini became my entire home infra.

I have a home server. It's a cheap M1 Mac Mini with the base configuration and an external SSD stuck to the back because I refuse to pay Apple's storage tax. It sits velcroed upside down under my desk, doing work I initially thought required a proper rack server, and it's been running quietly for weeks without me thinking about it once.

When I first bought it back in 2020, I really didn't expect much. It was Apple's first attempt at their own silicon in a desktop machine, and early adopters usually get the "privilege" of debugging version 1.0 for everyone else. The plan was to use it as a lightweight development machine, maybe run a few Docker containers for dev work. Instead it became my main workstation for about two years, until I upgraded to a newer MacBook Pro. Then it sat there for a few weeks, a perfectly good ARM64 machine drawing less power than my phone, doing nothing. That felt wrong!

Now it runs everything. Self-hosted alternatives to SaaS apps I got tired of paying subscriptions for, Home Assistant controlling the lights and tracking power usage, background jobs that sync things and process images, a few tools for the missus. It's become the infrastructure I didn't know I needed, and I genuinely enjoy tinkering with it.

I started with Docker because that's what you do, right? Everyone uses Docker. I already knew Docker. Except Docker on macOS has always felt kind of wrong: you're running containers inside a Linux VM inside macOS, and you can feel every layer of that stack whenever something doesn't quite work. File mounts are slow. The VM occasionally decides to eat memory for no clear reason. Updates break networking in unpredictable ways. It ends up working, but it makes you earn it. I also tried Podman for a while, thinking maybe the problem was Docker specifically. Same architecture, same issues! It's still a Linux VM pretending to be native, and the nagging feeling that this could be better persists.

Then at WWDC 2025, Apple announced their Containerization framework, and I got genuinely excited about infrastructure for the first time in years. Not because it's trying to replace Docker or be compatible with the entire container ecosystem; that's actually the point, it's not trying to do everything. It's a native Swift framework for running Linux containers on macOS, built specifically for this platform, and it shows. The whole thing is open source on GitHub, both the Containerization framework and the container CLI tool. I spent way too long reading through the source code that first weekend.

The architecture is genuinely clever: instead of running one big persistent Linux VM like Docker Desktop does, Containerization spins up a lightweight VM per container. Sub-second start times, at last! You also get dynamic resource allocation, so each VM only uses CPU and memory when its container is actually doing work. Each container gets its own dedicated IP address, which means no more port-mapping headaches for container-to-container communication. The filesystem is real ext4 exposed as a block device, so performance is just Linux performance, not some translation layer adding latency.

Inside each VM there's a minimal init system called vminitd, written entirely in Swift. It's compiled as a static binary using Swift's Static Linux SDK, linked against musl: no dynamic libraries, none of the standard Linux utilities you'd normally expect in a VM. It's a tiny, secure bootstrap that manages network interfaces, mounts filesystems, and supervises processes. The whole environment is deliberately stripped down so there's almost no attack surface. Reading through the WWDC session notes and the actual source code, I kept thinking "oh, they actually thought about this" time and time again. It feels like it was built by people who understood exactly what was annoying about existing container tooling on macOS and decided to fix the right problems. Thanks Apple, took you long enough.

The container CLI tool is beautifully simple. It supports the short flags you'd expect from Docker (-it, -d, -p, -v, -e, --name), so the muscle memory transfers over immediately.

```bash
# Pull an image
container image pull getmeili/meilisearch:latest

# Run it interactively
container run -it getmeili/meilisearch:latest /bin/sh

# Run detached with port mapping and a master key
container run -d \
  -p 7700:7700 \
  -e MEILI_MASTER_KEY=mysecretkey \
  --name search-engine \
  getmeili/meilisearch:latest
```

No more daemon management, goodbye Docker Desktop eating 3GB of RAM in the background, and no waiting for a VM to wake up before your container can start. You run the command and the process is already executing. It feels like the latency just disappeared.

I migrated services over one weekend and it was almost boring how straightforward it was. The container CLI does exactly what you'd expect, and wrapping it with launchd for service management is just a plist file. Here's what the Calibre Web one looks like.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.calibreweb</string>

    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/container</string>
        <string>run</string>
        <string>--name</string>
        <string>calibreweb</string>
        <string>-v</string>
        <string>/Volumes/Storage/books:/books</string>
        <string>-p</string>
        <string>8083:8083</string>
        <string>ghcr.io/linuxserver/calibre-web:latest</string>
    </array>

    <key>RunAtLoad</key>
    <true/>

    <key>KeepAlive</key>
    <true/>

    <key>StandardOutPath</key>
    <string>/var/log/calibreweb.log</string>

    <key>StandardErrorPath</key>
    <string>/var/log/calibreweb.error.log</string>
</dict>
</plist>
```

Load it once with launchctl load ~/Library/LaunchAgents/local.calibreweb.plist and it just runs. Forever. Survives reboots, restarts on crashes, logs everything properly. This is what service management should feel like! Nobody deserves to go through systemd, Docker Compose, or Kubernetes manifests for a small home lab.
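Day-to-day management is the matching handful of launchctl commands. A quick reference, using the local.calibreweb label from the plist above (launchctl load/unload are the classic forms; newer macOS versions also offer bootstrap/bootout, which do the same job):

```bash
# Is it loaded? (the second column is the last exit status)
launchctl list | grep local.calibreweb

# Stop and unload the service before editing its plist
launchctl unload ~/Library/LaunchAgents/local.calibreweb.plist

# Load it again after changes
launchctl load ~/Library/LaunchAgents/local.calibreweb.plist

# Follow the logs declared in StandardOutPath / StandardErrorPath
tail -f /var/log/calibreweb.log /var/log/calibreweb.error.log
```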

Writing configuration in XML is still a miserable endeavor, though! So it's not perfect, yet. I got annoyed after the third plist and wrote a Python script to generate them, so now I just have a YAML file that describes everything.

```yaml
# services.yaml
services:
  calibreweb:
    image: ghcr.io/linuxserver/calibre-web:latest
    ports:
      - "8083:8083"
    volumes:
      - "/Volumes/Storage/books:/books"
    environment:
      PUID: "1000"
      PGID: "1000"

  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    ports:
      - "8123:8123"
    volumes:
      - "/Volumes/Storage/homeassistant:/config"

  meilisearch:
    image: getmeili/meilisearch:latest
    ports:
      - "7700:7700"
    environment:
      MEILI_MASTER_KEY: "change-me-in-production"
      MEILI_ENV: "production"
      MEILI_DB_PATH: "/meili_data/data.ms"
    volumes:
      - "/Volumes/Storage/meilisearch:/meili_data"
```

And here's the script that generates the plists from it.

```python
#!/usr/bin/env python3
"""Generate launchd plist files from a services.yaml definition."""

import yaml  # third-party: PyYAML
from pathlib import Path
from xml.sax.saxutils import escape

PLIST_TEMPLATE = '''<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.{name}</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/container</string>
        <string>run</string>
        <string>--name</string>
        <string>{name}</string>
        {args}
        <string>{image}</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/var/log/{name}.log</string>
    <key>StandardErrorPath</key>
    <string>/var/log/{name}.error.log</string>
</dict>
</plist>'''


def generate_args_xml(config):
    """Build the XML string elements for ports, volumes, and env vars."""
    args = []

    for port in config.get('ports', []):
        args.extend(['-p', port])

    for volume in config.get('volumes', []):
        args.extend(['-v', volume])

    for key, value in config.get('environment', {}).items():
        args.extend(['-e', f'{key}={value}'])

    # Escape &, <, > so an env value can't break the plist XML
    return '\n        '.join(f'<string>{escape(arg)}</string>' for arg in args)


def generate_plist(name, config):
    """Generate a complete plist string for a single service."""
    return PLIST_TEMPLATE.format(
        name=name,
        args=generate_args_xml(config),
        image=config['image']
    )


def main():
    with open('services.yaml') as f:
        config = yaml.safe_load(f)

    output_dir = Path.home() / 'Library' / 'LaunchAgents'
    output_dir.mkdir(parents=True, exist_ok=True)

    for name, service_config in config['services'].items():
        plist = generate_plist(name, service_config)
        output_path = output_dir / f'local.{name}.plist'
        output_path.write_text(plist)
        print(f"Generated {output_path}")


if __name__ == '__main__':
    main()
```

It's not Docker Compose and it doesn't claim to be, but for my use case it's close enough, and it generates proper macOS service definitions that actually respect the platform instead of fighting it. I can add a new service to the YAML file, run the script, load the plist, and forget about it. That's the whole workflow!
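One habit worth bolting onto that workflow, since launchd fails silently on malformed XML: round-trip each generated file through the standard library's plistlib before loading it. A minimal sketch (the local.* glob matches the naming convention from the generator script):

```python
#!/usr/bin/env python3
"""Sanity-check generated launchd plists by parsing them with plistlib."""

import plistlib
from pathlib import Path


def check_plist(path: Path) -> dict:
    """Parse a plist file and verify the keys launchd needs are present."""
    data = plistlib.loads(path.read_bytes())
    for key in ("Label", "ProgramArguments"):
        if key not in data:
            raise ValueError(f"{path}: missing required key {key}")
    return data


if __name__ == "__main__":
    agents = Path.home() / "Library" / "LaunchAgents"
    if agents.is_dir():
        for plist in sorted(agents.glob("local.*.plist")):
            info = check_plist(plist)
            print(f"{info['Label']}: {len(info['ProgramArguments'])} args")
```

If the generator ever emits broken XML, this fails loudly at generation time instead of leaving you staring at a service that never started.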

So what's actually running on this thing? Calibre Web serves my ebook collection, a few thousand books on that external SSD, accessible from any device in the house. It's been running on port 8083 for weeks without me touching it once. Meilisearch powers search across a couple of my personal apps; it's absurdly fast and the container uses basically no resources when idle. Home Assistant was the migration that surprised me most. The Docker version was using 1.5GB of RAM just existing. On Containerization it uses about 400MB and responds faster. It controls the lights, tracks energy usage, logs temperature data, and so on. None of that is heavy work, but the difference is noticeable.

I also built a few personal web apps because I got tired of subscription fees and mediocre UIs. A bookmark manager that actually works the way I think, a read-it-later service that doesn't try to sell me a premium tier, a note-taking app I vibecoded in an hour that's just Markdown files with search. Each one runs in its own container with a SQLite database mounted from the host. They're simple, they're mine, and they don't cost me $5/month each.

Nginx sits in front of everything as a reverse proxy, handling TLS and routing requests to the right containers.

```nginx
upstream calibreweb {
    server 127.0.0.1:8083;
}

upstream homeassistant {
    server 127.0.0.1:8123;
}

upstream meilisearch {
    server 127.0.0.1:7700;
}

server {
    listen 443 ssl http2;
    server_name books.local;

    ssl_certificate /etc/nginx/certs/local.crt;
    ssl_certificate_key /etc/nginx/certs/local.key;

    location / {
        proxy_pass http://calibreweb;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 443 ssl http2;
    server_name home.local;

    ssl_certificate /etc/nginx/certs/local.crt;
    ssl_certificate_key /etc/nginx/certs/local.key;

    location / {
        proxy_pass http://homeassistant;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

server {
    listen 443 ssl http2;
    server_name search.local;

    ssl_certificate /etc/nginx/certs/local.crt;
    ssl_certificate_key /etc/nginx/certs/local.key;

    location / {
        proxy_pass http://meilisearch;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Everything stays local, so there's no external access! That also means I don't have to struggle with VPN complications or any elaborate firewall rules. PostgreSQL runs natively via Homebrew for the apps that outgrew SQLite, no container needed either! Isolation doesn't buy me anything there. Background jobs handle the boring maintenance stuff, mainly just Python scripts that sync data, process images, back things up. They run on launchd timers instead of cron because this is macOS and the native tools actually work.

```xml
<!-- ~/Library/LaunchAgents/local.backup.plist -->
<key>StartCalendarInterval</key>
<dict>
    <key>Hour</key>
    <integer>2</integer>
    <key>Minute</key>
    <integer>0</integer>
</dict>
```
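The backup job that timer fires is nothing fancy either. A representative sketch (the paths and the retention count are illustrative, not the exact scripts I run): tar up a data directory, keep the newest few archives, delete the rest.

```python
#!/usr/bin/env python3
"""Nightly backup: archive a data directory, keep the newest N archives."""

import tarfile
import time
from pathlib import Path


def backup(source: Path, dest: Path, keep: int = 7) -> Path:
    """Archive `source` into `dest`, pruning all but the `keep` newest archives."""
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime('%Y%m%d-%H%M%S')
    archive = dest / f"{source.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)

    # Timestamped names sort chronologically, so newest-first is a reverse sort
    archives = sorted(dest.glob(f"{source.name}-*.tar.gz"), reverse=True)
    for old in archives[keep:]:
        old.unlink()
    return archive


if __name__ == "__main__":
    # Hypothetical paths; the real jobs point at the external SSD
    src = Path("/Volumes/Storage/homeassistant")
    if src.exists():
        print(backup(src, Path("/Volumes/Storage/backups")))
```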

What I learned after the migration is that the M1 is absurdly overpowered for this kind of work. Peak load barely touches 30% CPU, and most of the time it's completely idle. The power efficiency is actually impressive; my electricity bill didn't move at all when this started running 24/7. I can check resource usage per container with container stats, and it's genuinely fun to watch how little these things consume. The sub-second container start times are wild. Docker Desktop takes 2-3 seconds to start a container because it's waking up the VM and doing VM things. Containerization just starts; you run the command and it's already there.

And the external SSD was definitely the right call. Apple wants 400 EUR to upgrade from 256GB to 1TB of internal storage. I spent 100 EUR on a 2TB Samsung T7 and just stuck it to the back. Not elegant at all, but I have way more storage than I'd ever have gotten at Apple's prices. The whole setup looks a bit ridiculous if you flip the desk over, but it works! And nobody sees it, which is frankly the best thing about infrastructure.

I also didn't bother setting up monitoring dashboards. I thought about Prometheus and Grafana for maybe five minutes, then realized I could just check the logs when something breaks, which is almost never. The good thing about these services is that they either work or they don't, and when they don't, the logs tell me why. I didn't overcomplicate the networking either. Everything runs locally with Nginx handling routing. Simple is better! I look at /r/homelab and it's a sea of complexity, full racks and multi-machine setups with dedicated networking gear, and honestly I don't understand who has the time. Some of those setups are beautiful, genuinely impressive engineering, but they also look like a second job.

Containerization is still really new, and the ecosystem is tiny compared to Docker's. There's no Compose equivalent yet (though the community is working on it), fewer guides, and fewer Stack Overflow answers when something goes sideways, where an LLM will be just as lost as you. But for a single-machine homelab where you control everything? It's genuinely better. No persistent VM overhead. Better filesystem performance, because it's actual Linux filesystems on block devices. Proper macOS integration. Resource management that works with the OS instead of against it. Each container starts with a default of 4 CPU cores and 1GB of RAM, and you can tune that per service with --cpus and --memory if you need to.
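Tuning those defaults is a one-flag-per-resource affair. A hedged sketch using the --cpus and --memory flags (the values and the "2g" unit syntax are assumptions; check container run --help for the exact forms):

```bash
# Give Home Assistant more memory headroom than the 1GB default
container run -d \
  --name homeassistant \
  --cpus 2 \
  --memory 2g \
  -p 8123:8123 \
  -v /Volumes/Storage/homeassistant:/config \
  ghcr.io/home-assistant/home-assistant:stable
```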

Total cost was 600 EUR for the Mini (though I'm not sure I can even count that as a cost, since it had already served two full years as my main workstation), 100 EUR for the SSD, and maybe a weekend to set everything up. Since then it's just worked, which means I can stop thinking about the server and actually use the services running on it. That's all I ever wanted! Infrastructure that disappears into the background so I can spend my limited free time on the things I actually care about building.