April 10, 2026 · Infrastructure · 7 min

My Own Jarvis

Proxmox, Kubernetes, local LLMs, and 54 terabytes — my family’s AI infrastructure

When my wife asks what’s in the server room, I say: “Jarvis.” That’s a lie, of course. There’s a Minisforum N5Pro in there. Small, quiet, 96 GB RAM. But what runs on it is closer to Jarvis than to a normal home server.

Why a homelab when there’s the cloud?

Because I want to control my data. Photos, documents, tax returns, health records, my kids’ school reports — that doesn’t belong on someone else’s servers. And because I want to run local LLMs. Models that process my documents without a single token leaving the hardware.

That sounds like principle for principle’s sake. It’s not. When your homelab processes your tax documents, tags your family photos, and analyzes your health data, “local” isn’t a preference. It’s the only defensible option.

The architecture behind Jarvis

Proxmox 9 as the hypervisor. 8 VMs: TrueNAS for 54 TB of storage, Home Assistant for home automation, Windows for KNX programming with ETS — and a full k3s Kubernetes cluster with 3 masters and 2 workers. Plus 26 LXC containers, among them DevProcess, Docker, n8n, Qdrant, Elasticsearch, PostgreSQL, Redis and InfluxDB. An AI stack. Document processing. Monitoring. Media. Networking.

Paperless-NGX and Stirling-PDF for documents. Grafana, Prometheus and Uptime Kuma for monitoring. Jellyfin, PhotoPrism and Audiobookshelf for media. WireGuard, AdGuard and UniFi for networking. Node-RED, MQTT and ESPHome for home automation. And in between: n8n as orchestrator, connecting everything.

Family at the center

The storage architecture is family-centric. Each family member has their own space: documents, photos, videos, backups. Plus a shared area: family photos, vacation videos, insurance, school documents. PhotoPrism sorts automatically. Audiobookshelf manages our audiobooks. Jellyfin streams throughout the house. All local, all under my control.

Ollama and the n8n agents

The AI stack uses all 96 GB of RAM. Ollama runs multiple models: Llama for general tasks, Codellama for code analysis, Mistral for quick answers. No token goes to an external server. No prompt is logged. No model can be shut down because a provider changes their pricing.

The truly exciting part isn’t the individual services. It’s the interconnection. n8n workflows as agents that search Paperless documents, generate answers from Ollama, feed results into Home Assistant. All local. No cloud subscription, no API costs for routine tasks, no dependency on companies that kill their free tier tomorrow.

RAG for your own life

The workflow is simple: snap a document with Adobe Scan, forward the email to a specific address — done. Different email addresses mean different pre-qualifications: invoice, insurance, school, health. Paperless-NGX ingests the email, and Paperless AI handles fine classification using local LLMs. No manual sorting, no folder system, no cloud.

The same goes for digital mail: forward the email, done. The endgame is RAG over everything. Qdrant as vector database. Ollama embeddings for semantic search. “When was the last vehicle inspection?” — and the system doesn’t search folders but understands the question and finds the answer in a scanned letter from three years ago. On a Minisforum under my desk.
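The question-to-answer path can be sketched in a few lines, assuming an Ollama embedding model such as `nomic-embed-text` and a Qdrant collection named `documents` (both assumptions, as are the hostnames):

```python
"""Sketch of the RAG query path: embed the question locally with Ollama,
then nearest-neighbour search in Qdrant. Hostnames, collection name,
and embedding model are assumptions."""
import json
import urllib.request

OLLAMA = "http://ollama.local:11434"
QDRANT = "http://qdrant.local:6333"

def _post(url: str, payload: dict) -> dict:
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def search_payload(vector: list[float], limit: int = 3) -> dict:
    """Qdrant search body: k nearest neighbours plus the stored text."""
    return {"vector": vector, "limit": limit, "with_payload": True}

def semantic_search(question: str, limit: int = 3) -> list[dict]:
    # 1. Turn the question into a vector -- locally
    emb = _post(f"{OLLAMA}/api/embeddings",
                {"model": "nomic-embed-text", "prompt": question})["embedding"]
    # 2. Find the closest document chunks, wherever they were filed
    hits = _post(f"{QDRANT}/collections/documents/points/search",
                 search_payload(emb, limit))["result"]
    return [h["payload"] for h in hits]
```

Feed the returned chunks into the generation prompt and the scanned letter from three years ago answers the question itself.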

The Doorbird and face recognition

The doorbell is a Doorbird D2102. The camera recognizes faces. Home Assistant reacts: known face → open door, lights on, start music. Unknown face → notification with photo to phone. All processed locally, no cloud service.

The dream of an autonomous home

Jarvis isn’t finished. He’ll never be finished. But he grows. Every new service, every new workflow, every new model makes him a bit smarter. And one day he won’t just answer my questions, but anticipate them.

When my son asks in ten years why there’s a server in the basement, I’ll say: “That’s Jarvis. He watches over us.”

— Philipp