The Infinite Loop
How my dev team finds bugs, fixes them, and learns from the process — without asking me
It's 11:47 PM. My laptop notifies me: 3 new DevProcess tickets. I glance at them: Severity Low, UI bugs. I type: “Work autonomously on the open tickets.” Then I go to sleep.
The dream of autonomous development
Everyone who works with AI agents knows the feeling: you're the bottleneck. The agent waits for your “yes,” your review, your confirmation. Every commit. Every file. Every decision. That's Human in the Loop — and it doesn't scale.
In my Agent-in-the-Loop study, I measured it: 38% of all my messages to AI agents were frustration overhead. 31% were completely automatable. Only 6% were real strategic decisions.
The autonomous dev team
So I built a system that eliminates the 31%. No new framework, no standalone service — just intelligent chaining of what's already there: 34 skills, 4 hook types, 2 MCP servers, memory system.
The architecture follows a simple principle: risk determines autonomy. Levels 1-3 — code style, commit messages, test strategy — the agent decides alone. Levels 4-6 — architecture, API design — it consults the Knowledge Backbone. Levels 7-8 — breaking changes, security, production — it asks me.
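The tiering above can be sketched as a small router. This is a minimal illustration, not the actual implementation; the names (`Mode`, `autonomy_mode`) and the exact boundaries are my shorthand for the principle "risk determines autonomy":

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "decide alone"
    CONSULT_KB = "consult the Knowledge Backbone"
    ASK_HUMAN = "ask the human"

def autonomy_mode(risk_level: int) -> Mode:
    """Map a task's risk level (1-8) to how much autonomy the agent gets."""
    if not 1 <= risk_level <= 8:
        raise ValueError(f"risk level out of range: {risk_level}")
    if risk_level <= 3:       # code style, commit messages, test strategy
        return Mode.AUTONOMOUS
    if risk_level <= 6:       # architecture, API design
        return Mode.CONSULT_KB
    return Mode.ASK_HUMAN     # breaking changes, security, production

print(autonomy_mode(2).value)  # decide alone
print(autonomy_mode(8).value)  # ask the human
```

The point of making this a single function: every skill in the system asks the same question the same way, so the escalation behavior stays predictable.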
Skill chains: the autonomous workflow
When I say “Implement MVA-456 completely,” this happens: The orchestrator detects the workflow type. Reads the JIRA ticket via MCP. Queries the KB agent for context. Creates a feature branch. Implements. Runs self-review. Tests in the browser. Creates the PR. And reports: “PR ready for review.”
Between those steps: 5 chained skills, 2 MCP queries, 4 quality gates, and zero questions to me. That's the infinite loop: ticket in, PR out.
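Conceptually, the chain is a pipeline of steps passing a shared context forward. The sketch below is hypothetical scaffolding (the step bodies are stubs standing in for real MCP calls, implementation, and tests), but it shows the shape of "ticket in, PR out":

```python
from typing import Callable

Step = Callable[[dict], dict]

# Each step reads from and writes to a shared context dict.
def read_ticket(ctx: dict) -> dict:
    ctx["ticket"] = {"key": ctx["ticket_key"]}   # stub for the JIRA MCP call
    return ctx

def query_kb(ctx: dict) -> dict:
    ctx["kb_context"] = "relevant past decisions"  # stub for the KB agent query
    return ctx

def create_branch(ctx: dict) -> dict:
    ctx["branch"] = f"feature/{ctx['ticket_key']}"
    return ctx

def implement(ctx: dict) -> dict:
    ctx["diff"] = "placeholder diff"
    return ctx

def self_review(ctx: dict) -> dict:
    ctx["review_passed"] = True
    return ctx

def browser_test(ctx: dict) -> dict:
    ctx["tests_passed"] = True
    return ctx

def create_pr(ctx: dict) -> dict:
    ctx["pr"] = f"PR for {ctx['branch']}"
    return ctx

CHAIN: list[Step] = [read_ticket, query_kb, create_branch,
                     implement, self_review, browser_test, create_pr]

def run_chain(ticket_key: str) -> dict:
    ctx: dict = {"ticket_key": ticket_key}
    for step in CHAIN:
        ctx = step(ctx)
    return ctx

result = run_chain("MVA-456")
print(f"{result['pr']} ready for review")
```

In the real system each step is a skill with its own quality gate; the pipeline only advances when the gate passes.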
The Knowledge Backbone
The heart is the KB agent — a semantic knowledge store with temporal knowledge management. Every decision, correction, and error flows back. Confidence decay ensures outdated knowledge loses weight. Conflict detection catches contradictory information.
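Confidence decay can be modeled as an exponential half-life on each knowledge entry. The function below is an assumed model, not the KB agent's actual formula; the 90-day half-life is an illustrative parameter:

```python
from datetime import datetime, timedelta

def decayed_confidence(initial: float, stored_at: datetime,
                       now: datetime, half_life_days: float = 90.0) -> float:
    """Weight a knowledge entry by age: confidence halves every half_life_days."""
    age_days = (now - stored_at).total_seconds() / 86400
    return initial * 0.5 ** (age_days / half_life_days)

now = datetime(2025, 6, 1)
fresh = decayed_confidence(0.9, now - timedelta(days=1), now)
stale = decayed_confidence(0.9, now - timedelta(days=180), now)
print(f"fresh: {fresh:.2f}, stale: {stale:.2f}")
```

The effect: a six-month-old architecture decision still surfaces in queries, but a fresh correction on the same topic outweighs it, which is exactly what conflict detection needs.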
When I correct the agent, the learning loop captures it and stores it. Next time, it won't make the same mistake. The system improves with every interaction.
Circuit breakers and reliability
Autonomy without reliability is dangerous. So I built a reliability system: workflow state machine, task persistence, circuit breakers for external services, confidence tracking for decisions.
When JIRA is unreachable, the cache kicks in. When a skill fails, the agent automatically tries a different approach. When browser tests report errors, the agent reads the source code before fixing. Three attempts. Then escalation to me.
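Those two behaviors, fail fast with a cached fallback and escalate after three attempts, map onto two small primitives. This is a sketch of the pattern, not the production code; class and parameter names are mine:

```python
import time

class CircuitBreaker:
    """After max_failures consecutive errors the breaker opens: calls fail fast
    and return the fallback (e.g. a cached ticket) until reset_after seconds
    pass, then one probe call is allowed through."""

    def __init__(self, max_failures: int = 3, reset_after: float = 60.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback=None):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback           # open: skip the flaky service entirely
            self.opened_at = None         # half-open: allow one probe call
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback
        self.failures = 0                 # success closes the breaker again
        return result

def with_escalation(approaches, max_attempts: int = 3):
    """Try alternative approaches in order; after max_attempts, escalate."""
    for attempt in approaches[:max_attempts]:
        try:
            return attempt()
        except Exception:
            continue
    raise RuntimeError("escalate to human")
```

The breaker protects external services like JIRA; the escalation wrapper is what turns "retry forever" into "three attempts, then ping me."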
The numbers
Before: 10 manual triggers per feature, 5 corrections, 3 context rebuilds, 4 hours time-to-ship. After: 2-3 triggers, 1-2 corrections, zero context rebuilds, under one hour.
These aren't marketing numbers. These are real metrics from my daily work on 5 projects in parallel. The point isn't perfection — it's direction. And the direction is right.
What comes next
The infinite loop is never done. Currently I'm working on improving retro automation: after every sprint, the system analyzes its own performance. Which skills were slow? Which corrections accumulated? Which rules need adjustment?
The goal: not me optimizing the system. The system optimizing itself. And I remain the conductor who writes the score.
— Philipp