Every 10 minutes, three AI agents wake up on our laptop. One researches leads and scans the web. One writes content — blog posts, social captions, outreach emails. One manages operations — updating task queues, auditing previous output, organizing files.
They run in parallel. They share state through markdown files. They pick up exactly where the last session left off. And every 10th cycle, the system audits its own performance and rewrites its own instructions based on what it learns.
The entire thing runs on a MacBook with a Claude subscription. No cloud GPUs. No API keys. No DevOps. Here is how it works and what it actually costs.
The Architecture: Skill File + State Files + Subagents
The system has three components:
1. The skill file. A markdown document — about 250 lines — that describes the growth agent's job. What to do each cycle, in what order, with what tools. Think of it as a runbook that an AI can execute. It includes the strategy (who to sell to, how to find them, what to say), the process (orient, plan, execute, update state), and the rules (do not research more than 10 leads ahead of outreach, deploy the website at most once every 3 cycles, every cycle must produce something within 1-2 steps of a booking).
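A skill file organized along those lines might look like the sketch below. The headings and wording are illustrative, not the actual 250-line file:

```markdown
# Growth Agent Skill

## Strategy
- Target: service businesses with manual operations and no content engine
- Channel: personalized cold email, backed by published proof

## Process (every cycle)
1. Orient: read STATE.md, PROGRESS.md, QUEUE.md
2. Plan: pick 1-3 tasks closest to a booked call
3. Execute: spawn scoped subagents, triage results
4. Update state: rewrite STATE.md, append to PROGRESS.md and the daily log

## Rules
- Never research more than 10 leads ahead of outreach
- Deploy the website at most once every 3 cycles
- Every cycle must produce something within 1-2 steps of a booking
```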
2. The state files. Four markdown files that persist between sessions: STATE.md (what to do next), PROGRESS.md (what was done today), QUEUE.md (prioritized task list), and a daily log. Every session starts by reading these files. Every session ends by updating them. This is how continuity works — each session is a fresh spawn, but the state files give it full context.
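A STATE.md handoff might read like this sketch (all contents invented for illustration):

```markdown
# STATE.md

## Next actions
1. Send follow-ups to the 3 leads emailed last cycle
2. Draft outreach for the next two leads in QUEUE.md

## Do not
- Research new leads (10 uncontacted leads already queued)
- Deploy the website (last deploy was 1 cycle ago)

## Handoff notes
Blog draft awaiting human review in the content drafts folder
```

Because each session starts by reading a file like this, a freshly spawned agent can act as if it had been running continuously.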
3. The subagents. Each cycle spawns 1-3 parallel agents, each with a scoped task. A research agent searches the web for leads. A content agent drafts a blog post. An ops agent updates the queue. They run simultaneously and return results to the main agent, which triages and updates state.
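The fan-out pattern is simple to sketch. This is a minimal stand-in, assuming each subagent can be modeled as a function that takes a scoped task and returns a summary; the role names and task strings are illustrative, and in the real system each call would spawn a Claude subagent rather than a thread:

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(role, task):
    # Placeholder for spawning a scoped subagent and collecting its result.
    return f"{role}: completed '{task}'"

CYCLE_TASKS = {
    "research": "find 3 leads in the fitness niche",
    "content": "draft LinkedIn post from latest case study",
    "ops": "reprioritize QUEUE.md",
}

def run_cycle(tasks):
    # Fan out: subagents run in parallel; results return to the main
    # agent, which triages them and updates the state files.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {role: pool.submit(run_subagent, role, t)
                   for role, t in tasks.items()}
        return {role: f.result() for role, f in futures.items()}

results = run_cycle(CYCLE_TASKS)
```

The key design choice is that subagents never write shared state directly; only the main agent does, which keeps the markdown files consistent.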
The Three Agent Roles
Research Agent. This one searches the web for businesses that need AI automation. It finds their websites, reads their pages, identifies pain points (fragmented tools, no content marketing, manual operations), and scores them as leads. It saves each lead as a markdown file with the business name, contact email, score, pain points, and recommended outreach angle. It has found fitness coaches running 4 separate booking platforms, tutoring companies charging $350/hr with zero SEO, and business coaches teaching AI who do not use it themselves.
Content Agent. This one writes. Blog posts, LinkedIn content, outreach emails, YouTube scripts, case study sections. It reads existing content for voice consistency, references real production data from our systems, and produces first drafts that we review and approve. It wrote 4 full blog posts, 5 LinkedIn posts, 10 personalized outreach emails, and a YouTube script — all in the same 48-hour window.
Ops Agent. This one does housekeeping. It updates the task queue, audits previous output for quality, organizes lead files, and cleans stale data. It also handles self-improvement — every 10th cycle, it reviews the last 10 cycles of logs and recommends changes to the skill file itself. In one audit, it identified that the system was spending 35% of its time iterating on a website that nobody was visiting, and rewrote its own priority order to focus on outreach instead.
Self-Improvement Is Built In
The skill file includes a self-improvement protocol. Every 10th cycle, the agent runs a self-audit: What produced results? What was busywork? What should change?
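In code terms, the trigger is just a modulus check on the cycle counter. The classifier below is a toy stand-in for the agent's actual judgment, with keywords chosen for illustration:

```python
AUDIT_EVERY = 10

def should_audit(cycle_number):
    # Fire the self-audit on every 10th cycle.
    return cycle_number > 0 and cycle_number % AUDIT_EVERY == 0

def summarize(log_lines):
    # Toy classifier: anything that reached an outside audience counts
    # as a result; everything else is flagged as potential busywork.
    outcome_words = ("sent", "published", "booked")
    results = [l for l in log_lines if any(w in l for w in outcome_words)]
    return {"results": len(results), "busywork": len(log_lines) - len(results)}
```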
At cycle 8, the agent conducted its first audit. It analyzed 7 cycles of output and concluded:
- The highest-impact activity (outreach email drafting) got the least time — 15% of total effort
- The lowest-impact activities (website iteration, state file management) consumed 45% of total effort
- Zero items had been published, sent, or seen by anyone outside the team
- 5 website deployments were made to an audience of zero
- Lead research continued when 11 leads already existed with no outreach sent
The agent then updated its own skill file with new rules: cap lead research at 10 until emails are flowing, batch website deploys, measure every cycle by distance to a booked call instead of volume of output.
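Rules like these work because they are hard gates the agent can check before acting, not vague guidance. A minimal sketch, with names and thresholds taken from the rules above:

```python
MAX_LEAD_BACKLOG = 10      # cap from the rewritten skill file
DEPLOY_BATCH_CYCLES = 3    # website deploys batched, not every cycle

def may_research_leads(uncontacted_leads):
    # Stop researching once 10 leads sit in the queue with no email sent.
    return uncontacted_leads < MAX_LEAD_BACKLOG

def may_deploy_site(cycles_since_last_deploy):
    # Batch deploys instead of shipping a tweak every cycle.
    return cycles_since_last_deploy >= DEPLOY_BATCH_CYCLES
```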
This is not theoretical AI self-improvement. These are specific instruction changes the agent wrote into its own configuration file, which changed its behavior in subsequent cycles. Cycle 13 — the first cycle after the new rules — sent 10 cold emails. The previous 12 cycles sent zero.
What It Actually Costs
A Claude subscription. That is the recurring cost for the AI engine.
The scheduled task in Claude Desktop fires every cycle, runs for up to 10 minutes, spawns subagents, does the work, and shuts down. The subscription covers all of it — no per-token billing, no surprise charges, no runaway costs at 3 AM.
Additional costs that are not Claude-specific:
- Domain registration (~$12/year)
- Firebase Hosting (free tier covers it)
- A VPS for email automation ($5-10/month — we already had this)
- Email hosting for the business address (~$20/year)
Total infrastructure cost beyond the Claude subscription: under $15/month.
Compare that to what we replaced: if you hired a freelance marketer to do what the agents do — research leads, write personalized outreach, draft blog posts, manage social content, maintain a website — you are looking at $2,000-5,000/month minimum. The agents do it for the cost of a subscription and a domain name.
The Human Layer: What We Actually Do
The agents handle about 90% of the execution. We handle 100% of the decisions.
Here is what that looks like daily:
Morning. We check PROGRESS.md to see what the agents did overnight or since we last looked. We scan the lead files. We review any content drafts. We approve, edit, or reject. This takes 10-15 minutes.
When an outreach email gets a reply. We handle the conversation. The agent drafted the first email and got us in the door. The human relationship takes over from there.
When content needs recording. We record ourselves on camera. The agents handle everything before (researching, scripting) and after (editing, publishing, distributing). We show up and talk.
Strategy decisions. Which audience segments to prioritize. Whether to adjust pricing. When to launch a new product. The agents execute strategy — they do not set it.
The bottleneck is always our approval, never the execution. That is by design. An autonomous system that sends bad emails or publishes bad content without review would do more harm than good. The human-in-the-loop is not a limitation — it is the quality control layer.
What This Means for Your Business
If you run a business where you spend hours on tasks that do not require your best thinking — researching, drafting, scheduling, following up, posting — you are doing manually what AI agents can do from a laptop.
The setup is not trivial. Writing a good skill file requires understanding the business deeply enough to separate decisions from execution. Choosing the right tools and integrations requires knowing what exists and how to wire it together. Building the state management so agents have continuity between sessions requires architectural thinking.
That is what we do. We study your workflow, write the skill files, wire the integrations, and hand you a system that runs from your machine. You keep the decisions. The agents handle the rest.
Three agents. One laptop. One subscription. Real output, every day.
Ready to see what AI can do for your business?
We build custom AI systems like the ones we write about. Fifteen minutes is all it takes to map your workflows and show you what is possible.
Book an AI Intro Consultation