How I ship features without a dev team
The exact workflow I use to go from idea to deployed feature as a non-technical founder: Linear, Claude Code, AMP, and a CI/CD pipeline that keeps it safe.
I'm the CEO of a digital clinic. I have a background in biotechnology, no formal CS degree, and until about 18 months ago, I couldn't write a line of production code. Last week I shipped a feature that pulls patient data from our CRM, runs it through a logic layer, and surfaces a prioritized task list for our medical team. It took me three days.
This isn't a post about AI being magic. It's a walkthrough of the exact workflow I use — from the moment an idea surfaces to the moment it's live — so you can judge whether it would work for you.
Where ideas come from and what I do with them immediately
Ideas at Lumina come from three places: patient feedback, things that break, and things I notice while using our own platform. When one surfaces, I have maybe five minutes before I'll forget the specific detail that made it click.
My first move is always a Linear ticket. Not a note app, not a Notion doc, not a voice memo. Linear forces me to write a title and at minimum one sentence of context. That friction is useful — if I can't describe the feature in one sentence, I don't understand it yet.
One thing worth mentioning: I barely type. Almost everything goes into Linear via dictation using WisprFlow. I speak the ticket, WisprFlow transcribes it cleanly, and I'm done in 30 seconds. Same for spec comments, feedback notes, everything. It sounds like a small thing but it's removed most of the activation energy around capturing ideas properly.
Here's roughly what a ticket looks like at this stage:
> **Title:** Medical team task prioritization view
>
> **Description:** Right now the team goes into each patient profile one by one to decide what to action. We need a consolidated view that surfaces which patients need attention today, ordered by urgency. Saves ~40 min/day per advisor.
That estimated time saving isn't decoration. It's the first filter. If I can't justify the build time against a real cost — hours, churn, CAC — the ticket moves to the backlog and often dies there, which is the right outcome.
Planning: from ticket to spec
Once a ticket reaches the top of my priority queue, I spend 30-60 minutes turning it into something I can actually build from. Again, mostly dictated through WisprFlow directly into the Linear ticket as a structured comment.
I don't write code specs in the traditional sense. I write a detailed description of what the feature needs to do, what data it touches, what the edge cases are, and what "done" looks like.
For the task prioritization feature, the spec included:
- The data model (which fields from patient records define urgency)
- The ranking logic (last contact date, treatment phase, outstanding actions)
- The UI layout I had in mind (I'd already sketched it in Magic Patterns)
- What the feature explicitly doesn't do (e.g., it's read-only in this version — no inline editing)
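To make the ranking-logic part of the spec concrete, here's a minimal sketch of what that kind of scoring function looks like. This is illustrative only, not Lumina's actual code: the field names (`last_contact_days`, `treatment_phase`, `outstanding_actions`) and the weights are hypothetical stand-ins for the clinically nuanced rules described below.

```python
from dataclasses import dataclass

# Hypothetical weights -- the real clinical rules are far more nuanced.
PHASE_WEIGHTS = {"onboarding": 3, "titration": 2, "maintenance": 1}

@dataclass
class Patient:
    name: str
    last_contact_days: int    # days since last contact
    treatment_phase: str      # e.g. "onboarding", "maintenance"
    outstanding_actions: int  # open tasks on this patient

def urgency_score(p: Patient) -> int:
    """Higher score = needs attention sooner."""
    score = p.last_contact_days                        # stale contact raises urgency
    score += 5 * PHASE_WEIGHTS.get(p.treatment_phase, 1)
    score += 2 * p.outstanding_actions
    return score

def prioritized(patients: list[Patient]) -> list[Patient]:
    # Read-only view in this version: we sort, we never mutate records.
    return sorted(patients, key=urgency_score, reverse=True)
```

The point of writing it down this explicitly in the spec is that every term in the score is something a clinician can argue with, which is exactly the conversation the iteration below turned out to require.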
The last point matters a lot. Scoping what you're not building is where most feature bloat starts. The coding agents will build what you describe, so if you're vague, they fill gaps with assumptions.
Coding: Claude Code and AMP working together
My main coding agent is Claude Code, which runs in the terminal against my local repo. But increasingly I use AMP alongside it — a second terminal-based coding agent that covers the same ground — and I've developed a habit of cross-referencing both, particularly when I'm making decisions I'm not fully confident about.
The practical pattern: I'll use Claude Code to generate an approach, then feed the same problem to AMP and see if it reaches the same conclusion. When they agree, I have more confidence moving forward. When they disagree, that disagreement is usually pointing at a genuinely ambiguous problem I need to think through more carefully before committing to anything. It's added maybe 20% more time to some sessions, but it's caught real mistakes — structural decisions that would have been annoying to undo later.
My process with both tools is iterative, not one-shot. I work in three phases.
Phase 1: Scaffolding. I share the spec and ask for the file structure, the data fetching logic, and the basic component shell. I review this before running anything. At this stage I'm looking for structural decisions that will be hard to undo — how it's fetching data, where state lives, whether it's introducing a new dependency I'll have to maintain.
Phase 2: Feature build. Once the scaffold looks right, I move through the spec section by section. Asking one well-defined problem at a time produces cleaner, more auditable code than trying to do everything in one prompt.
Phase 3: Edge cases and cleanup. I test the feature manually, write down everything that breaks or looks wrong, and bring that list back as a numbered set of fixes. A concrete bug list gets resolved faster than a vague "something's off."
For this specific feature, the part that required the most back-and-forth was the ranking logic. The rules for what makes a patient "urgent" are clinically nuanced, and I had to iterate on the logic several times with our endocrinologist before the output actually matched her intuition. That's not a limitation of the tools — it's a reminder that domain expertise still has to come from somewhere.
When I hit Claude Code's usage limits on heavy build days, I switch to AMP fully, or to Warp. The workflow is the same; the context window management is just slightly different.
From local to live
Once the feature works locally and I've done a manual QA pass, I push to staging and share the URL with one or two internal users — usually our medical team lead. I collect feedback over 24-48 hours, fix anything critical, then merge to main.
The part I feel genuinely good about here is our CI/CD pipeline, which was built by senior engineers. It's solid. It runs automated checks before anything reaches production, which means I can push with real confidence rather than crossing my fingers. Having that infrastructure underneath me — infrastructure I didn't build and couldn't have built on my own — is a big part of why this solo-founder-shipping-features setup actually works in practice. The AI handles the feature code; the pipeline handles making sure I haven't broken something else in the process.
Sentry catches errors in production. When something surfaces there, Codex has an automation that converts it into a Linear ticket automatically — I set that up in about an hour. It means nothing falls through the cracks between "error happened" and "someone's working on it."
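The shape of that automation is simple: take the fields you care about from the Sentry alert and turn them into a Linear issue draft. A rough sketch of the mapping step, with the payload field names (`title`, `web_url`, `culprit`) as loose assumptions about Sentry's alert schema rather than the exact contract, and the actual Linear API call left out:

```python
def sentry_to_linear_ticket(event: dict) -> dict:
    """Map a Sentry alert payload to a Linear issue draft.

    Field names are assumptions about the webhook schema,
    not the exact contract -- check your own payloads.
    """
    title = event.get("title", "Untitled production error")
    url = event.get("web_url", "")
    culprit = event.get("culprit", "unknown location")
    return {
        "title": f"[Sentry] {title}",
        "description": (
            f"Production error in `{culprit}`.\n\n"
            f"Sentry link: {url}\n\n"
            "Auto-created from a Sentry alert."
        ),
        "label": "bug",
    }
```

Everything after the mapping is plumbing — receive the webhook, call Linear's API with the draft — which is why the whole thing took about an hour to set up.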
What the real example taught me
The task prioritization feature is live. Our medical team uses it every morning. The 40-minute estimate turned out to be roughly right — closer to 35 minutes saved per advisor per day once you account for the time they still spend on follow-up.
What I didn't anticipate was how much it changed the texture of their mornings. Before, they'd start the day by scanning through individual profiles trying to construct a mental picture of who needed what. Now they start with a list that already reflects that picture. That's not something I could have spec'd in advance. I only know it because I asked.
That feedback loop — deploy, observe, ask, improve — is the part of this workflow that gets underweighted in most "how I build with AI" posts. The coding is the easier part. Knowing what to build and whether it actually solved the problem requires talking to the people using it.
The workflow, summarized
| Stage | Tool | What I'm actually doing |
|---|---|---|
| Idea capture | Linear + WisprFlow | Dictate a ticket with one-sentence description and time/cost justification |
| Planning | Linear + Magic Patterns + WisprFlow | Dictate a detailed spec; sketch UI; define what's out of scope |
| Coding | Claude Code + AMP | Scaffold → build → edge cases, one problem at a time. Cross-reference both agents on key decisions |
| Overflow | Warp | Same workflow when usage limits hit |
| QA | Manual + Staging | Test locally, share staging URL with one real user |
| Deploy | CI/CD pipeline | Pipeline runs automated checks; merge to main when clear |
| Error tracking | Sentry → Linear (via Codex) | Automated ticket creation from production errors |
The honest caveat: this works because Lumina's codebase is one I understand well enough to review what the agents generate. I'm not reading every line, but I know when something looks structurally wrong. If you're working in a codebase you don't understand at all, the review stage will be slower — not broken, just slower.
The other thing that made this possible is that I started small. The first thing I shipped with Claude Code was a minor UI fix. The task prioritization feature was maybe the 30th thing I'd built this way. There's a ramp-up, and it's worth going through it on something low-stakes.
Three days, one feature, zero engineers on my side. That's the output. The input is a workflow you can actually follow.
