2025-02-25 · 10 min read

How We Cut Our CAC by 70%: The Experiments, the Creatives, and the AI Tools That Actually Helped

How I reduced CAC by over 70% with almost no budget. The exact experiments I ran, how I used AI tools like Manus for research, and why the experimentation mindset matters more than any tool.


When I started running paid acquisition for my company, our cost per acquisition was high enough to make the channel unsustainable. We had limited budget and no room for waste. Over the following months, through a series of structured experiments, we reduced CAC by over 70% — while keeping ad spend roughly constant.

This is what I did, including the assumptions that turned out to be completely wrong.


It Started With Conversations, Not Ads

Before touching a single creative, I did something that sounds obvious but most people skip: I talked to every lead we had. Not a quick survey. Actual calls.

I tested different scripts — different ways of framing the problem, different questions, different emotional angles. Some resonated immediately. Others got polite non-answers. After enough of these conversations, a clear pattern emerged. And it challenged an assumption I had been building our targeting strategy on.

I had assumed our customers would skew higher-income. It felt like a reasonable hypothesis — the product sits in a premium segment and the price point isn't cheap. So we had been crafting messaging and targeting parameters with that profile in mind. It turned out to be wrong. The people who were most motivated to buy, most eager to start, and most likely to convert were not the high-income segment I had assumed. The calls made that obvious within a few weeks. We had been optimizing for the wrong person entirely.

This is the kind of thing you cannot learn from analytics. Analytics tells you what happened. Conversations tell you why, and more importantly, who.

Once I had an accurate picture of who was actually buying and why, the messaging came together quickly. Before those calls, I was guessing. After them, I was translating real language that real people had used back to them.

I heard Adam Grenier — who built Uber's growth marketing infrastructure from the ground up — talk about this on Lenny's podcast. When asked about the biggest mistake marketers make, he said: "If you just assume you need to launch a new channel to fix this problem, you're going to be wrong, because your entire customer base changed, not just the next 10% of customers you're looking for." The broader point was that understanding your customer has to come before everything else. That's exactly what those calls gave me. The assumption I had carried into the first few months of paid acquisition wasn't based on data — it was based on a prior I hadn't questioned.


Building Creatives With Almost Zero Budget

Once I had messaging grounded in actual customer conversations, I translated it into static creatives. I did this in Figma — no agency, no designer, just a structured testing approach.

The process was deliberately sequential, and keeping it sequential is the whole point.

First, I isolated messaging. Same visual template, different copy. I wanted to know which angle performed best before touching any design variables. Running them through Meta, I let the data tell me which message got people to stop scrolling.

Once I found a message that worked, I started tweaking the visual side — different imagery, different layouts, different color treatments. This is where most people get it backwards. They tweak everything at once and end up with no idea what moved the needle. If your new creative outperforms the old one but you changed the headline, the image, and the color scheme simultaneously, you've learned nothing you can build on.

Changing one variable at a time feels inefficient. In practice it's the only way to actually accumulate knowledge across tests rather than just accumulating data.
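
To make the sequencing concrete, here's a minimal sketch of how a two-phase test plan can be written down before anything goes live. The angles, names, and template labels are placeholders, not our actual creatives:

```python
# Phase 1: copy is the only variable. The visual template stays frozen.
phase_1 = [
    {"ad": f"copy-{i}", "template": "static-v1", "headline": headline}
    for i, headline in enumerate(
        ["Angle A: outcome-focused", "Angle B: pain-focused", "Angle C: social proof"],
        start=1,
    )
]

# Phase 2 only exists once Phase 1 has a winner. Now copy is frozen and visuals vary.
winning_headline = "Angle B: pain-focused"  # placeholder for whatever Phase 1 shows
phase_2 = [
    {"ad": f"visual-{i}", "template": template, "headline": winning_headline}
    for i, template in enumerate(
        ["static-v1", "static-v2-photo", "static-v3-illustration"], start=1
    )
]
```

Writing the plan down as data before launch makes it harder to quietly change two variables at once mid-test.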


How I Structured the Meta Campaigns

Getting the messaging and creatives right is only half the job. The other half is campaign structure, and this is where a lot of small advertisers waste money without realizing it.

If you put all your test creatives in one ad set, Meta's algorithm will pick a "winner" early and funnel almost all of the budget to it — often before you have meaningful data on the others. You lose the test before it actually runs.

My approach: each experiment gets its own ad set with its own budget. This forces Meta to actually deliver each ad to a real audience and gives me clean, comparable data. When I want to test a new creative concept, I create a new ad set for it. Winning ad sets keep running. Losing ones get killed.

This structure also makes performance reasoning much easier over time. You're not trying to reverse-engineer why a blended ad set's cost-per-lead suddenly jumped last Tuesday.
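
As a sketch, the structure looks something like this: one ad set per experiment, each with its own ad-set-level budget rather than a shared campaign budget, so the algorithm can't starve a test. Every name and number below is made up for illustration:

```python
# One ad set per experiment, each with its own budget. Names and figures are illustrative.
campaign = {
    "name": "leadgen-prospecting",
    "ad_sets": [
        {
            "name": "exp-01-messaging-angles",
            "daily_budget": 15,                     # budget lives on the ad set
            "ads": ["copy-1", "copy-2", "copy-3"],  # one variable varied: the copy
            "status": "ACTIVE",
        },
        {
            "name": "exp-02-visual-treatments",
            "daily_budget": 15,
            "ads": ["visual-1", "visual-2"],        # one variable varied: the visual
            "status": "ACTIVE",
        },
    ],
}

def add_experiment(campaign, name, ads, daily_budget=15):
    """A new creative concept gets its own ad set; losers get paused, never blended in."""
    campaign["ad_sets"].append(
        {"name": name, "daily_budget": daily_budget, "ads": ads, "status": "ACTIVE"}
    )
```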


When to Kill a Test

This is a question I got wrong early on, mostly by waiting too long.

The practical answer is simpler than most people make it: if an ad generates zero leads over a 7-day period while other ads in the same structured setup are consistently generating leads, it's not working. Kill it.

The reason this works cleanly in our setup is that Meta's algorithm is genuinely good at finding winners within an ad set. Because we isolate one variable per test and run a small number of ads within each ad set, the algorithm has a clear job to do — find which version of that one variable performs best. It usually does. Within a week, there are clear winners getting the volume and clear losers getting nothing.

When an ad has had a real week of delivery and has generated zero leads while the others are pulling strong numbers, that's a signal, not noise. Waiting another week hoping for a turnaround is almost always a waste of budget that could be going to what's working.

The other thing that makes the decision clean is the structure. If you're testing multiple variables simultaneously across a messy ad set, you can never be confident about what's causing underperformance. When you're testing one variable at a time, the poor performer has had a fair shot and lost. You can move on without second-guessing.
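
If you export daily per-ad results, the kill rule itself is a few lines of pandas. The column names here are assumptions about what the export looks like, not a fixed schema:

```python
import pandas as pd

# Assumed export columns: ad_set, ad, date, spend, leads
df = pd.read_csv("ad_results.csv", parse_dates=["date"])

# Last 7 days of delivery
last_7_days = df[df["date"] >= df["date"].max() - pd.Timedelta(days=6)]
by_ad = last_7_days.groupby(["ad_set", "ad"], as_index=False).agg(
    leads=("leads", "sum"), spend=("spend", "sum")
)

# Kill candidates: zero leads over the window while a sibling ad in the same ad set converted
by_ad["ad_set_leads"] = by_ad.groupby("ad_set")["leads"].transform("sum")
kill_list = by_ad[(by_ad["leads"] == 0) & (by_ad["ad_set_leads"] > 0) & (by_ad["spend"] > 0)]
print(kill_list[["ad_set", "ad", "spend"]])
```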


Where AI Actually Helped

I want to be specific here, because "I used AI" has become completely meaningless.

Three things AI genuinely accelerated:

Analyzing competitor ads. I used AI to go through Meta's ad library systematically — pulling competitor creatives, categorizing them by messaging angle, noting what had been running long enough to probably be working. This would have taken days manually. With AI helping structure and summarize the analysis, I could do it in an afternoon. It also helped surface which angles were already saturated in the market, which shaped what I decided to test.

Performance analysis. When I had data across multiple ad sets, I'd run it through AI and ask specific questions: which creative is showing the best cost-per-lead trajectory in the first 72 hours? Where's the drop-off between click and form completion? AI doesn't replace your judgment here, but it compresses analysis time and surfaces things you might miss when you're staring at a spreadsheet at the end of a long day. (A sketch of what these queries look like follows below.)

Market research and competitive context. Manus helped me do broader research — understanding what messaging angles were already saturating the market in our space, what positioning competitors were leaning on, and where there might be white space. The output wasn't a final strategy, but it was a solid brief that I could act on rather than spending a week piecing together from scattered sources. That context directly shaped which angles I prioritized in my own tests.
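
For the performance questions above, this is roughly the kind of query I'd hand to AI or run myself. Again, the column names are assumptions about a daily Ads Manager export, not a fixed schema:

```python
import pandas as pd

# Assumed export columns: ad, date, spend, link_clicks, leads
df = pd.read_csv("ad_daily.csv", parse_dates=["date"])

# First ~72 hours of delivery for each ad (its first three daily rows)
df["delivery_day"] = df.groupby("ad")["date"].rank(method="dense")
first_72h = df[df["delivery_day"] <= 3].groupby("ad").agg(
    spend=("spend", "sum"), clicks=("link_clicks", "sum"), leads=("leads", "sum")
)

# Early cost-per-lead trajectory and click-to-form-completion drop-off
first_72h["early_cpl"] = first_72h["spend"] / first_72h["leads"].replace(0, float("nan"))
first_72h["click_to_lead_rate"] = first_72h["leads"] / first_72h["clicks"].replace(0, float("nan"))
print(first_72h.sort_values("early_cpl"))
```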


The Experimentation Mindset Is What Actually Matters

None of this works without a structured approach to testing. I learned this the hard way.

Early on, I'd run a creative, get a mixed result, feel uncertain about what it meant, and then change too many things at once in the next iteration. You end up generating activity but not learning: each test teaches you nothing you can build on. You feel productive. You're not.

Albert Cheng, who runs growth at Chess.com and previously led growth at Duolingo and Grammarly, talked about this shift on Lenny's podcast. He described how Chess.com went from running almost no experiments to 250 a year, with a stated target of a thousand. He was honest that the specific number was made up — "Did I make it up? Yes, absolutely, I made it up" — but the point was the culture shift it forces. When you commit to running structured experiments, you start asking different questions before you launch anything: what exactly am I testing? What do I expect to happen? What would the result have to look like for me to consider this a win or a loss?

That mental shift matters more than any specific tool or creative format. With a limited budget, I couldn't run tests with massive sample sizes. But I could be disciplined about changing one variable at a time and being honest about what the data was actually telling me versus what I wanted it to say.


Results

| Phase | What Changed | CAC Direction |
| --- | --- | --- |
| Baseline | No structured testing, wrong customer assumption | High, unsustainable |
| After customer conversations | Messaging and targeting corrected | Meaningful drop |
| After messaging tests | Best-performing angle identified | Continued improvement |
| After campaign structure overhaul | Separate ad sets per test, clean kill criteria | Significant reduction |
| End state | 70%+ CAC reduction vs. baseline | Sustainable channel |

Ad spend stayed roughly constant across all of these phases. The budget didn't change. The quality of the hypotheses did.
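
Put in arithmetic terms: customers acquired = spend / CAC. With spend held roughly constant and CAC at about 30% of baseline, that works out to roughly 1 / 0.3 ≈ 3.3 times as many customers for the same budget.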


A Practical Checklist

If I were starting this process from scratch, this is the order I'd follow:

  1. Talk to your leads before writing a single ad. Call them. Test different scripts. Write down what lands.
  2. Pressure-test your customer assumptions. The profile you imagine buying your product is probably off in at least one important way.
  3. Build creatives in whatever tool you have. Figma works fine. The tool matters less than the discipline.
  4. Test messaging first, design second. Same template, different copy. Don't touch the visual until you have a messaging winner.
  5. Give each test its own ad set. Never mix experiments in one ad set and expect clean data.
  6. Set a kill criterion before you launch. For us: zero leads in 7 days while others are performing. Decide yours upfront, not after the fact.
  7. Use AI for the research work, not the judgment calls. Competitor analysis, performance summaries, market research — AI is fast and useful here. Deciding what to test next is still your job.

What I'd Do Differently

I'd talk to leads sooner. I spent too much time guessing at messaging and targeting from my laptop before picking up the phone. The wrong customer assumption cost us weeks of budget we could have deployed better.

I'd also set up cleaner tracking from the start. Some of my early experiments have fuzzy data because attribution wasn't configured correctly. You can't run a disciplined experiment on dirty data, and fixing attribution mid-campaign is messy.

The AI tools helped — genuinely. But the leverage came from doing the slow, unglamorous work of actually understanding who my customers are and what makes them decide to buy. Everything else was just building on top of that.


A lot of the thinking on experimentation and growth in this article was sharpened by episodes from Lenny's Newsletter and Podcast. If you're building a product and not reading it yet, it's the best resource I've found for practical growth strategy: lennysnewsletter.com.

Arnau Requena

Product guy and startup founder. Using AI to build beautiful and functional websites that convert. Helping others do the same.
