LEDGER·Vol. III — Issue 03

How Northwind Coaching rebuilt a 3x close rate in ninety days

An inside look at a $14m/yr high-ticket coaching operation — and the attribution cleanup that made it scale again.

Fig. 01 — Editorial plate · 2026-04-02

When Jordan took over the sales floor at Northwind, the closing rate had quietly dropped from 32% to 11%. Six months later, it sits at 34%.

Jordan Reyes inherited the sales floor at Northwind Coaching in October. What he inherited did not look, on paper, like a sales floor in crisis.

Meta spend was up. Booking rate was up. Ad-reported ROAS was healthy. And yet closed-won revenue had flattened and then fallen, quarter over quarter, for three consecutive quarters. Nobody could explain why.

§The problem

Northwind's dashboard said every source was converting. Their actual bank statements said otherwise.

"We had a dashboard that a data vendor built for us two years ago," Jordan said. "It averaged close rates across the whole team, blended organic and paid into one column, and called anything that deposited 'closed.' When we opened it on Monday, nobody actually trusted it. We ran the business on a spreadsheet one of the reps maintained."

That's the thing about attribution: once the numbers stop telling you the truth, they stop telling you anything. Your team starts trusting their gut. And the gut is just pattern-matching against the last three calls — not the last three thousand.

§The audit

The first thing we did when Northwind switched to Callix was an attribution audit.

We ran three cuts in parallel:

  • By rep. We indexed every call on userId and sliced close rate by sales rep. Two reps were carrying the floor. Three reps were sub-10%. The blended average was hiding both facts.
  • By source. We attributed each call to its first-touch and last-touch campaign. One Facebook campaign — "Retargeting — 7d Site Visitors" — was responsible for 48% of booked calls but 71% of disqualified calls.
  • By quality score. We ran every call through deal analysis and produced a quality score. The distribution was bimodal: one cluster at 70 and above, another below 30, and very few calls in between. That's a sign of a lead-quality problem, not a sales-skill problem.

By the end of week one, we knew three things: which reps needed coaching, which campaigns needed killing, and how much real demand the funnel actually had. Until then, Northwind had been steering by a single blended average that smeared all three together.
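
In practice, the three cuts are a few groupby passes over the call log. Here is a minimal sketch with pandas on a toy table; the column names (rep, last_touch, quality_score, closed) and the score thresholds are illustrative, not Northwind's actual schema:

```python
import pandas as pd

# Toy call log; in the real audit each row is one booked call.
calls = pd.DataFrame({
    "rep": ["Maya", "Maya", "Chen", "Dana", "Dana", "Dana"],
    "last_touch": [
        "Retargeting — 7d Site Visitors", "Search — Brand",
        "Retargeting — 7d Site Visitors", "Search — Brand",
        "Retargeting — 7d Site Visitors", "Search — Brand",
    ],
    "quality_score": [82, 74, 28, 71, 25, 90],
    "closed": [1, 1, 0, 1, 0, 1],
})

# Cut 1 - close rate per rep; the blended mean hides the spread.
by_rep = calls.groupby("rep")["closed"].mean()

# Cut 2 - booked vs. disqualified volume per campaign
# (disqualified approximated here as quality_score < 40).
calls["disqualified"] = calls["quality_score"] < 40
by_source = calls.groupby("last_touch")["disqualified"].agg(["count", "sum"])

# Cut 3 - bimodality check: what share of calls sits in the middle band?
middle_share = calls["quality_score"].between(40, 69).mean()
```

A middle_share near zero, as in this toy data, is the bimodal shape described above: the floor isn't mediocre at selling, the leads split cleanly into good and bad.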

The first Monday we opened the Callix dashboard, I watched Jordan laugh out loud at the campaign breakdown. He'd been defending that retargeting campaign for a year.

§The rebuild

Once the audit was done, the rebuild was mechanical.

  1. Killed the retargeting campaign. Meta spend dropped 22%. Total booked calls dropped 11%. But showed calls only dropped 3%, and closed calls stayed flat. We weren't losing revenue — we were losing junk.
  2. Rebalanced reps. The three underperformers moved to an inbound setter role. The top two reps got every high-quality lead routed to them first. Nobody left; nobody was demoted.
  3. Wired the conversion tiers. We sent HighTierCall ($50k), MidTierCall ($1k), and LowTierCall ($50) events to Meta based on deal analysis quality score. Within three weeks Meta's auction started feeding them leads that scored in the top quartile.
  4. Deployed the new Custom Dashboards. Jordan built a Monday-morning view — Total Calls, Closed Calls, Cash Collected, Closing Rate, Ad Spend, Profit, all broken out by rep. He opens it every morning before standup. The spreadsheet is gone.
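
Step 3 reduces to a score-to-event mapping. A sketch of that mapping: the event names and dollar values are the ones named in this writeup, but the thresholds and the function itself are illustrative, not Callix's actual wiring into Meta:

```python
# Tiered conversion events fired back to the ad platform after a call
# is scored by deal analysis. Event names and values come from the
# writeup; the score thresholds are hypothetical.
TIERS = [
    (70, "HighTierCall", 50_000),
    (40, "MidTierCall", 1_000),
]

def conversion_event(quality_score: int) -> tuple[str, int]:
    """Map a 0-100 quality score to (event_name, reported_value)."""
    for floor, name, value in TIERS:
        if quality_score >= floor:
            return name, value
    return "LowTierCall", 50
```

The thousand-fold spread in reported value between tiers is the point: the auction optimizes toward whatever you tell it a conversion is worth, which is why the lead pool shifted toward the top quartile within weeks.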

§The numbers

Three months in:

  • Close rate: 11% → 34%
  • Cash collected (monthly): $410k → $1.2m
  • Ad spend: $180k/mo → $142k/mo
  • Profit: $170k/mo → $870k/mo
  • Meta-reported ROAS: 2.3 → 7.1
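
The headline multiples fall straight out of that table; a quick arithmetic pass over the figures above:

```python
# Before/after monthly figures at the three-month mark, as reported above.
close_rate = (0.11, 0.34)
profit = (170_000, 870_000)
ad_spend = (180_000, 142_000)

close_lift = close_rate[1] / close_rate[0]   # the "3x" in the headline
profit_lift = profit[1] / profit[0]
spend_cut = 1 - ad_spend[1] / ad_spend[0]    # spend down about a fifth
```

Close rate up roughly 3.1x and profit up roughly 5.1x on about 21% less spend is the shape you expect when the change is lead quality rather than volume.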

The numbers that went down are almost as interesting. Booked calls dropped. Unique site visitors dropped. What went up was the quality of every call the floor took — and every downstream metric that quality drives.

§What Jordan does differently

"The biggest thing isn't the dashboard. It's that I stopped averaging," Jordan said. "I stopped saying 'our close rate.' There is no 'our close rate.' There's a rate for Maya, a rate for Chen, a rate for the retargeting campaign, a rate for the Friday evening leads. Those are the numbers. Everything else is a story I tell myself."


If you want a copy of the dashboard Jordan built, we've templated it. Drop your email and we'll send the config.

END OF DISPATCH
2026-04-02 · Vol. III — Issue 03
Filed by
Elena Marchetti
Head of Customer Success
Further reading

Product Dispatch

Custom Dashboards — the metrics view that matches how you sell

A composable canvas for your numbers: drag, resize, filter, and save the view your operator actually reads at 7am.

Operator Playbooks

First-click vs. last-click attribution — which model actually runs a high-ticket sales floor

Neither model is right. Neither model is wrong. The question is which decision you're about to make.

Field Notes

Ad fatigue is a story operators tell themselves

A short note on why the data usually says something else.