Don’t profile your ideal customers. Prove them.
This chapter expands on Jordan Crawford’s (Blueprint) teaching that the list is the message.
You know that feeling when you look at your outreach dashboard and something just feels off? The numbers are there. Emails sent. Sequences running. Lists growing. Everything looks busy.
But the replies? Lukewarm at best. The meetings? Barely qualified. The pipeline? Not moving like it should.
You’ve done everything right on paper. Built your ICP. Defined the filters. Found thousands of companies that match. Launched campaigns. Yet somewhere between “perfect fit on paper” and “actually gives a damn” something got lost.
Here’s what probably happened: you profiled your ideal customers instead of proving they exist.
And there’s a massive difference between the two.
The comfortable fiction we tell ourselves about ICPs
Most ICPs are born in conference rooms, not in market reality. Someone pulls up the CRM. Looks at the best accounts. Finds patterns.
“Our best customers are in construction. 50-200 employees. Don’t have an operations manager. Located in growing metros.”
Perfect. Clean. Measurable.
You can pull that list from Apollo right now. Export 5,000 names. Load them into your outreach tool.
This feels like strategy. It’s actually just pattern matching on outcomes you don’t fully understand. Because here’s the question nobody asks: *Why* did those customers buy?
Was it the employee count? Or was it that their last ops manager just quit and projects were falling apart?
Was it the industry? Or was it that they’d just lost a major client due to missed deadlines and were desperate to prevent it happening again?
Was it the company size? Or was it that they’d recently expanded to multiple locations and their old systems couldn’t scale?
The attributes you’re filtering by probably showed up in successful deals. But they’re not why the deals closed.
You’ve defined correlation. You haven’t proved causation.
And when you build campaigns around correlation, you get exactly what you’re seeing: lots of outreach, very little resonance.
What’s actually happening in your campaigns
Let’s trace what happens when you target based on unvalidated profiles.
You export 5,000 companies that match your filters. They all look perfect on paper. Your sequences launch. They’re well-written. Personalized with merge tags. A/B tested subject lines.
Response rate: 3-5%.
The few who reply fall into predictable buckets:
- “Not interested” (most common)
- “Not the right time” (translation: we don’t actually have this problem)
- “Send me information” (translation: polite brush-off)
- “Maybe next quarter” (translation: this isn’t urgent)
Maybe 1-2% actually book meetings. Half of those are tire-kickers.
Your team blames the messaging. Runs new tests. Tries different angles. Tweaks the copy.
Results barely move. Because the problem isn’t the message. It’s the audience.
You’re sending well-crafted emails to people who don’t have the problem you solve—or don’t know they have it yet—or aren’t in enough pain to care.
No amount of clever copywriting fixes targeting that was never validated in the first place.
Diagnosing why profiling fails
Here’s what went wrong at the root:
You optimized for list size, not problem fit.
Filters are seductive because they scale. You can generate massive lists quickly. That feels productive. But filters only tell you who you *can* reach. Not who actually *needs* you right now. Most of your “ideal customer profile” is just describing people you’re able to find in a database.
You confused demographics with intent.
Someone matching your company size and industry doesn’t mean they’re experiencing the pain point you solve.
A construction company with 100 employees might be thriving with great systems. Or it might be in chaos. The headcount doesn’t tell you which.
You’ve been targeting symptoms you think indicate problems, not actual evidence of problems. You assumed your understanding was accurate.
When you built your ICP, you made assumptions about:
- What problems these companies face
- What language resonates with them
- What triggers make them ready to buy
- What they’ve already tried that failed
But you never tested whether those assumptions were true. Your ICP became a set of beliefs you execute against, not hypotheses you validate.
The shift: from profile to proof
Stop calling it your Ideal Customer Profile. Start calling it what it should be: your Ideal Customer Hypothesis. Because right now, all you have is an educated guess wrapped in confident language.
The word matters. A profile is static. Final. “This is who we target.”
A hypothesis is dynamic. Testable. “We believe these companies have this problem. Here’s how we’ll prove it.”
One leads to spray and pray at scale. The other leads to actual market intelligence.
The ideal customer validation framework: prove before you scale
Before you load 5,000 names into sequences, prove your hypothesis with 50.
This isn’t a pilot campaign. It’s a validation sprint. You’re testing whether the companies you think need you actually do.
Phase 1: Select with scrutiny (50 companies)
Pull companies that match your current hypothesis. But keep it small. Deliberately constrained. Now apply the bet test. For each company, ask yourself:
“Based on what I can see, would I bet $100 this company has our problem badly enough to take a meeting?”
Not “they match the filters.” Not “they probably have this issue.”
Actual evidence you can point to that suggests real, active pain.
Evidence looks like:
- Job postings that suggest the problem exists (“Project Coordinator needed to reduce scheduling chaos”)
- Reviews mentioning the pain (“Communication with field teams is always delayed”)
- News about expansion or transition that creates the need
- LinkedIn posts from leadership discussing the challenge
- Tech stack indicating they’re using outdated systems
If you can’t find external evidence in 10 minutes of research, they don’t make the cut.
You’re not targeting people who might have the problem. You’re targeting people where you can see the problem from the outside.
If you can’t confidently say “hell yes” to 30+ of those 50 companies, refine your hypothesis. Pull a different 50.
Phase 2: Outreach with intention (manual, personalized)
Write messages one at a time. No templates. No automation. No merge tags.
Each message should demonstrate you understand their *specific* situation, not just that you can read their LinkedIn.
This will feel slow. That’s correct. You’re not trying to hit activity metrics. You’re trying to learn whether these companies recognize they have the problem you solve.
Phase 3: Measure recognition, not replies
You’re not tracking reply rates. You’re tracking signal.
Positive signal sounds like:
- “How did you know we’re dealing with this?”
- “This is exactly what we’re trying to solve”
- “Yes, let’s talk”
Not polite deferrals. Not “interesting but not now.” Not “send more info.”
Actual recognition that you’ve identified a real, current problem they’re actively trying to solve.
If you hit 20-30% recognition responses: Your hypothesis is validated. These companies have the problem. They know they have it. They’re receptive to solving it.
If you’re at 10-20%: You’re in the right neighborhood but need refinement. Some element of your hypothesis is off.
If you’re under 10%: Your hypothesis is wrong. These aren’t your ideal customers, regardless of what the filters say.
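
If it helps to keep yourself honest, here’s a minimal sketch of those bands as code. The thresholds are the ones above; the function name and inputs are just illustrative.

```python
# Minimal sketch: classify a validation sprint by recognition rate.
# Thresholds mirror the bands above; names are illustrative, not a prescribed tool.

def read_sprint(contacted: int, recognition_replies: int) -> str:
    """Return a rough verdict on the hypothesis behind a validation sprint."""
    if contacted == 0:
        raise ValueError("no outreach sent yet")
    rate = recognition_replies / contacted
    if rate >= 0.20:
        return "validated: these companies have the problem and know they have it"
    if rate >= 0.10:
        return "close: one element of the hypothesis is off; refine and rerun"
    return "wrong: rebuild the hypothesis before scaling anything"

print(read_sprint(contacted=50, recognition_replies=12))  # 24% -> validated
```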
What validation actually teaches you
This isn’t just a testing exercise. It’s an intelligence-gathering operation. When you run validation sprints, you discover things databases never show:
The real predictive signals
You thought company size mattered. You learn what matters is whether they recently expanded to multiple locations.
You thought industry was key. You learn it’s whether they’ve had recent leadership turnover in operations.
You thought it was about a missing role. You learn it’s about having that role filled by someone who’s overwhelmed.
The filters you built your ICP around probably correlate with good customers. But they don’t predict them. Validation reveals which signals actually predict active need versus which just happened to show up in past wins.
The triggers that create urgency
Some companies have chronic problems. They live with them. They’re not in buying mode.
Others just hit a breaking point—lost a client, failed an audit, missed a major deadline, hired someone new who exposed the mess.
That breaking point is what separates “interesting but not now” from “we need to fix this immediately.”
You can’t find triggers in a database. You only discover them by documenting what was happening at companies right before they became receptive.
Building targeting intelligence: the two layers
After validation, your targeting becomes two-dimensional:
Layer 1: Findable signals (who to target)
These are attributes you can identify through research and scale:
- Recently opened second location (expansion trigger)
- Posted jobs for coordinators (capacity strain signal)
- Use specific legacy software (technical debt indicator)
- Google reviews mention communication issues (symptom visibility)
- New operations leader hired in last 90 days (fresh eyes, mandate to fix)
This layer is scalable. You can use data tools, AI research, and automation to find these signals across hundreds or thousands of companies.
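
As a rough illustration of that scalability, here’s a minimal sketch that scores companies by which Layer 1 signals show up. The signal names and weights are assumptions for the example, not a prescribed model; in practice each flag would come from a data tool, scraper, or AI research pass.

```python
# Minimal sketch: turn Layer 1 research into a ranked target list.
# Signal names and weights are illustrative assumptions.

SIGNAL_WEIGHTS = {
    "opened_second_location": 3,   # expansion trigger
    "hiring_coordinator": 2,       # capacity strain signal
    "legacy_software": 1,          # technical debt indicator
    "reviews_mention_comms": 2,    # symptom visibility
    "new_ops_leader_90d": 3,       # fresh eyes, mandate to fix
}

def score(company: dict) -> int:
    """Sum the weights of the signals this company shows."""
    return sum(w for sig, w in SIGNAL_WEIGHTS.items() if company.get(sig))

companies = [
    {"name": "Acme Builders", "opened_second_location": True, "hiring_coordinator": True},
    {"name": "Delta Contracting", "legacy_software": True},
]

for c in sorted(companies, key=score, reverse=True):
    print(c["name"], score(c))
```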
Layer 2: Recognized truths (what to say)
These are insights you can’t research externally. They’re the truths your validation taught you these companies universally feel:
- Project managers drowning in administrative work instead of managing
- Information doesn’t flow between the office and the field
- Built everything in spreadsheets that are now breaking
- Team doesn’t trust the current system because it’s let them down
- Tried a solution before that made things worse
You can’t scale discovering these truths. But you can scale applying them—by writing copy that demonstrates you’ve heard these exact frustrations before.
Most outreach only uses Layer 1:
“I saw you recently opened a location in Austin…”
Everyone can see that. It’s not insight. It’s observation.
When you integrate Layer 2:
“Most companies hit the same wall when they expand to multiple locations. The systems that worked fine with one team and one site turn into coordination chaos with two. Your PMs spend more time tracking down information than managing work. The new location gets slower responses because communication doesn’t flow. You know you need better systems but everyone’s too busy dealing with what’s broken to fix it.”
That’s not something you found on their website. That’s something validation taught you: the pattern you heard when you talked to ten companies in this exact situation.
From validation to systematic scale
You don’t stay at 50 companies forever. But you don’t jump to 5,000 either.
Month 1: Validation (50 companies)
- Manually research and vet each one
- Personalized outreach, zero automation
- Target: 20-30% recognition responses
- Document everything: what works, what doesn’t, what patterns emerge
Month 2: Controlled expansion (200-500 companies)
- Use Layer 1 signals to build larger lists
- Use Layer 2 insights to craft messaging
- Still personalized, but now based on patterns, not individual research
- Measure conversion at each stage: reply → meeting → opportunity → close
Month 3+: Refined scale (1,000+ companies)
- Double down on signals that predict conversion
- Kill signals that correlate but don’t predict
- Test new hypotheses about what drives need
- Let actual closed deals refine your understanding
Your ICP isn’t static. It evolves based on what actually converts, not what you hoped would work.
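
One hedged sketch of what “what actually converts” can look like in practice: track how far accounts with each Layer 1 signal get through the funnel, then compare. The stage names and records below are illustrative; real data would come from your CRM or outreach tool export.

```python
# Minimal sketch: per-signal funnel conversion, to separate signals that predict
# from signals that merely correlate. Data shown here is made up for illustration.

from collections import defaultdict

STAGES = ["contacted", "replied", "meeting", "opportunity", "closed_won"]

# Each record: which Layer 1 signals the account showed, and how far it got.
accounts = [
    {"signals": ["opened_second_location"], "stage": "closed_won"},
    {"signals": ["opened_second_location", "legacy_software"], "stage": "meeting"},
    {"signals": ["legacy_software"], "stage": "replied"},
    {"signals": ["legacy_software"], "stage": "contacted"},
]

def reached(stage: str, target: str) -> bool:
    """True if the account got at least as far as the target stage."""
    return STAGES.index(stage) >= STAGES.index(target)

per_signal = defaultdict(lambda: {"accounts": 0, "meetings": 0, "wins": 0})
for account in accounts:
    for sig in account["signals"]:
        row = per_signal[sig]
        row["accounts"] += 1
        row["meetings"] += int(reached(account["stage"], "meeting"))
        row["wins"] += int(account["stage"] == "closed_won")

for sig, row in per_signal.items():
    print(f'{sig}: {row["meetings"]}/{row["accounts"]} meetings, {row["wins"]} closed-won')
```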
The ongoing discipline nobody maintains
Here’s where most organizations fail: they validate once, then forget to keep validating.
You run the exercise. Build your ICP. Then treat it like gospel for the next two years. Meanwhile, the market shifts. Customer needs evolve. New triggers emerge. Old signals stop predicting. But your ICP doesn’t change because nobody’s connected it to ongoing learning.
What winning companies do differently:
They don’t just define their ICP. They systematically update it.
Quarterly, they pull closed-won deals from the last 90 days and ask:
- What do these customers have in common that our ICP missed?
- What signals predicted their need that we’re not currently targeting?
- What language did they use that differs from our current messaging?
- What triggers made them ready to buy that we should be monitoring?
Then they validate those new hypotheses with another 50-company sprint.
Your best customers aren’t just revenue. They’re ongoing market intelligence about who actually needs you and why.
Most organizations have this data. They just never systematically connect it back to their targeting.
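
If you want a starting point for connecting it, here’s a minimal sketch that compares recent closed-won deals against your current ICP assumptions and surfaces the attributes that keep showing up in wins but aren’t in the hypothesis yet. The column names and the export file are hypothetical; any field in your own CRM export could be compared the same way.

```python
# Minimal sketch: find attributes common to recent closed-won deals that the
# current ICP does not account for. File name and columns are hypothetical.

import csv
from collections import Counter

ICP_ATTRIBUTES = {"industry:construction", "size:50-200"}  # current hypothesis

def deal_attributes(row: dict) -> set:
    """Flatten a closed-won row into comparable attribute tags."""
    return {
        f"industry:{row['industry']}",
        f"size:{row['size_band']}",
        f"trigger:{row['buying_trigger']}",
    }

with open("closed_won_last_90_days.csv") as f:  # hypothetical CRM export
    deals = list(csv.DictReader(f))

missed = Counter()
for row in deals:
    missed.update(deal_attributes(row) - ICP_ATTRIBUTES)

# Attributes that keep showing up in wins but aren't in the ICP yet:
for attr, count in missed.most_common(5):
    print(f"{attr}: {count} of {len(deals)} recent wins")
```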
The shift forward
Stop spending quarters executing against ICPs that are fictional characters you agreed upon in a meeting. Start spending time proving who your ideal customers actually are and continuously refining that understanding based on market reality.
The companies with 30% reply rates aren’t better at copywriting than those with 3%. They’re better at targeting people who actually need them, using intelligence those people recognize as true.
Your ideal customers shouldn’t be profiles you define.
They should be hypotheses you prove, refine, and validate against reality every single quarter. Because the difference between outreach that works and outreach that dies isn’t the message. It’s whether you’re talking to someone who actually has the problem you solve and is ready to do something about it.
Prove that first. Then scale. Keep proving, keep scaling.