Marketing agency case studies: 5 proof gaps to check

Marketing agency case studies can mislead. Check baseline, scope, measurement, fit, and work trail before you trust the result.

Gabriel Espinheira

Marketing agency case studies are useful only when they show what changed, how it was measured, and whether the starting point looked anything like yours. Otherwise, they are just polished sales stories with a number at the end.

That matters when you have already been burned. A good case study can show you how an agency thinks, what they own, and whether they can connect marketing work to enquiries. A weak one hides behind screenshots, vague percentage lifts, and client logos that do not tell you what actually happened.

TL;DR: Marketing agency case studies should prove five things before you trust them: the starting baseline, the work shipped, the measurement path, the buyer fit, and the operating model behind the result. If any of those pieces are missing, treat the case study as a conversation starter, not proof.

Why marketing agency case studies matter before you sign

Case studies matter because buyers use them as late-stage proof, not light reading. By the time a founder asks for examples, they are usually asking one question: "Can this partner do the work for a business like mine?"

Content Marketing Institute's 2025 B2B research found that case studies and customer stories were rated among the most effective content types by 53% of B2B marketers. NetLine's 2024 report goes further: users requesting case studies were 78.5% more likely to make a purchase decision within 12 months.

So yes, case studies work. That is exactly why you should inspect them harder.

The mistake is treating a case study as proof because it has a client name and a chart. Agencies know case studies sit near the buying decision. Weak agencies use them to borrow trust they have not earned. Strong agencies use them to show their thinking in public: the problem, the constraints, the decisions, the work shipped, and the result.

For a burned founder, the case study is not the finish line. It is the first filter.

What a real baseline should show

A useful case study starts before the agency touched anything. If it does not show the baseline, you cannot tell whether the result was skill, timing, seasonality, budget, or plain luck.

Look for the starting numbers. Not just "traffic was low" or "leads were inconsistent." You want the actual before-state: traffic, conversion rate, qualified enquiries, ad spend, lead quality, sales cycle, content cadence, or whatever metric the project was meant to change.

The baseline should also explain the constraint. A website rebuild with no analytics access is a different job from a rebuild with clean tracking. A content project for a founder with no internal reviewer is different from one with a marketing manager who can approve drafts in 24 hours. A paid ads win with a large creative library is not the same as a win from a cold account with one landing page.

Weak case studies skip this because the baseline makes the result smaller. Strong ones show it because context is the proof.

Ask this on the call:

"What did the numbers look like before you started, and what could have explained the improvement besides your work?"

If the answer gets vague, the case study is not doing its job.

How to spot vanity metrics dressed up as results

Real results connect to business movement. Vanity metrics stop at attention: impressions, clicks, reach, sessions, rankings, or engagement without showing what those numbers did for enquiries, bookings, pipeline, or revenue.

This does not mean every case study needs a revenue number. Some clients will not allow it. Some projects are too early. Some work is legitimately about rebuilding the measurement layer before growth can be claimed.

But the case study still needs a logic chain.

For a website project, the chain might be:

  • higher-quality traffic reached the right service page

  • more visitors completed the form

  • the enquiries matched the target customer profile

  • the owner could trace each enquiry back to the page, ad, or article that caused it

For a content project, the chain might be:

  • the post targeted a query tied to buyer pain

  • the page earned search visibility

  • visitors moved to a service page or pricing page

  • the team could see assisted enquiries, not just pageviews

For an ads project, the chain might be:

  • tracking was fixed before spend scaled

  • campaigns were judged by qualified leads, not cheap form fills

  • search terms, landing pages, and follow-up quality were reviewed together

This is the difference between "we increased traffic" and "we increased the kind of traffic that gave sales something real to work with."

SharpHaw's own rule is simple: tracked from click to client, not click to dashboard. If an agency cannot explain the path from the case-study metric to the commercial outcome, keep asking.

The 5 proof gaps to check in every case study

A strong case study should survive five checks: baseline, scope, measurement, fit, and trail. If one is missing, do not throw the agency out immediately. Mark the gap and ask for the missing proof before you buy.

1. Baseline

What was true before the work started? You want the starting numbers, not just the success number. "We grew leads by 40%" means very little if the business went from five poor leads to seven poor leads.

2. Scope

What did the agency actually own? Strategy, copy, design, development, tracking, ads, content, CRM follow-up, or all of it? A result is only useful if you know which parts were controlled by the agency and which parts belonged to the client.

3. Measurement

How was the result tracked? Ask what tool produced the number, who had access to it, and whether the client could see it independently. If a dashboard is the only proof, ask what source system sits underneath it.

4. Fit

Does the case study match your business model, stage, and constraint? A funded SaaS launch does not prove the agency can help an owner-operated services firm with a slow sales cycle. Similarity matters more than glamour.

5. Trail

Can they show the work behind the result? Briefs, shipped pages, ad iterations, content calendar, audit findings, test notes, or review history. The trail tells you whether the result came from a repeatable operating model or one lucky campaign.

That fifth check is where many case studies fall apart. A polished before-and-after screenshot can be made after the fact. A visible work trail is harder to fake.

What to ask when the agency has no case studies yet

Having no case studies is not an automatic deal-breaker. Hiding the absence is. If an agency is early, ask for live proof of thinking, shipping cadence, and operating discipline instead.

This is where the buyer needs a different standard. A new agency may not have named client logos yet. SharpHaw is explicit about that gap. The question is whether the agency can still show enough proof to reduce your risk.

Ask for four things:

  • a live teardown of your current website, ads, or content

  • a sample first-week plan based on your real business

  • proof of the founder's technical or operating depth

  • a workspace or board showing how work will be tracked after you sign

If the agency claims a visible operating model, ask to see the workspace before you buy. In SharpHaw's case, SharpOS is the shared workspace where briefs, boards, audits, assets, and weekly shipping evidence live.

This is also why we point buyers to our marketing agency testimonials checklist before asking them to trust us. Testimonials and case studies can both mislead. The useful move is to make the proof standard visible.

If an agency has no case studies and no live diagnostic ability, that is a problem. If they have no case studies yet but can explain your bottleneck better than the agencies with glossy decks, keep the conversation open.

One recent founder discussion on Reddit put the issue plainly: when prospects ask for a case study, what they often want is proof you can solve their specific problem. That is the right frame. The proof can come from a named case study, but it can also come from a sharp diagnosis and a clear first-week operating plan.

How to use the case study on the discovery call

Use the case study as a stress test. Do not ask the agency to present it. Ask them to defend it.

Bring one case study to the call and walk through these questions:

  • What was the client's starting point?

  • What exactly did you ship?

  • What did the client have to do for this to work?

  • What number moved first?

  • What number did not move?

  • What would you do differently now?

  • Which part of this would not apply to my business?

The best answer is rarely a perfect answer. The best answer sounds like a senior operator who remembers the trade-offs.

Bad answers sound smooth. "We used a full-funnel strategy." "We optimised the customer journey." "We created a scalable growth engine." None of that tells you what changed on Tuesday.

Good answers get specific. "The form was buried below two trust sections." "The tracking event was firing on page load." "The client had leads, but the sales team was replying three days later." "The blog was ranking, but every CTA sent buyers back to the homepage."

That is what you are buying: judgement under constraint.

Frequently asked questions

What should a marketing agency case study include?

It should include the starting baseline, the business problem, the exact work shipped, the measurement source, the result, and the client context. The result matters less when the case study hides how it was measured or which parts of the project the agency actually controlled.

How do I know if a marketing case study is real?

Check whether the agency can show a trail behind the result: live pages, dated screenshots, dashboard source data, shipped work, or a client reference. A real case study should withstand questions about baseline, scope, measurement, and what the agency would change if they ran it again.

Are anonymised agency case studies useful?

Yes, but only if the anonymity does not remove the proof. The agency can hide the client name while still showing the industry, starting point, constraints, work shipped, measurement method, and outcome range. If everything specific is removed, you are reading a sales story, not proof.

Should I reject an agency with no case studies?

Not automatically. Reject the agency if they hide the gap or cannot diagnose your problem live. If they are early but can show founder expertise, a clear operating model, sample work, and a first-week plan, the risk may still be acceptable for the right buyer.

What is the biggest red flag in agency case studies?

The biggest red flag is a result without a path. If the case study shows a big lift but does not explain the baseline, source data, work shipped, and commercial impact, assume the number has been chosen because it looks good, not because it proves the agency can repeat the work.

Case studies should make your decision clearer, not make the agency look untouchable. The right one shows the work, the constraint, the measurement, and the mess. The wrong one gives you a logo and asks you to fill in the trust yourself.

Plan. Build. Iterate.

Want a sharper read before you sign with anyone? Book a 30-min call — bring one agency case study you are considering, and we will help you see what it proves, what it hides, and what to ask next.

You can also see the current plans before the call so the commercial model is clear from the start.

Ready to start?

Book a 30-minute call. We'll dig into what's working, what isn't, and what the first move should be. No fluff, no pressure. If it makes sense to work together, we'll make it happen.

