Peter Grafe

Apr 9, 2026

How to Build Your First Marketing Mix Model (No Code Required)

Assess your MMM readiness, set Bayesian priors in plain English, and fit a real Meridian model — all through a conversation with Claude. Free, open-source.

Marketing Mix


TL;DR: There's no shortage of content explaining what Marketing Mix Modeling is. What's missing is a way to experience it with your own business context. We built an open-source MCP tool (pip install mcp-server-mmm-builder) that lets you assess your MMM readiness, generate industry-matched sample data, set Bayesian priors in plain English, and fit a real Google Meridian model — all through a conversation with Claude. This article walks you through it.


There are hundreds of articles explaining Marketing Mix Modeling. They tell you it's a Bayesian statistical technique. They explain adstock and Hill saturation curves. They show diagrams of how priors get updated by data. You finish reading and think: that sounds powerful — but what would it actually tell me about my business?

That question is surprisingly hard to answer without doing it. MMM isn't like A/B testing where you can grasp the concept from a blog post. The value is in the specifics: which of your channels is saturated, how long your TV spend carries over, whether your search budget is over-allocated. Until you see those outputs for data that resembles yours, it stays abstract.

This article is the bridge. We're going to walk through the full journey — from "should I even do this?" to a fitted model with real outputs — using a tool you can install in 30 seconds and run through a conversation with Claude. Every output below is real, pulled live from the BlueAlpha MMM Builder MCP.


How to Set Up the MMM Builder (Even If You're Not Technical)

The setup is a one-time thing that takes about 5 minutes. After that, everything happens through a normal conversation with Claude — no code, no terminal, no technical knowledge required.

You'll need two things installed on your computer: Python (version 3.10 or newer) and Claude Desktop (the app, not the website). Here's how to get both, step by step.

1. Install Python (if you don't already have it)

Python is the programming language that the MMM Builder runs on. You don't need to learn it — you just need it installed so the tool can work behind the scenes.

Check if you already have it: Open your computer's terminal (on Mac, search for "Terminal" in Spotlight; on Windows, search for "Command Prompt") and type:

python3 --version

If you see something like Python 3.11.5, you're good — skip to step 2. If you see an error or a version below 3.10, install it:

  • Mac: Go to python.org/downloads and download the latest version. Run the installer — it's a standard Mac install wizard, just click through it.

  • Windows: Same site, same process. During installation, check the box that says "Add Python to PATH" — this is important.

You'll also need a small tool called uv that manages Python packages cleanly. In your terminal, run:

curl -LsSf https://astral.sh/uv/install.sh | sh

On Windows, use:

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"


2. Install Claude Desktop

Download from claude.ai/download and install it like any other app. If you already use the Claude desktop app, you're set.

3. Connect the MMM Builder to Claude Desktop

This is the step that sounds technical but is really just editing a settings file. The connection uses MCP (Model Context Protocol) — an open standard that lets AI assistants like Claude call external tools. Think of it as handing Claude a specialized toolbelt: instead of just generating text, Claude can now run feasibility checks, fit statistical models, and return structured results.
Here's exactly what to do:

On Mac:

  1. Open Finder

  2. Press Cmd + Shift + G (this opens "Go to Folder")

  3. Paste this path: ~/Library/Application Support/Claude/

  4. Look for a file called claude_desktop_config.json. If it doesn't exist, create a new text file with that exact name.

  5. Open it in TextEdit (or any text editor) and paste this:

{
  "mcpServers": {
    "mmm-builder": {
      "command": "uvx",
      "args": ["mcp-server-mmm-builder"]
    }
  }
}
  6. Save the file and quit Claude Desktop completely (right-click the dock icon → Quit), then reopen it.

On Windows:

  1. Press Win + R, type %APPDATA%\Claude\ and hit Enter

  2. Same process — find or create claude_desktop_config.json, paste the same JSON above, save, and restart Claude Desktop.

Important: If you already have other tools connected in this file, you're adding the "mmm-builder" block inside the existing "mcpServers" section, not replacing the whole file. If you're unsure, ask Claude to help you merge the config — just paste your existing file contents into the chat.
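For instance, if your config already contains another server (shown here as a placeholder entry called "your-existing-tool"; its command and args are purely illustrative), the merged file would look something like this:

```json
{
  "mcpServers": {
    "your-existing-tool": {
      "command": "placeholder-command",
      "args": ["placeholder-arg"]
    },
    "mmm-builder": {
      "command": "uvx",
      "args": ["mcp-server-mmm-builder"]
    }
  }
}
```

The key detail: both entries live inside the single `"mcpServers"` object, separated by a comma.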

4. Verify it works

After restarting Claude Desktop, you should see a small hammer icon (🔨) near the text input. Click it — you should see tools like assess_mmm_readiness, generate_sample_data, and set_channel_roi_belief in the list. If they're there, you're done.

If they're not showing up, the most common issues are: a typo in the JSON (missing comma or bracket), Python not being installed, or Claude Desktop not fully restarted. Try quitting and reopening it once more.

For developers: the fast path

If you're comfortable with a terminal, the whole setup is one command:

claude mcp add mmm-builder --

Or pip install mcp-server-mmm-builder and configure your client to use mcp-server-mmm-builder as the command.

After setup: everything is conversation

Once connected, everything below happens through a normal chat with Claude. You type questions in plain English, Claude calls the builder tools behind the scenes, and you get answers. No terminal, no code, no file paths. Just talk.


Step 1: Find Out If You're Actually Ready

Before spending a dollar on MMM, you need to know whether your data can support one. Most companies skip this step and discover the gaps months into an engagement. The MMM Builder's readiness assessment takes your business inputs and returns a score with specific gaps.

Here's what it looks like for a DTC ecommerce brand spending across 5 channels with 18 months of weekly data:

> "I have 5 channels, 18 months of weekly data, revenue as my KPI,
   spend broken out by channel, and one top-of-funnel channel.
   Am I ready for MMM?"

Readiness Score: 87 / 100
Status: READY 

Recommendations:
  • You have top-of-funnel activity. MMM will help you
    understand its true impact, which click-based
    attribution typically undervalues.
  • You can build a strong national-level MMM, especially
    useful for understanding holistic channel performance.
    If geo-level data becomes available later, it can
    further improve precision.

An 87 means you're in good shape. But not everyone is. Here's the same assessment for an earlier-stage company — 2 channels, 6 months of monthly data, no channel-level spend breakout:

> "I have 2 channels, 6 months of monthly data, revenue KPI,
   but I don't have spend broken out by channel."

Readiness Score: 0 / 100
Status: BLOCKED 

Blockers:
  • No channel-level spend data. MMM fundamentally models
    the relationship between spend and outcomes. Without
    knowing how much was spent per channel per time period,
    the model cannot be built.

Recommendation: Address this blocker first. In the
meantime, consider incrementality testing (geo-lift,
conversion lift) to validate individual channel
effectiveness.

That's a hard zero — and it's an honest one. The tool doesn't try to upsell you into something your data can't support. It tells you what's missing and what to do instead.

The readiness check evaluates: number of channels, data length, granularity (daily/weekly/monthly), geographic variation, whether you have a clear KPI, whether spend is broken out, and whether you're running top-of-funnel channels. It's the checklist nobody else gives you. If you want a deeper understanding of what incremental measurement actually means before diving in, that context will make these readiness criteria more intuitive.


Step 2: Understand What MMM Would Actually Be Worth

"You should do MMM" is easy to say. "Here's what it would save you" is more useful. The value estimator takes your spend, channels, and current optimization method, and quantifies the gap:

> "We spend $200K/month across 5 channels, revenue is $800K/month,
   and we currently optimize using platform-reported ROAS."

Current state:
  Annual media spend:           $2,400,000
  Optimization method:          Platform-reported ROAS
  Estimated misallocated spend: $528,000 (22%)

MMM impact:
  Potential annual savings:     $275,616 (11.5% of spend)
  Estimated ROAS improvement:   9.9%
  Additional annual revenue:    $950,400
  Payback period:               1-3 months

What you'd learn:
  Which channels are actually driving results
    (not just what platforms report)
  How much spend is wasted on saturated channels
  The optimal budget allocation across channels
  How long each channel's effect lasts
  Where you have room to spend more vs. diminishing returns
  True ROI per channel with confidence intervals

That 22% misallocation estimate isn't made up — it's based on research showing that platform-reported ROAS systematically over-credits lower-funnel channels (search, retargeting) and under-credits upper-funnel channels (video, podcast, display). MMM corrects this because it measures incrementality from the outcome data, not from platform pixels. If you've ever wondered why your multi-touch attribution might be misleading you, this is the core reason.

If you're spending $200K/month and optimizing on platform dashboards, you're likely making ~$528K/year in suboptimal allocation decisions. The model pays for itself in the first budget cycle.
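The arithmetic behind those figures is worth seeing explicitly. Using the inputs and the tool's reported 22% misallocation benchmark and 9.9% ROAS lift:

```python
monthly_spend = 200_000
monthly_revenue = 800_000

annual_spend = 12 * monthly_spend             # $2,400,000 annual media spend
misallocated = 0.22 * annual_spend            # $528,000 at the 22% benchmark
extra_revenue = 0.099 * 12 * monthly_revenue  # $950,400 from a 9.9% ROAS lift

print(f"${annual_spend:,}  ${round(misallocated):,}  ${round(extra_revenue):,}")
```

Every headline number in the estimate is a simple multiple of your own inputs, which makes it easy to rerun with your real spend.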


Step 3: Bring Your Own Data (Or Generate Sample Data to Practice)

Use your own data…

The builder works with any CSV that has the basics: a time column, a geography column (even if it's just "national"), your KPI, and spend/impression columns per channel. Point Claude at your file and it validates the structure automatically:

> "Here's my data: /path/to/marketing_data.csv — check if it's
   ready for MMM"

Schema validation:
  Time column detected: 'week' (weekly granularity)
  Geo column detected: 'state' (42 unique regions)
  KPI column detected: 'revenue'
  Spend columns: paid_search_spend, paid_social_spend,
    display_spend, email_spend, affiliate_spend
  Impression columns: paid_search_impressions, ...
  Control variables: avg_price, promo_flag

  No issues found. Ready for data quality diagnostics.

If something's off — missing columns, wrong granularity, unnamed fields — the validator tells you exactly what to fix. Once the schema checks out, you can run a deeper quality diagnostic that checks for missing values, time gaps, outliers, low variance, and spend concentration. This is the kind of data QA that usually takes a data scientist half a day; the builder does it in seconds.
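If you want a rough pre-flight of your own before handing the file to the tool, a minimal pandas sketch covers the basics. This is not the builder's validator; the required column names here are assumptions you should match to your own file:

```python
import pandas as pd

REQUIRED = {"week", "geo", "revenue"}  # hypothetical minimal schema

def quick_schema_check(df: pd.DataFrame) -> list:
    """Rough pre-flight check: schema, spend columns, missing values, time gaps."""
    issues = []
    missing = REQUIRED - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    if not any(c.endswith("_spend") for c in df.columns):
        issues.append("no channel-level spend columns (*_spend)")
    if df.isna().any().any():
        issues.append("missing values present")
    # uneven gaps between consecutive periods within a geo suggest missing weeks
    for geo, g in df.groupby("geo"):
        if pd.to_datetime(g["week"]).sort_values().diff().dropna().nunique() > 1:
            issues.append(f"time gaps in geo {geo}")
    return issues

df = pd.DataFrame({
    "week": ["2024-01-01", "2024-01-08", "2024-01-15"],
    "geo": ["national"] * 3,
    "revenue": [100_000, 120_000, 95_000],
    "paid_search_spend": [10_000, 12_000, 9_000],
})
print(quick_schema_check(df))  # [] means the basics are in place
```

An empty list doesn't mean the data is model-ready, only that the structural prerequisites are there; the builder's diagnostics go much deeper (outliers, variance, spend concentration).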

Most companies have this data spread across platform dashboards (Google Ads, Meta, etc.) and internal reporting. The hardest part is usually consolidating it into one CSV with a consistent time grain. If you've done that, you're ready.

…Or generate sample data to practice first

If you want to build intuition before using real data, the builder generates realistic sample datasets matched to your industry:

> "Generate sample data for a DTC ecommerce brand"

Generated: sample_mmm_data_ecommerce.csv
  Rows:      1,560 (15 geos × 104 weeks)
  Channels:  paid_search, paid_social, display, email, affiliate
  KPI:       revenue
  Controls:  avg_price, promo_flag, holiday_flag

  True ROI values (hidden from model, used to verify):
    Email:        6.0x
    Paid Search:  4.5x
    Affiliate:    3.5x
    Paid Social:  2.8x
    Display:      1.2x

Five industry templates are available:

Sector                  KPI               Channels
─────────────────────────────────────────────────────────────────────────────
Ecommerce / DTC         Revenue           Search, Social, Display, Email, Affiliate
Fintech                 Signups           Search, Social, YouTube, Podcast, Display
Consumer Healthtech     Appointments      Search, Social, Connected TV, Display
Consumer SaaS           Trials            Search, Social, YouTube, Content Syndication, Display
Consumer Subscription   New Subscribers   Social, Search, CTV, Influencer, Display, Direct Mail

The sample data has known "true" ROI values baked in, so after you fit the model, you can check how close it got. This is a powerful way to learn: fit the model, compare its estimates to the known values, and see where it's accurate and where it struggles. That builds real intuition for what MMM can and can't detect — before your budget depends on it.


Step 4: Tell the Model What You Know (In Your Own Words)

This is where most MMM tutorials lose people. In a Bayesian model, you need to set "priors" — your starting beliefs about how each channel performs. Traditionally, this means writing something like LogNormal(mu=1.504, sigma=0.3). That's fine if you're a statistician. It's meaningless if you're a marketing director.

The MMM Builder lets you express priors in three ways, depending on how much you know.

Option A: You know your ROI expectations

If you have a sense of what each channel returns, just say so:

> "I think paid search returns about $4.50 per dollar spent,
   and I'm fairly confident."

Prior set for paid_search:
  Expected ROI:      $4.50
  80% credible range: $3.06 – $6.61
  Translation:       LogNormal(mu=1.504, sigma=0.300)

  The model will start with this belief and update it
  based on the data. Your high confidence means it will
  take stronger evidence to move away from $4.50.

Notice what happened: you said "about $4.50, fairly confident" and the tool translated that into a proper LogNormal distribution with the right parameters. You never had to think about mu or sigma.
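The translation itself is simple math. Here is a sketch of what the tool likely does under the hood, assuming it treats your $4.50 as the distribution's median and maps "fairly confident" to σ = 0.3 (both assumptions are consistent with the output above):

```python
import math

expected_roi = 4.50  # "about $4.50 per dollar"
sigma = 0.30         # "fairly confident" mapped to a tight spread (assumed)
z80 = 1.2816         # z-score bounding the central 80% of a normal

mu = math.log(expected_roi)      # 1.504 → LogNormal(mu=1.504, sigma=0.3)
lo = math.exp(mu - z80 * sigma)  # lower end of the 80% range, ≈ $3.06
hi = math.exp(mu + z80 * sigma)  # upper end of the 80% range, ≈ $6.61

print(f"LogNormal(mu={mu:.3f}, sigma={sigma})  80% range [${lo:.2f}, ${hi:.2f}]")
```

Run it and you recover exactly the numbers the tool reported, which is a good sanity check that "fairly confident" really does mean a σ of about 0.3 here.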

Option B: You just know the ranking

Don't know specific ROI numbers? Just rank your channels from best to worst:

> "Rank my channels: email is best, then search, affiliate,
   social, and display is worst. Moderate spread."

Channel         Expected ROI   80% Range
────────────────────────────────────────
Email           $3.00          [$0.95, $9.51]
Paid Search     $1.95          [$0.62, $6.18]
Affiliate       $1.27          [$0.40, $4.02]
Paid Social     $0.82          [$0.26, $2.61]
Display         $0.54          [$0.17, $1.70]

The tool assigns decreasing ROI priors based on your ordering, with the "spread" controlling how different they are. The wide credible intervals (80% ranges) mean the model has room to learn from the data — your ranking is a starting point, not a constraint.
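One plausible reading of that table: each rank gets a fixed fraction of the ROI above it, with a wide σ because a ranking carries much less information than a dollar estimate. The specific numbers below (top ROI 3.0, decay 0.65, σ = 0.9) are inferred from the output, not documented behavior:

```python
import math

channels = ["email", "paid_search", "affiliate", "paid_social", "display"]
top_roi = 3.0  # anchor ROI for the best-ranked channel (inferred)
decay = 0.65   # "moderate spread": each rank gets ~65% of the one above (inferred)
sigma = 0.9    # wide uncertainty, since ranking alone is weak information
z80 = 1.2816   # z-score for the central 80% interval

priors = {}
for rank, ch in enumerate(channels):
    median = top_roi * decay ** rank
    priors[ch] = (median,
                  median * math.exp(-z80 * sigma),
                  median * math.exp(z80 * sigma))

for ch, (med, lo, hi) in priors.items():
    print(f"{ch:12s} ${med:.2f}  [${lo:.2f}, ${hi:.2f}]")
```

Reproducing the table this way makes the key point concrete: the medians encode your ordering, while the wide intervals leave the data free to overturn it.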

Option C: You think in terms of how ads work

You can also set beliefs about how channels behave rather than how much they return:

> "Social media ads — the effect lasts a few days, no delay."

Prior set for paid_social:
  Effect duration: days
  Peak delay:      none (immediate)
  Adstock range:   α ∈ [0.1, 0.5]

  This means: the social ad's impact starts right away
  and mostly fades within a few days.

Compare that to what you might say about podcast sponsorships: "the effect lasts months, with a slight delay." Same interface, dramatically different priors — and you never touched a distribution parameter.
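Meridian's actual carryover transform is more elaborate, but the intuition behind that α range can be sketched with a simple geometric adstock, a common simplification in MMM writing (the function here is illustrative, not the tool's implementation):

```python
def geometric_adstock(spend, alpha):
    """Each period carries over a fraction `alpha` of the accumulated effect."""
    carried, out = 0.0, []
    for x in spend:
        carried = x + alpha * carried
        out.append(carried)
    return out

# A one-time spend pulse at alpha = 0.5 (the top of the "few days" range):
pulse = [100, 0, 0, 0]
print(geometric_adstock(pulse, 0.5))  # [100.0, 50.0, 25.0, 12.5]
```

At α = 0.5 the effect halves each period; at α near 0.1 it is essentially gone after one. A months-long podcast carryover would sit at much higher α, which is exactly the difference your plain-English description encodes.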

You can also think in CPA

If your KPI isn't revenue, ROI might feel backwards. The builder lets you set priors using cost-per-acquisition instead:

> "Search costs about $25 per conversion, and each conversion
   is worth about $100."

  Implied ROI: $100 / $25 = 4.0x
  Prior set accordingly.


Step 5: Check If Your Model Can Run Locally

Before fitting, it's worth knowing what you're getting into:

> "Can I run a model with 5 channels, 15 geos, 104 weeks locally?"

Model complexity:
  Parameters:       55
  Data points:      1,560
  Estimated time:   5-15 minutes
  Memory required:  ~2 GB
  Local execution:  Feasible

Recommendation: This model can run locally

If your model is too large (many geos, many channels, reach-frequency data), the builder will tell you and offer a cloud export option instead.


Step 6: Fit the Model and See What It Finds

With data loaded and priors set, fitting is one command:

> "Fit the model — 4 chains, 500 samples each."

Fitting MMM with MCMC...
  Chains: 4 | Samples: 500 | Warmup: 500
  Estimated time: 5-15 minutes

[After fitting]

> "How did convergence look?"

Convergence diagnostics:
  Divergences: 0 
  R-hat: all parameters < 1.05 
  Effective sample size: adequate 

  Model converged successfully

If convergence fails — divergences, high R-hat, low effective sample size — the diagnostics tool tells you exactly what went wrong and what to try next. No staring at trace plots wondering what you're looking at.

Once fitted, the model produces the same outputs we walk through in our operationalization playbook: channel ROI with credible intervals, contribution breakdowns, saturation curves, adstock parameters, marginal ROI, and budget reallocation simulations. The difference is you just built this model yourself, from scratch, in a single conversation.


What You Don't Need (That You Probably Thought You Did)

One of the biggest barriers to MMM adoption is the assumption about what's required. Let's clear some of those up.

You don't need a data science team to get started. The MMM Builder handles the Bayesian modeling pipeline. You need someone who understands your marketing spend data and can answer questions like "which channel do you think performs best?" That's a marketer, not a statistician.

You don't need geo-level data. It helps — geographic variation gives the model natural experiments to learn from. But a national-level model with 18+ months of weekly data can still produce useful ROI estimates and saturation insights, especially for understanding top-of-funnel channels that click-based attribution misses entirely.

You don't need perfect data. The builder includes data validation and quality diagnostics that flag issues like missing values, time gaps, outliers, and low variance. It tells you what to fix and what's acceptable. Most real-world marketing data is messy — the model is designed for that.

You don't need to understand Bayesian statistics. The prior-setting interface translates your business knowledge into proper distributions. "Email is our best channel, search is second, I'm moderately confident" is a valid input. The math happens behind the curtain.


What You Do Need (That Nobody Talks About)

Building a first model is the easy part. The hard part — and the part most MMM content conveniently skips — is everything that comes after. A model that gets built once and never updated is only marginally better than no model at all. Here's what the full picture actually requires.

You need channel-level spend data. This is the one non-negotiable for building the model itself. If you can't break your spend out by channel and time period, MMM can't work. Platform dashboards usually have this. If you don't have it consolidated, that's your first step.

You need a plan for ongoing model refreshes. Marketing conditions change. New channels get added. Seasonality shifts. A model trained on last year's data loses accuracy over time. Production MMMs need to be retrained regularly — weekly or biweekly in fast-moving environments — with fresh data flowing in automatically. That means a data pipeline, not a one-time CSV upload. This is exactly why breaking free from single-channel dependency requires both MMM and incrementality testing working together continuously.

You need model validation, not just model fitting. A model that converges isn't necessarily a model you should trust. Production-grade MMM requires out-of-sample testing (does the model predict held-out weeks accurately?), prior-posterior diagnostics (is the data actually informing the estimates, or are you just getting your priors back?), and sensitivity analysis (do the results change dramatically with small changes to assumptions?). The MMM Builder gives you convergence diagnostics and prior-posterior comparisons to start. Going deeper requires deliberate experimentation.

You need a way to operationalize the outputs. A fitted model is useful. A fitted model that your marketing team can query on demand — checking ROI, simulating budget shifts, monitoring saturation — is transformative. We cover this in detail in How to Operationalize Your MMM using the companion mcp-server-meridian tool. Building the model and making it usable are two different problems, and both need solving.

You need the model to get smarter over time. The most valuable MMMs aren't static — they learn. As new data arrives, as campaigns launch and conclude, as channels scale up or down, the model should incorporate that information and update its estimates. This means automated retraining pipelines, drift detection (is the model's accuracy degrading?), and feedback loops where actual business outcomes get compared against model predictions. This is where MMM crosses from analytics project into production ML system.

You need someone to interpret the edge cases. What does it mean when a channel's prior-posterior overlap is 95%? When marginal ROI is negative but average ROI is positive? When the model says TV is your best channel but your business hasn't run TV in three months? These situations require judgment — understanding where the model is confident, where it's uncertain, and where the data simply doesn't support a conclusion. That's a skill that comes from experience running MMMs across many businesses and verticals.

The MMM Builder gives you a genuine starting point. You can assess readiness, build a first model, and see what it tells you about your business. But the gap between a first model and a decision-grade measurement system is real. It involves data engineering, model ops, ongoing validation, and interpretive expertise. BlueAlpha exists specifically to close that gap — handling the deployment, maintenance, automated retraining, and strategic interpretation so the model stays accurate and actionable over time. Companies like Pettable saved $2.12M in annualized spend by pairing always-on MMM with incrementality tests that validated what the model found. Klover unlocked millions in 30 days using the same measurement framework. Whether you build that capability in-house or work with a partner, knowing it's needed is the first step.


The Full Pipeline at a Glance

Here's the entire journey from "am I ready?" to fitted model, in one conversation:

Step               What you do                              What the tool does
──────────────────────────────────────────────────────────────────────────────────
Readiness          Answer 5-6 questions about your data     Returns a 0-100 score with specific gaps
Value estimate     Share your spend and current method      Quantifies savings and revenue impact
Sample data        Pick your industry                       Generates a realistic dataset to practice on
Data validation    Point at your CSV                        Checks schema, quality, gaps, outliers
Set priors         Describe your beliefs in plain English   Translates to Bayesian distributions
Complexity check   Confirm your setup                       Estimates fit time and memory
Fit                Say "fit the model"                      Runs MCMC, returns diagnostics
Explore            Ask questions                            ROI, contribution, saturation, adstock, scenarios

The entire pipeline runs through Claude Desktop or Claude Code. Install with pip install mcp-server-mmm-builder, point it at your data (or generate sample data), and start talking.


When MMM Is the Wrong Tool

We'd rather be honest than sell you on something that won't work. MMM is the wrong approach when:

You have fewer than 12 months of data. The model needs enough time-series variation to separate channel effects from seasonality and trend. Six months isn't enough — you'll get priors back, not insights.

You only run one channel. MMM measures the relative and absolute contribution of multiple channels. If all your spend is in one place, there's nothing to decompose. Use incrementality testing (geo-lift or conversion lift) instead.

Your spend doesn't vary. If you spend exactly $10K/week on every channel with no variation, the model can't distinguish signal from noise. It needs natural experiments — weeks where you spent more on search and less on social, or vice versa. Fortunately, most real marketing budgets fluctuate plenty.

You need real-time optimization. MMM is a planning and allocation tool, not a bid optimizer. It tells you where to shift budget next quarter, not how to adjust bids this afternoon. For real-time, you need platform algorithms or multi-touch attribution.


What Happens After Your First Model

The MMM Builder gives you a starting point — your first look at what the model sees in your data. From here, the path depends on what you found.

If the results are directionally useful, you've just proven the concept for your business. The next step is operationalization: making the model queryable by your team (see our companion playbook on that), setting up a refresh cadence so the model stays current, and building validation into the process so you know when to trust the outputs and when to dig deeper.

If the readiness check reveals gaps, you now know exactly what to fix. Get spend data broken out by channel. Accumulate a few more months of history. Start varying your spend across geos. Come back when the score is above 70.

If the model shows something surprising — a channel you thought was a star is saturated, or a channel you neglected has high marginal ROI — that's the model working. Those surprises are the whole point. They're the budget decisions you weren't making because the data wasn't accessible.

If you want to take it to production, you're looking at automated data pipelines, regular retraining, drift monitoring, and strategic interpretation. That's a meaningful investment — in tooling, in process, or in a partner who specializes in it. But you'll be making that decision with a concrete understanding of what MMM can do for your specific business, not based on a slide deck or a sales pitch.


Frequently Asked Questions

What is Marketing Mix Modeling? Marketing Mix Modeling (MMM) is a Bayesian statistical technique that measures how much each marketing channel contributes to a business outcome like revenue or conversions. Unlike click-based attribution, it accounts for carry-over effects (adstock), diminishing returns (saturation), and works without user-level tracking — making it privacy-safe and immune to cookie deprecation. For a full primer, see our article: What Is Media Mix Modeling (MMM)?

How much data do I need for MMM? At minimum, 12 months of weekly data with spend broken out by channel. Ideally, 24+ months with geographic variation (state or DMA level). The MMM Builder's readiness assessment will tell you exactly where you stand.

Do I need to know Python or statistics to build an MMM? No. The MMM Builder MCP lets you set Bayesian priors in plain English ("search returns about $4 per dollar, I'm fairly confident"), generate industry-matched sample data, and fit a Google Meridian model — all through a conversation with Claude. The technical translation happens automatically.

How do I install the MMM Builder? Run pip install mcp-server-mmm-builder and configure it in Claude Desktop or Claude Code. See the setup section for full instructions.

How long does it take to fit a model? A typical model (5 channels, 15 geos, 104 weeks) fits in 5-15 minutes locally. Larger models with many geos or reach-frequency data may take longer and can be exported to cloud execution.

What's the difference between the MMM Builder and the Meridian MCP server? The MMM Builder helps you build a model — assess readiness, prepare data, set priors, and fit. The Meridian MCP server helps you query a fitted model — ROI, contribution, saturation, budget simulations. They're complementary: build with one, operationalize with the other.

Can I use my own data, or only sample data? Both. The builder works with any CSV that has a time column, geography column, KPI, and channel-level spend. Point Claude at your file, and it validates the schema, runs data quality diagnostics, and tells you exactly what (if anything) needs fixing. Sample data is there for practice and learning — your own data is where the real value is.

What's the difference between building a model and running one in production? Building a first model is a one-time exercise — you fit it, explore the outputs, and learn what MMM tells you about your business. Running one in production means automated data pipelines, regular retraining (weekly or biweekly), drift monitoring, out-of-sample validation, and ongoing interpretation. The MMM Builder handles the first part. The second part requires either in-house data engineering or a partner who specializes in production MMM.

Is this a replacement for a data science team? For building a first model — largely yes. For running MMM as a production measurement system — no. The builder gets you a real model you can learn from and use to make better near-term decisions. But ongoing model maintenance, automated retraining, validation, and strategic interpretation require sustained investment. The builder shows you what's possible; the decision of how to scale it is yours.

What if my readiness score is low? The assessment tells you exactly what's missing and what to do about it. Common gaps: not enough historical data (solution: wait and accumulate), no channel-level spend breakout (solution: export from ad platforms), monthly granularity (solution: switch to weekly reporting). Some gaps, like lacking any spend data at all, are true blockers — the tool will be direct about that.


Ready to see what production-grade MMM looks like? Book a demo and we'll show you how always-on measurement turns into weekly budget decisions.

PLAYBOOK

Get this playbook as a PDF


Your marketing is capable of more.
Get on BlueAlpha. Make it happen.
